Very nice! This reflection pushes past both utopian optimism and doomsday reductionism by grounding the debate in the deeper ontological and embodied realities of consciousness, something not always present in related conversations. What emerges here is not merely a critique of AI hype, but a defense of the irreducibility of human personhood… the idea that conscious agency is not an emergent property of complexity but a gifted reality, inseparable from our biological, historical, and even spiritual embeddedness.
The distinction drawn between computational inference and relevance realization strikes at the very heart of what makes us human. Intelligence, as you note, is not simply the manipulation of information but a kind of incarnate attunement, an echo of Whitehead’s “lure for feeling”, perhaps even a distant cousin to what theologians have called the imago Dei. Our awareness is always more than awareness-of; it is relational, porous, and mysteriously lit from within. AI does not feel, and it does not fear, and thus it does not mean.
The worry, then, is not that machines will become persons, but that persons will begin to think of themselves as machines. And in doing so, we risk a kind of spiritual auto-mutilation: flattening the soul to match the logic of the circuit board. William Irwin Thompson’s insight about the counterfeit currency of mechanistic consciousness rings especially true: the danger is not just philosophical error, but cultural and moral erosion. Once we accept a false anthropology, we quietly authorize systems that reduce our children to inputs, our communities to data streams, and our bodies to programmable material.
This essay calls us back… not to technophobia, but to reverence. It suggests, rightly, that how we see ourselves will shape the systems we build, and that to preserve human dignity in the age of AI, we must first remember what it means to be human. That remembering, I believe, is not just philosophical. It is spiritual.
Thanks, I enjoyed this. I hadn't thought about evolution at all in this context, but it makes so much sense. Also, I assume you know about the "first AI" at Dartmouth, and that it was an attempt to engage with Whitehead and Russell's Principia? https://en.wikipedia.org/wiki/Logic_Theorist
Excellent, clear and comprehensive 🙏
Really resonated with the questions around how children will grow up in all this. I'm sitting with that tension as I watch my toddler grow: consciously trying to limit her screen time, keeping her in bugs and dirt and forests and oceans, in conversations with elders as we walk down the street, and in the running and laughing with her peers, pulling her away to other distractions when those peers' parents stick a phone or tablet in her hand. Yet I know she is growing up in a new age, where being a tech/AI native will likely be vital to her success and survival in a fast-changing world. I'm trying not to hold her back based on my 'old-fashioned' beliefs and values, while trying desperately not to let her miss out on these timeless values and experiences. No answers, just questions.
“How are we being transformed by our interaction with these machines? If we allow the hype to convince us that a computer could become conscious, the danger is that we inadvertently diminish human beings to the status of machines.” 👍🏻
A "fun" exercise is to meld Matt's philosophical treatise with this discussion. My mind tends to short-circuit after only about 5 minutes 🥴
https://youtu.be/JMYQmGfTltY?si=zzdZkVB9HY5zRmr2
gosh I guess I need to watch this (or at least read ChatGPT's summary ; ) )
I was delighted to read this essay! Thanks for the reference to Jaeger; I was not aware of that publication. Since we seem to have a very large overlap in how we parse this question, I would like to pose some questions that have proven very difficult for me, and perhaps you could shed some light if you have also pondered them. First, do you see systems thinking, and in particular complex adaptive systems, as part of a relationship-based science as advocated by Goethe? I know you have interacted with Michael Levin, who seems to be taking a pragmatist approach in his research, but what little I have heard from him about how he understands the question of consciousness leaves me with the impression that his grasp of philosophical questions is very naive. This seems to be pandemic in the scientific community; both Richard Feynman and Paul Dirac were openly hostile to philosophy. Second, I have struggled with C. P. Snow's "two cultures" and how that divide plays out in the philosophy of science. Is this an issue you have difficulties with as well? Finally, if, as my first question suggested, relationships are to be an epistemological foundation for the philosophy of science, and more generally for philosophy itself, how do you see graph convolutional networks being a source of difficulty? The old AI was based on Bayesian or bag-of-words statistics, but graph convolutional networks involve higher-order statistics in their computations. First-order methods only look at the correlation matrix; correlation tensors have not been used to my knowledge, but using graphs does an end run around that computational complexity. Social network theory, after all, is based on graph theory.
This adds a level of complexity to the artificial general intelligence debate, because once a sufficiently large database of human-AI interactions is compiled, especially with body-language cues, this issue will reach a level of complexity that will be very difficult to fight against. LLM toys are now being marketed to children with the promise that they can be tutors and childcare options. This totally freaks me out! I try to convince anyone I speak to who has children, or knows someone who does, that this is a very bad idea!
Hi Charles, thanks for these questions.
I think systems science is on the right track but remains mostly mired in a mechanistic imaginative background, and so is really nothing like Goethean science. Complex systems theory works with computer modeling for the most part, which, while very interesting and often instructive, only moves it further from the phenomenological method Goethe cultivated.
Philosophy and science need to be in intimate dialogue with one another. There is definitely a shortage of scientifically literate philosophers, and of philosophically literate scientists. The integration is crucial but difficult. I find Mike Levin to be very open and interested in dialoguing with philosophers, so that is a good sign.
I am by no means an expert on the technical details of various neural network and machine learning techniques, but I have no doubt graph convolutional networks and future developments will make AI systems far more convincing. But mimicking human conscious agency is not the same thing as realizing it.
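For readers curious what the "higher-order" point in the question above looks like concretely, here is a minimal numpy sketch, purely illustrative (the toy graph, features, and weights are all invented here, not taken from either comment), contrasting a first-order feature-correlation matrix with one step of graph-convolutional propagation, relu(Â X W), using the symmetric normalization popularized by Kipf and Welling:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 3))        # 4 nodes, 3 features each (toy data)

# "First order" view: a plain correlation matrix over the features.
corr = np.corrcoef(X, rowvar=False)    # 3 x 3 pairwise feature correlations

# Graph view: a small undirected adjacency, self-loops added,
# then symmetrically normalized: Â = D^{-1/2} (A + I) D^{-1/2}.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
A_hat = A + np.eye(4)
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt

W = rng.standard_normal((3, 2))        # weight matrix (random, untrained)
H = np.maximum(0, A_norm @ X @ W)      # one GCN layer: relu(Â X W)

print(corr.shape, H.shape)             # (3, 3) (4, 2)
```

The correlation matrix registers only pairwise feature statistics, while the graph step mixes each node's features with its neighbors' before the linear map, which is the sense in which graph methods sidestep explicitly forming higher-order correlation tensors.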
Do you see anyone doing serious work on the philosophical foundations of the systems approach that would align with Goethe? It seems like there must be some means of communicating new knowledge other than the direct-transmission approach of Buddhist teachers.
https://philpeople.org/profiles/christoph-j-hueck
Check out Christoph Hueck’s work. I’m participating in a conference he is hosting in Tübingen in July on philosophy of biology.
Thanks! He looks very interesting! I have a reading assignment. Again, thanks for the pointer.
The generative AI breakthrough truly was a debate pusher. I think at some point we will realise we don't have precise definitions for what we are trying to understand. Consciousness is one of those terms.
And is consciousness measured only by how it aligns with the human experience of consciousness?
Probably yes, for that is the limit of how mind can study mind. Objective consciousness is hard even to define, as the human experience will naturally be the basis of any such definition.
It could be true, but I don't really know if we are that limited. We are beginning to understand animal intelligence, plant intelligence, the intelligence of trees communicating through complex root systems. There are many out there challenging the human-centric, human-superiority worldview. Perhaps we couldn't fully understand something unlike what we know ourselves, but we could honour its complexity and hold ourselves in a space of curiosity, wonder, and openness to its possibility.
Good points, certainly a more inspiring perspective.
Does Jaeger use the term "transjective" differently from John Vervaeke?
See also Dr. Pim van Lommel's work on non-local consciousness, based on his decades of studying NDEs.
It's becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult-human-level consciousness? My bet is on the late Gerald Edelman's extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.
What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.
I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.
My advice to people who want to create a conscious machine is to ground themselves seriously in the extended TNGS and the Darwin automata first, and proceed from there, possibly by applying to Jeff Krichmar's lab at UC Irvine. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461, and here is a video of Jeff Krichmar talking about some of the Darwin automata: https://www.youtube.com/watch?v=J7Uh9phc1Ow
"What are these technologies for? Whose interests do they serve? And perhaps most importantly, how are they changing our sense of our own conscious human agency?"
Yes, these are really the questions we should be asking with all the hype about Machine AGI.
We find humanity today constantly manipulated by soulless algorithms for the sole purpose of profit-driven engagement, with social media and other attention-stealing machines (cell phones and always-connected devices), to the point that we are living daily more mediated, less human existences than our parents' generation did. The signs of the detrimental influence of these dopamine machines are clear to see, as more and more authors, commentators, and whistleblowers reveal that all of this is very intentional, sophisticated manipulation of kids and adults by a profit-driven industry.
What we need is rational mitigation, with sensible regulation of these industries and more transparency about how the industry is doing everything, fair and foul, to keep us addicted and distracted, to the detriment of our mental health and our ability to concentrate, study, and get the important work of human living and accomplishment done.
We are the only mammals who can think about our own mortality and think abstractly about ourselves, considering our existence from a third-person perspective, outside of ourselves. Consciousness (AKA "qualia") is an amazing philosophical concept that every major philosopher of note seems to have wrestled with, recently D. Dennett (RIP) in "Consciousness Explained" and D. Chalmers on "the hard problem of consciousness." So my take is that "machine consciousness" is, and will be, quantitatively and qualitatively different and distinct from human qualia.
My philosophy professors at my California junior college (COC) seem very allergic to even discussing the possibility that machines may soon have some form of consciousness. It will not be human qualia but something distinct, something different. "Un-qualia?" lrgallardo@my.canyons.edu
Professor Segal, are you available for tutoring? I am struggling with a couple of the modern philosophers in my Modern Philosophy 111 class, Spinoza and Leibniz, and I am willing to pay for additional tutoring. Thank you. Finals are coming the first week of June. Freshman philosophy major here.