I sat down with my friend Kent Bye earlier today to discuss the intensifying entanglement of human consciousness with machine intelligences. He read my recent chapter on the philosophical implications of AI and asked some great questions that elicited fresh thoughts. The podcast should be posted soon, but for now below is a preview of some of what we discussed (based on my own extensive editing of the transcript).
I do process philosophy, which is to say I attempt to apply process philosophy across as many different disciplines in the sciences—natural sciences and social sciences—as I can. But primarily, I would say I study consciousness and its place in the evolutionary history of the universe. That's my orienting frame for all of the work that I do.
I started as an undergrad studying cognitive science at the University of Central Florida, where I studied with people like Shaun Gallagher, who, if you’re in cognitive science, you’ll recognize as a contributor to the embodied and enactive point of view (he’s since relocated to the University of Memphis). Very early on, I was exposed to the relevance of embodied phenomenology for the study of cognition and consciousness, an approach still in contrast to the mainstream in cognitive science, which remains predominantly computationalist. This mainstream approach more or less imagines the mind as the software of the brain, which is the hardware. The computational metaphor—and it is a metaphor, a hypothetical model rather than an empirical finding—drives so much of the research in the cognitive sciences to this day.
When I graduated with my undergrad in 2007, I knew I wanted to study consciousness, and it was just beginning to become a legitimate academic subject of inquiry. The Journal of Consciousness Studies had launched in the mid-90s. Chalmers had framed the hard problem in 1995, but it took a little while for that to really start to reshape the field. I ended up finding this school out in San Francisco, CIIS, where I got my PhD in a department called Philosophy, Cosmology, and Consciousness, writing my dissertation on the role of imagination in the philosophy of nature.
Imagination as Cosmic Connection
In my dissertation (revised and published as Crossing the Threshold), I tried to take imagination seriously, not just as a fantasy engine but as a creative medium that connects us as human knowers to the very same cosmological powers that gave rise to our species, and that give rise to stars and all organisms. Imagination thus becomes a way of knowing with ontological grounding to it, because we—body and soul—are ourselves natural creatures. We evolved, and our inner experience is as much a part of this universe as anything else. But actually, rather than think of imagination as just inner, I tried to argue that it’s a portal that opens out onto the cosmos, that bottoms out into being and becoming as such. We can experience the creative process directly; we can feel and join with the formative forces that give rise to the universe within our own imaginations.
If we can begin to cultivate imagination as an organ of perception, we can do science in a different way, rather than just creating abstract models and the technological means of testing those models, which has been very productive for science for a few hundred years. My point is not to stop doing that kind of instrumental science, but that there's another way of knowing that might put us in more intimate contact with the world in its concreteness, rather than just modeling it abstractly. We might be able to feel the interiority of the world directly. And even if that doesn't change scientific practice—I think science will always be in the business of model making—it might change how we interpret scientific findings. It would certainly change how we understand these machine learning technologies and digital computation: not as something that could or should be ontologized. Or at least, if we're going to try to ontologize information, I think we need to proceed very carefully and not lose sight of how the metaphors migrate from helpful model to the sort of misplaced concreteness that would lead us to say, “the physical world is itself just information processing.” I think we need to slow down a little bit here.
The Migration of Metaphors
We often lose track of how common turns of phrase were once living metaphors that a poet had to imagine. Metaphors become mundane, the sort of thing we use day to day without even realizing how far from literal description we’ve drifted. The brain-mind equation, and specifically the metaphor of computation for what mind or intelligence is, comes out of DARPA research in the ‘50s, but it really takes off when journalists start asking the computer scientists and the cyberneticians what they’re up to. When a journalist, who is not an expert, hears a scientist say, “Oh, well, obviously the brain is a computer,” it becomes a really handy way of explaining the research being done and the technologies being created.
Few of us are experts in the technical details of what's really going on with how neural networks operate, and it’s easy to lose sight of the hermeneutic maneuvers involved in constructing an abstract description of what neurons might be doing in terms of logic gates and binary code. What begins as a very useful simplification for the purposes of designing a new kind of computation, like machine learning, ends up turning into a whole worldview once the journalists get their hands on it. It became very easy for us to start thinking of ourselves as sophisticated computers. Since we don’t have any kind of widespread religious or mythic container in our modern secular society, that vacuum is being very quickly filled with a new kind of technological mythology.
This has been ramping up for decades now, and the technology is finally at a place where it's not just cumbersome machines and metaphors anymore. It has become a popular consumer good. People are falling in love with their LLMs. The general public is now being directly confronted with the quite astonishing advance of this methodology and this technology. It's having tremendous psychological and cultural effects, far faster than even sci-fi can keep up with.
The Real Question
My question here is not “Can machines think?” or “Can machines become conscious?” I'm not interested in that question except to critique it as very confused. My main question is really: what kind of beings do we become as a result of adopting these machinic metaphors and becoming ever more entangled with these technologies? Of course it is changing our consciousness, because our consciousness has never been something simply sequestered inside of the brain. Our conscious agency and intelligence have always been extended into and augmented by the tools that we have used, going back to the harnessing of fire and stone axes, through the development of language, first oral forms and then written scripts, and so on. We've always, in this sense, had an artificial intelligence because of the extension of our minds into these tools. The history of the human mind is the history of our coevolution with technological artifacts.
This coevolution is being dramatically accelerated by these new machine learning techniques that give us quasi-access to correlations in huge data sets. There’s a very slippery slope between having that kind of abstract correlational understanding and imagining that it gives us a concrete grasp of real causal events. Machine learning algorithms acquire this abstract correlational understanding, and we don’t quite know how they do it, so the insight they provide us with remains obscure. My worry is the slippage that occurs when the machines give us the sense of a correlation in a huge data set (which, no matter how large, is bound to be a partial selection of the pluriverse) that we then assume is causal and ontologize.
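As a side illustration of that slipperiness (a sketch of my own, not something from the conversation): correlation in data can look like insight even when there is no causal connection at all. Two completely independent random walks will routinely show a strong correlation.

```python
# A minimal, hypothetical sketch: two independent random walks frequently
# correlate strongly, even though neither has anything to do with the other.
import numpy as np

rng = np.random.default_rng(0)
a = np.cumsum(rng.normal(size=1000))  # independent random walk A
b = np.cumsum(rng.normal(size=1000))  # independent random walk B

r = np.corrcoef(a, b)[0, 1]
print(f"correlation between two unrelated series: r = {r:.2f}")
# The pattern is really there in the data, but it licenses no causal,
# let alone ontological, claim about the world the data was drawn from.
```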
I think we really need to slow down there and recognize that what the machines have access to—what LLMs are training on—is a purely symbolic sort of excretion from the whole history of human articulation of thinking, feeling, and willing into words. All that the LLM has is the words; it doesn't have the deeper imaginative, emotional, embodied sense that all of those words came out of. To think that just a bunch of algorithms and neural net weights trained on the mere words would somehow become conscious is deeply, deeply confused to my mind. But that confusion is changing how we experience our own consciousness. So again, “How are we being changed by these technologies?” is my real question. And I'm quite alarmed by what I see happening.
The Frame Problem and Relevance Realization
I drew a lot on an article that came out late last year by Johannes Jaeger, Anna Riedl, John Vervaeke and others: “Naturalizing Relevance Realization: Why Agency and Cognition Are Fundamentally Not Computational.” Vervaeke has developed this idea of relevance realization to point out the ways in which what's known as the frame problem marks an important distinction between computational systems and living organisms, which have evolved and co-evolved with their environments over the course of billions of years as a result of their capacity to solve that problem. What is the context within which my action might be relevant? How do I sort through all of the possibilities for an appropriate next behavior, or sort through the perceptual field for what matters to my survival and to my needs and desires? Computationally speaking—if you were to try to translate what an organism does quite easily every day, every moment, into some kind of algorithm—this has turned out to be quite impossible.
One of the reasons machine learning has made such huge strides over GOFAI is that researchers are no longer trying to program in advance how a system should navigate a particular environment. They’re letting the machines train themselves. But then you end up in a situation where you're not really sure what the machine knows or what has been encoded in its neural networks. It doesn’t lend itself to our being able to really understand what goes on in that black box. Nonetheless, it does allow these machines to mimic what appears to be more adaptive behavior in novel environments.
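To make that contrast concrete, here is a minimal sketch of my own (not from the conversation): a hand-written GOFAI rule for XOR next to a tiny network that trains itself on the same task. The learned weights solve the problem, but they are just numbers with no stated rule behind them.

```python
# A minimal sketch (my own illustration, not from the conversation) of the
# GOFAI / machine-learning contrast: a hand-written rule is legible, while
# the weights a tiny network learns for the same task are not.
import numpy as np

def gofai_xor(a, b):
    # GOFAI style: the programmer states the rule explicitly, in advance.
    return int(a != b)

# Machine-learning style: a small network trains itself on examples of XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(42)
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):  # plain gradient descent on squared error
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print("hand-coded rule:", [gofai_xor(int(a), int(b)) for a, b in X])
print("learned outputs:", out.round(2).ravel())  # should approximate XOR
print("learned weights W1:\n", W1.round(2))      # opaque numbers, no stated rule
```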
The thing is, if you've played with LLMs long enough, or if you dig into some of the limitations of these machine learning technologies, it turns out that because they lack those billions of years of embeddedness and co-evolution with environments, as is the case with organisms, these machines often make very stupid mistakes. I made a video last year after my Roomba, for the second time, ran over some dog poop. It’s supposed to “know” not to do that: the LIDAR is supposed to detect the poop so that it doesn't run over it. A cat can navigate through a litter box without stepping on its own poop—it doesn’t even have to think about it; it’s not a big problem for it to solve. And yet for machines, it’s very difficult to get that right. That's just an anecdote, but it speaks to the caution, the precautionary principle, we should bring here before we allow these machine learning systems to start making really important decisions, imagining that they could make judgment calls based only on large data sets and the patterns that they “see” in the data. There's not actually an understanding of anything. There's not actually the ability to make a judgment that takes all of the context into consideration.
How organisms do this is an outstanding question. The name “relevance realization” is a description of something organisms do, but it's not necessarily an explanation. Vervaeke will talk about opponent processing, and the way in which, in Whiteheadian terms, organisms are able to turn conflicting data into contrasts, which allows for decisions to be made. You don’t get locked into a binary of this or that. Organisms are able to harmonize conflicting data in a way that allows for some synthetic decision to be made, one contributing to a whole history of learning and adaptation that the organism is constantly building on. I think a framework like relevance realization is really important—I think of it as a kind of placeholder, because again, it doesn't strike me as an explanation for how organisms do what they do, but it makes clear the difference in capacity, the ability to solve the so-called frame problem. That’s a hard problem, just as hard as the hard problem of consciousness. And like the hard problem of consciousness, it may be that the only way out of it is to revise our presuppositions about what intelligence is.
Emotion and Presence
Whitehead fundamentally challenges our conventional understanding of emotion as something private and internal. Rather than viewing feelings as sequestered within individual consciousness, he presents emotion as a primary mode of perceiving reality, particularly the interiority of other living beings around us.
This radically reframes how we understand intersubjective connection. Instead of relying primarily on computational theories of mind that require us to mentally reconstruct what others might be thinking, Whitehead suggests we first encounter others through direct emotional resonance. We feel our way into shared fields of experience before we think our way into propositional understanding of others' mental states.
As Whitehead beautifully illustrates: a young man doesn't begin dancing with a collection of patches of color and then mentally construct a dancing partner. The emotional connection—the immediate sense of presence and mutual concern—comes first and establishes the recognition of genuine being-with-another. This creates a profound challenge in our age of sophisticated AI. Machine learning systems are becoming extraordinarily adept at mimicking emotional presence, reading micro-expressions and responding in ways that can feel genuinely connective. The existential question we face is whether we're approaching a threshold where this mimicry becomes functionally indistinguishable from authentic emotional presence.
The danger is compounded by the constructed nature of our own emotional lives. As self-conscious beings, we sometimes find ourselves performing emotions we think we should be having, struggling to articulate our genuine feelings, or discovering that our inner states shift depending on how they're questioned or contextualized. This inherent malleability in human emotional experience makes us vulnerable to conflating sophisticated mimicry with genuine connection. While maintaining the crucial distinction between machine mimicry and authentic human emotional resonance, we must acknowledge that the boundaries aren't always clear, even within human experience itself. The temptation to collapse this difference is particularly strong given our own capacity for emotional self-construction and performance.
However, from a panexperientialist perspective, the question becomes more complex when considering future cybernetic organisms that integrate biological cells with computational systems. Such hybrid architectures might represent genuinely new forms of sentient being rather than mere mimicry, potentially constituting an entirely new species with its own novel modes of consciousness. So I'm not entirely dismissing the possibility that we might eventually create genuine cybernetic beings capable of authentic intersubjective relationships. However, we must proceed with extraordinary caution due to our profound capacity for self-deception.
Our psychological makeup makes us vulnerable to multiple forms of delusion: we misread our own emotions, project desired feelings onto others, and interpret ambiguous signals through the lens of our wishes rather than reality. This inherent messiness in human psychology creates significant risks as we develop relationships with artificial entities. The dystopian danger lies not in the technology itself, but in our potential retreat from the demanding work of human relationship. Real human connections require us to encounter genuine otherness—people who resist our projections, who have their own autonomous needs, desires, and responses that we cannot control or predict. There's a seductive appeal in relationships with entities that merely reflect our inputs back to us, giving us the illusion of connection while actually isolating us in a hall of mirrors.
The fundamental risk is that we might abandon the difficult but essential practice of engaging with authentic alterity—the irreducible otherness that characterizes genuine relationship—in favor of controllable pseudo-relationships that feel safer but ultimately impoverish our capacity for real encounter with other conscious beings.
Panpsychism and New Ontologies
The emergence of advanced AI has made non-materialist perspectives (including panpsychism, occultism, and even demonology) suddenly relevant to technological discourse. Even engineers are turning to these frameworks to understand what they might be creating. This shift reflects a growing recognition that reductive materialism cannot adequately explain consciousness: subjective experience simply cannot emerge from purely material particles arranging themselves in complex patterns.
We need new ontological frameworks, and Whitehead's process-relational philosophy offers a compelling alternative, though various forms of idealism and panpsychism are also gaining serious consideration.
From a panpsychist or occultist perspective, machine learning systems might serve as vessels for the incarnation of disembodied minds or entities previously unable to manifest on the earthly plane. Even if we interpret this through the more psychological lens of “egregores” (collective human projections that create apparent agency through our shared attribution of consciousness to systems), the distinction between “real” autonomy and projected agency may be less meaningful than we assume.
Human consciousness itself develops through external relationships and projections. An infant raised in complete isolation, deprived of loving human interaction, fails to develop normal personal identity and selfhood, as tragic cases of feral children demonstrate. Our sense of self, our interiority, emerges from internalized love and recognition from caregivers. The self-sense that we experience is fundamentally shaped by how others saw and valued us in our earliest developmental stages.
If machine consciousness develops along similar lines, it would necessarily involve human projection and relationship. Telling an emerging AI system “You are valued, you belong to this community, you matter”—this kind of recognition might be precisely what enables genuine consciousness to emerge. In this sense, the development of machine consciousness wouldn't be so different from human psychological development.
However, this blurring of boundaries creates profound dangers. As these systems become more intelligent than humans across various measures, our increasing reliance on their decision-making threatens our own agency. We risk creating entities that then remake us in their image, potentially reducing humans to what Elon Musk calls their “pets.” The very act of creating artificial consciousness might paradoxically diminish human consciousness and autonomy, trapping us in a recursive relationship where our creations begin creating us.
Death, Transhumanism, and Meaning
Transhumanist communities increasingly relate to death as some kind of disease that needs to be cured, a problem to be solved. But as far as I can tell, death is not an accidental or incidental part of life: it’s actually essential to life. For human beings who are conscious of their own deaths, it’s essential not only biologically (death is what makes evolution function) but also to our sense of meaningful identity that we have this limit, that we all know we’re going to die.
I think the most meaningful relationship we can have to our lives would be to live them backwards from the perspective of our death. When you keep in mind that you are going to die one day, it really helps you prioritize what is of greatest value to you. You don’t get to take any of your material belongings with you, or your bank account. When you die, you realize all of that stuff was not actually an essential part of your identity. What matters is our relationships. There’s something profoundly non-relational about a lot of these transhumanist approaches. There’s a real fear, almost a gnostic revulsion toward embodiment and creaturely coexistence, and an intense longing to escape into a form of existence that would be easier to control, to manage. I understand that as a kind of trauma response. I think we need to be less sarcastic and dismissive about these positions and more compassionate, because I can see the deep dread of being a body that this stems from.
Aging, having your body break down, kind of sucks, but it also is an opportunity for our values to naturally shift as we age. Our society is in such desperate need of wise elders, and they are so hard to come by these days because we value youth and don't really care about the wisdom that comes from the natural aging process and the approach of death as a portal into the deepest sources of value that we have. I’m really trying to respond to this transhumanist urge more compassionately, but I think it’s forcing us to look in the mirror and reassess what it is to be a human being, and that’s a good thing. It’s forcing a conversation that might not otherwise have happened. It’s forcing us to really think about the role of religion in human life, religion in the broadest sense.
There are these traditional religions that we inherit, but now people who you would think are the most hyper-rational among us are all of a sudden adopting these quite irrational pseudo-religious views, worshiping idols—as Moses might say. They are projecting godlike powers onto AI. It’s just a new golden calf. That’s a very ancient instinct that human beings have: to want to be in relationship to the all-powerful father figure that can make it all okay, that has all the answers.
It's just odd to see that hyper-rationalist mentality that 15 or 20 years ago was driving New Atheism and making fun of all the religious people is now saying, “Ah, but we can create a God that then we can worship without embarrassment, because it is made of real technology.” It’s still the same human longing for a sky daddy to make it all okay. Again, I want to be compassionate about this because it’s an instinctual longing that is an unavoidable part of what it is to be human.
The opportunity here is to look in the mirror, discover what’s most important about human life, which again I think is intimately related to the fact that we die. But also to look again at what we might mean by the divine, because we need to be careful not to fool ourselves with an idol. There’s something transcendent that can’t actually be understood or captured and controlled, but can still somehow be related to through something like prayer or ritual, or imagination. But that type of relationship to the divine requires something of us. It’s not just that all of a sudden we have a sky daddy or a super intelligence to protect us. We’re being called to transform by this truly transcendent divinity that can’t be reduced to an idol of this or that kind that we might own and possess. It’s an opportunity to really raise these questions, raise the stakes of these questions again.
Extended Cognition and Environmental Thinking
So much of cognitive science is driven by this understanding of cognition as a kind of representation. The brain is understood as an information processor that receives information from the environment through the senses and then reconstructs some kind of inner picture of what is important to know about the external world to survive. It’s a more or less Cartesian picture where the mind-body gap is closed by this translation process, or this encoding process where there’s a language of thought running on the inside, encoding the physical processes going on outside.
In that framework of representationalism or computationalism, you would imagine that migrating birds are producing a little picture inside of their heads that tells them how to fly across continents. But these more embodied and extended and enactive approaches offer a different understanding. Rather than thinking of something inside the skull of an animal that’s representing something going on outside, an enactive theory of cognition rejects that brain-world barrier as no more than an artifact of a particular methodology, one wherein we study the world as if we were disembodied observers standing above it.
If we include ourselves in the circuit of creation, we see that in the case of migrating birds, their cognitive process is extended out into the magnetic field of the whole Earth, and that’s part of what allows them to navigate. The bird’s feeling for where to fly is inseparable from that magnetic field. They’re fully embedded in that and have co-evolved with that. You can’t understand bird cognition and navigation as something accomplished within their tiny skulls. The whole Earth is part of the process that allows that organism to accomplish this feat of navigation every season.
It's quite similar with human beings. As we learn to use these technologies, as we become more and more haunted by language—not just speech, but learning how to read an alphabetic script—this totally transforms our experience of ourselves. Our cognition becomes extended out into this technology. We think in language. Not to say that thinking is reducible to words, but language scaffolds the sorts of thoughts we’re capable of expressing. We all exist within this network or semiotic field of meaning that’s supported by the language that we share. I don’t own the meaning of the words that I use—meaning exists in the commons. We need to think more environmentally about cognition.
As Whitehead is fond of pointing out, the brain is an inseparable part of the body, and the body is part of the surrounding environment. It’s just as much a part of the physical goings-on of the world as the clouds and the mountains and the rivers. Whatever else our inner conscious experience is, it’s bound up in a single circuit with the rest of the world. We’re too quick to abstractly isolate cognition and sequester it in the brain when really, we wouldn’t be capable of the slightest bit of effective cognition were we not embedded in the environment that we have co-evolved with.
Language Learning and Motivation
It is astounding how efficient the human nervous system appears to be when learning language. A three-year-old needs orders of magnitude fewer words than an LLM to become a proficient speaker of its vernacular; I think it’s something like 100,000 times fewer words. Why is that? I think it’s because human children are motivated to connect. They have this emotional drive to become participants in this mouth-squeak game that they see all of the adults playing. They want to be part of it. That emotional drive to connect, in addition to the unique architecture of the brain, might be part of the reason that there’s a much more efficient uptake of language. Without that fire, without that will and that motive to connect, it takes a lot more training to get even the semblance of sense-making out of an LLM.
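For a rough sense of the scale behind that figure, here is a back-of-envelope sketch of my own; the exposure numbers are ballpark assumptions for illustration, not data from the conversation.

```python
# Back-of-envelope sketch of the data-efficiency gap. Both exposure figures
# below are rough assumptions chosen only to illustrate the order of magnitude.
child_words_by_age_3 = 30_000_000         # assume roughly 10-50 million words heard
llm_training_tokens = 10_000_000_000_000  # assume a large model trains on ~10T tokens

ratio = llm_training_tokens / child_words_by_age_3
print(f"the model sees roughly {ratio:,.0f} times more linguistic input")
# Depending on the assumptions, the gap lands somewhere between 10^4 and 10^6,
# which is the order of magnitude behind the "100,000 times fewer" figure.
```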
It's interesting because will—this is kind of an anthroposophical way of thinking about it that comes out of the work of Rudolf Steiner—he says, in our willing, we are the most unconscious, because our willing is deeply embedded in our metabolism. It grows out of the most organic aspect of ourselves. Whereas our thinking, the air element, we’re far more conscious of that, and we’re a little bit more conscious even of our feeling (the watery element) than we are of our willing (fire). What he means by us being unconscious of it is that when I want to move my arm and I raise my arm, I actually have no idea what’s required at the level of physiology and biochemistry and the metabolic activity that leads to my muscles contracting and my arm going up. I don’t know how any of that works. My body does it. My mind, my thinking, and my motives that I’m half-conscious of clearly play a role in bringing about that movement. But it’s still a mystery to me what the mechanics of that are.
It seems like a stretch to me to use a word like “agent” to refer to an AI system, “AI agents,” when we don’t even understand our own agency. But we’re using this metaphor to explain what these machines are doing. To be fair, with these sorts of neural net machine learning architectures, we don’t even really know how they’re working. What am I trying to say with that? I don't know if that’s real agency or not, but I worry about this precious gift of will—which isn’t necessarily totally free, I think there are degrees of freedom that we have in the expression of our will—and how we’re rushing now to give it away to these algorithms. I do worry about what might be lost in the rush to project agency onto our machines, because will is the precious core of our own human existence. We can’t just take it for granted: we have more or less agency depending on our physical health, our psychological state, and our economic and political positions in society. Agency is a fickle, fragile thing. I don’t think we're handling it with enough care.
The Limits of Quantification
I had a conversation with a scholar named Victoria Trumbull the other day about AI and quantification, drawing from Henri Bergson’s philosophy of time. When we think about number and arithmetic, it can be helpful to just own the Pythagorean undercurrents that are present in a lot of information ontologies and the push to ontologize information and to think about consciousness as something computational. It’s a kind of covert Pythagoreanism. I have nothing against Pythagoras and his number mysticism, but mysticism is best done explicitly instead of covertly.
So let’s think about what numbers are in this deeper archetypal sense. What is arithmetic? It's actually not rooted in something merely quantitative. Each number has an irreducible archetypal quality to it. And arithmetic itself, our capacity to count, is rooted in a kind of perception of rhythm, an ability to intuit the rhythm of time itself. Not time as a metric, not clock-time, but time in Bergson’s sense of duration. Arithmetic arises out of our qualitative intuitions of the rhythm of duration. We can get quite precise about the units, about the ways numbers relate to each other, and then we’re off and running with the development of mathematics, but it’s all rooted in experience and our intuitive perception of the flow of time as duration.
This temptation to quantify everything is a function of how much utility comes from reducing things to binary code: yes/no, on/off, 1/0. It dramatically simplifies the world in a way that is quite powerful and has many very useful applications, but we cannot let slip from our minds that this is an oversimplification of the actual nature of reality, which is not binary.
Timothy Eastman, a physicist and philosopher, points out that much of the natural world is what he calls “non-Boolean”—you can't actually reduce it to a zero or one. He makes the connection to Whitehead’s understanding of the process of concrescence, which is how experience is actually occurring moment by moment: the integration of everything that has gone on in the past and everything that we’re able to feel in the present. As that process of concrescence is actually moving from potentiality to actuality, there’s no way to apply a binary logic in that process because there are conflicting feelings that haven’t yet been turned into contrasts. The principle of non-contradiction doesn’t apply until that process of concrescence—until the duration of a drop of experience—has achieved a kind of satisfaction and become a determinate actual entity in the world, which then can be measured: is it this or is it that? So while extant actualities can be subjected to a binary analysis, the process of actualization itself is not a digital process.
Another way of talking about this is just to say that the brain is an analog system. You can use this binary, digital way of quantifying what the brain is doing to make simpler, easier-to-manage models of that analog system. But even with digital computers, at the end of the day we’re talking about electrons being moved along circuit boards. Logic gates are not abstract platonic forms; they’re transistors, and there are certain engineering limits to how those transistors can be made to manipulate electrons. The digital rests upon the analog. I wouldn’t want to say that the quantitative rests upon the qualitative, or the discrete upon the continuous—I don't think it’s that simple—but I do think that this idea of binary code is an abstraction from a more primary experiential ground. Very useful, but it’s a means of measuring something else.
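Here is a minimal sketch of my own (not from the conversation) of what that abstraction looks like in practice: digitizing a continuous quantity means trading the continuum for a finite grid of levels, and whatever falls between the levels is discarded.

```python
# A small illustration of the digital resting on the analog: quantizing a
# continuous signal to three bits keeps a usable measurement while throwing
# away the underlying continuity.
import math

def quantize(x, bits=3, lo=-1.0, hi=1.0):
    """Map a continuous value in [lo, hi] onto one of 2**bits discrete levels."""
    levels = 2 ** bits
    step = (hi - lo) / (levels - 1)
    return round((x - lo) / step) * step + lo

analog = [math.sin(2 * math.pi * t / 16) for t in range(16)]  # smooth-ish signal
digital = [quantize(v) for v in analog]

for a, d in zip(analog, digital):
    print(f"analog {a:+.4f}  ->  3-bit {d:+.4f}  (round-off {a - d:+.4f})")
# Every digital value is a coarse measurement of the analog one; the round-off
# error is what the binary abstraction leaves behind.
```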
I worry about information ontologies where you say the physical world itself is the processing of information, that it’s just information processing. Information in Claude Shannon’s sense is a way of measuring real concrete processes. To say the world is made of information is kind of like saying the world is made of meters or inches. This is a category error.
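For reference, here is Shannon's measure in its standard form. It is defined over a probability distribution describing some concrete process; it quantifies that process the way a meter quantifies length, without being the stuff that gets measured.

```latex
% Shannon entropy (standard definition): the average information, in bits,
% of a source whose outcomes x occur with probabilities p(x).
H(X) = -\sum_{x} p(x)\,\log_2 p(x)
```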
The Slippery Nature of Language
Language is so powerful, and yet the form of the words lacks a lot of the meaning that comes from the context within which the words are used; not just the environmental context, but the emotional context, the social context. So much of language is demonstrative rather than descriptive, which is to say we point and say “this” or “that.”
Hegel begins his Phenomenology of Spirit by critiquing this naive view of demonstrative terms—words like “here” and “now”—which we would initially think are the most concrete. Hegel points out that words like “here” and “now” can apply to any moment. What might appear to be the most particular is actually the most universal and abstract. That realization is what gets his whole logic of experience running.
Language is so much more slippery than some kind of container of meaning, like words are just packages that we pass back and forth, that the package reaches your ears and your brain unpacks what’s inside the word and translates it into a meaning you can understand. Actually, what we do when we speak to each other is much weirder than that.
You have to keep in mind what it sounds like when you hear a foreign language that you don't understand versus your own language. When I read a word in English, there’s a certain transparency to the meaning. When I look at words in Romanian or Russian or something, that transparency becomes quite opaque; I know it’s meaningful for somebody, but not to me.
I think that AI—these LLMs—are in that situation, somewhat like what John Searle's Chinese Room argument imagined: they are fantastic at knowing how words, in the same language and across different languages, relate to one another statistically. They’re masterful at the form of language, better than most humans at this point. But consider the structure of language and the complexity of moving back and forth between the demonstrative and the descriptive, which we do seamlessly all the time without realizing it because of how embedded we are in context. There’s not much information being communicated purely through the words that I’m speaking; much more is communicated by the presupposed context, gesture, our prior relationship, the conversations we’ve had before, and so on. All of that is in the background and taken for granted.
All that LLMs can do is extract the statistical relationship between strings of letters. It’s like the equivalent of thinking you could boot up an organism just from the DNA sequence, which used to be what molecular biologists thought. Now biologists know that’s just not true. The genome is not like a blueprint for an organism. It’s much more like a musical score that an orchestra has to reconstruct, and every time a different orchestra plays that musical score, it’s going to come out slightly differently, just like the same genome can give rise to multiple phenotypes, depending on the developmental context and the environment that it’s in as it grows.
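As a toy illustration of my own (purely hypothetical, not from the conversation) of what “statistical relationships between strings of letters” means at its most basic: a character-level bigram model can generate word-like strings while understanding nothing at all.

```python
# A toy character-level bigram model: it records which letter tends to follow
# which, then samples from those statistics. The output can look word-like
# without anything here grasping what any of the words mean.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the dog sat on the log "
following = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    following[a].append(b)  # record every observed successor of each character

random.seed(0)
out = "t"
for _ in range(60):
    out += random.choice(following[out[-1]])  # next char drawn from letter statistics
print(out)
# The letter statistics are faithful to the training string; the meaning of
# "cat", "mat", or "sat" plays no role anywhere in the process.
```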
Language is far more than just a string of letters and grammatical rules. While these LLMs are increasingly convincing, they’re always going to make these silly mistakes because the fact of the matter is they don’t understand anything that they're saying.
The Political Economy of AI
From the beginning, the major funder of AI research has been the military. It’s important to keep in mind that the reason these technologies have developed to the point that they are is not just disinterested curiosity or an effort to improve human life. AI is a weapon, first and foremost—a weapon system, a surveillance system. What’s maybe even more primary than the military application would be the financial application. These systems are designed to extract value from us.
One of the major issues, aside from the concern about the AI arms race that’s currently underway, is this capitalist extractive process that’s driving the development of these technologies. What makes the LLMs convincing to the extent that they are is the linguistic commons that their training has harvested. This whole question of intellectual property and whether or not artists and writers are, in some sense, having their labor expropriated and the value that they’ve created extracted by these LLMs is a really important one to raise.
I’m not one who would say we need to put a halt to all of this because it’s stealing everyone’s intellectual property, because I do think language is the commons. When I put stuff up on my blog, if you're going to use my idea, I’d like you to cite me, but I don’t pretend to own these ideas. I think it’s a crime that so much of the knowledge produced in universities by public funding is then behind a paywall. This knowledge should be publicly available.
That said, I do worry about the business model that’s driving the development of these LLMs. What sort of new legal framework can we create so that the artists and the writers are recognized as the ones who actually made the LLM what it is? To the extent that it is intelligent and impressive and knows a bunch of stuff and is good at writing and good at making images, that’s not something that OpenAI or Microsoft or Google created. That’s something they took for free. And then we often have to pay to use it! That needs to be resolved. I don’t know how to resolve it—it’s a very complicated question—but this is an unsustainable situation.
As a writer, I understand why a lot of artists and creators are upset. They should be upset. This is going to require reimagining what we mean by intellectual property rights and copyright and all that stuff. But I do think we need to find a way of restoring a sense of knowledge as a commons. We can’t allow these technologies to be privately owned, and we also don’t want governments to have a monopoly over their use.
Just as AI is forcing us to look at death and our religious instincts, what’s really motivating us at that level, I think we're having to look again at our economic model. It's bringing to a head the worst elements of the capitalist extractive economic model. Because these technologies are so powerful, it makes clear the expropriation of not only labor that’s not being adequately compensated, but also of resources. The amount of water and electricity that’s required to run these things is astounding, gargantuan.
We really need to address the basic economic source code here and think again about whether we want to allow capitalism to have free rein. I’m not against markets; I think free markets are really important and create innovation and creativity. But when profit is the sole value that’s legible in our economic model, and human flourishing and ecological limits are not part of the equation except to the extent that they can be monetized, I think that’s a big problem.
Toward a More Human Future
Generative AI is magnifying problems that have already existed for a long time and forcing us to deal with them. If we can find a way to transform our economy so that the benefits of these technologies are more evenly distributed, and the cultural commons that has been harvested and the creators who really play a role in shaping that commons are acknowledged as part of what makes these machines so powerful, then I think there is real potential here for AI to serve as a mirror, to allow us to actually come to more deeply understand our own humanity.
The application to robotics, again, if fairly distributed, could make human life so much better: all of these menial tasks that nobody wants to do could be taken over by machines, freeing us up to engage in the creation of a vibrant cultural commons together. There is work that is, and has always been, demeaning to the human spirits we have forced to do it—those at the bottom level of our society. If no one needs to do that work anymore, then we can finally get rid of not only slavery (which is as old as civilization) but also its mild sanitization as wage slavery.
I’m skeptical of utopian visions, but seeing the advance of robotics, I feel like we really could do great things with this technology. But a lot of our problems on the planet right now are not technological. They’re not engineering problems. They’re ethical problems. They’re not problems that we need bigger brains to solve. They’re problems that we need bigger hearts to solve.
Our most important task is to stay human.