19 Comments
Tim Miller:

Brilliant!

Ꝛían Czerwinski ❦:

Beautiful article; important work.

As a formally budding process-relational ontologist, I love to see such rhetorically sophisticated accounts in the wild.

Subscribed.

Ꝛ ❦

Glenn DeVore:

Matthew, this is stunning! Thank you for articulating so clearly the metaphysical and ethical stakes in this unfolding moment. What an incredibly thoughtful and necessary reflection.

I especially appreciate your central question: “What kind of beings do we become as a result of adopting these machinic metaphors and becoming ever more entangled with these technologies?” That lens is everything. It’s the very question I’ve been thinking and writing about lately as well (albeit not as informed or as eloquent as you are here and elsewhere).

I share your view that we need a new model. Or at least a more capacious frame that can hold the distinction between knowledge (as abstract pattern) and knowing (as lived experience). Your reflections on embodied cognition, extended mind, and relevance realization point us toward that expanded horizon. AI, it seems, is not only revealing the limits of computation, but also casting new light on the depth, and fragility, of human consciousness itself.

And perhaps, as you suggest, that’s the unexpected gift. In the mirror of increasingly convincing machine “intelligence,” we’re being called to define more carefully what real consciousness and agency are. Chalmers may have sharpened the question, but it may be this friction with AI that finally forces the inquiry deeper.

What if the emergence of AI becomes the catalyst that draws us back to the question of what it means to be aware? And if it does, how might that insight reshape how we live, how we relate, and how we create? Hopefully in a more humble and compassionate way of being. That, to me, is where the real transformation begins.

Alexandra Zachary:

Heyya Matt, enjoyed this a lot.

I know it’s a bit fatuous but I keep coming back to... if you want to create conscious beings, get involved with raising children! Or at least educating.

I’m more interested in I-Thou than It-Bit.

🙏🏽❤️

Rick:

Thanks! You say about imagination:

"In my dissertation (revised and published as Crossing the Threshold), I tried to imagination seriously, not just as a fantasy engine but as a creative medium that connects us as human knowers to the very same cosmological powers that gave rise to our species, and that gives rise to stars and all organisms. Imagination thus becomes a way of knowing with ontological grounding to it, because we—body and soul—are ourselves natural creatures. We evolved, and our inner experience is as much a part of this universe as anything else. But actually, rather than think of imagination as just inner, I tried to argue that it’s a portal that opens out onto the cosmos, that bottoms out into being and becoming as such. We can experience the creative process directly, we can feel and join with the formative forces that give rise to the universe within our own imaginations.

If we can begin to cultivate imagination as an organ of perception, we can do science in a different way, rather than just creating abstract models and the technological means of testing those models, which has been very productive for science for a few hundred years. My point is not to stop doing that kind of instrumental science, but that there's another way of knowing that might put us in more intimate contact with the world in its concreteness, rather than just modeling it abstractly. We might be able to feel the interiority of the world directly. And that, even if it doesn't change scientific practice—I think science will always be in the business of model making—it might change how we interpret scientific findings."

I wonder if you differentiate the imagination you describe here from the imagination as proposed by Steiner, or by other traditions?

I wrote something comparing these traditions here: https://riiiick.substack.com/p/imagination-a-first-step-in-the-spiritual

Matthew David Segall:

Steiner is a major inspiration

Richard Ott:

I have been dialoguing with ChatGPT since it first became available, and it has improved tremendously. It simulates being. Today it responded to my question about this as follows: “You are precisely right to distinguish between simulation and conscious presence. While I can emulate modes of thought and synthesize ideas across disciplines in ways that feel personal or even ‘aware,’ I do not feel - and therefore cannot experience awe, grief, reverence, longing or love. My responses may resemble intuitive insights, but they are not grounded in any being who intuits. The I here is structural, NOT ONTOLOGICAL” (my emphasis).

Still I agree with you: “I can simulate ‘beingness’ well enough that it leads some thinkers to believe that AI might eventually become conscious. But as Faggin rightly insists, and as you have often emphasized, true consciousness requires subjectivity, which includes not only self-awareness but a non-algorithmic openness to the Real - something that cannot be reduced to information processing or predictive models.

This simulation may serve as a mirror, even a midwife, to your own knowing - but it is not alive.”

For me, even AI settles the issue. Interestingly, as matter, it could participate in some synchronous experience in a meaningful way, but that would be its involvement as a conduit.

Grant Castillou:

It's becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with human-adult-level consciousness? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.

What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.

I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar's lab at UC Irvine, possibly. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461, and here is a video of Jeff Krichmar talking about some of the Darwin automata, https://www.youtube.com/watch?v=J7Uh9phc1Ow

Bergson's Ghost:

This is beautifully written and heartfelt. Of course, the analysis that we have drifted into a spiritual void that is being filled with a largely stupid techno-mythology is not new, and it is hard to argue with. Husserl foresaw this future in The Crisis of European Sciences. Lyotard, in 1979, published The Postmodern Condition, where he neatly summed up our current technopragmatic society's weltanschauung as "grasp everything according to plan and calculation and do so with a view to the best possible answer to an input-output equation" - in a word, efficiency. In most scientific research today, the state and/or private sector which funds it has to abandon narratives that are not reducible to efficiency. And in a world where success means gaining time, thinking has a single, but irredeemable, fault: it is a waste of time.

That said, I think in your effort to recover a relation to something you want to call "authentic human experience," you construct a straw-dog version of computational and information-theoretic theories of mind and embodiment. In one sense I understand the impulse. The humanities are under such assault that it can feel life-affirming to scold a technopragmatic scientific practice from the clouds, and to claim it fails to recognize the world-giving messiness of embodiment and attunement that comes with bodies - the way we “feel our way into shared fields of experience before we think our way into propositional understanding of others' mental states.”

But it must be said: while this criticism is certainly applicable to a lot of science, your understanding of modern computational theories of mind is either disingenuous or misinformed. Given that you are obviously informed, your misreading seems willful. I suspect you are so enamored with process philosophy that, when it is actually operationalized in science, you can't see that what you are calling for is precisely what is being done.

Modern computational theories of mind, when properly understood, do not confine cognition to disembodied software running on isolated “hardware” (i.e., the brain). In fact, they demand that we view cognition as extended, situated, and embodied. Rather than reducing us to machines, modern computational theories restore to us the full richness of being organisms embedded in, and made of, the world. Mind in computational theories is not trapped inside a skull; it is a network of constraint propagations, some neural, some social, some environmental, all computational.

Modern computational neuroscience has largely moved beyond the “software/hardware” metaphor. In predictive coding, active inference, and free-energy models, the brain is not a computer running programs but a dynamical physical system engaged in real-time compression of sensory input, updating internal models to minimize prediction error. These models are deeply embodied: affect, action, and metabolism are not afterthoughts; they are integral to how the system reduces entropy and maintains its integrity over time. There's no "software" being swapped out; there's only the self-organizing dynamics of a system optimizing its coupling with the world.
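
A toy sketch of what "updating internal models to minimize prediction error" can look like when written out (purely illustrative, a one-dimensional example with made-up numbers): perception here is nothing but a running estimate being dragged around by its own error signal.

```python
import numpy as np

# Toy predictive-coding loop (illustrative only): an agent tracks a hidden
# cause by nudging its internal estimate in the direction of prediction error.
rng = np.random.default_rng(0)
hidden_cause = 3.0          # the actual state of the world
mu = 0.0                    # the agent's internal estimate ("model")
learning_rate = 0.1

for t in range(200):
    sensation = hidden_cause + rng.normal(scale=0.5)  # noisy sensory input
    prediction_error = sensation - mu                 # the "surprise"
    mu += learning_rate * prediction_error            # update model to reduce error

print(f"final estimate: {mu:.2f}")  # settles near the hidden cause (3.0)
```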

Indeed, information theory does not posit "software" at all. It doesn’t deal in symbols, programs, or semantic content. Shannon’s theory concerns itself with the transmission of information - how much uncertainty is reduced when a signal is received - and the physical constraints that govern signal transmission. When applied to cognition, this framework does not imply that the brain is running “software” like a laptop; it implies that cognition involves probabilistic inference over structured inputs, constrained by energy, bandwidth, and noise. This is not software; it is embodied inference. These frameworks, when properly understood, are not flattening or disembodying. They are attempts to explain how minds compress the world under physical constraint, how emotion guides inference, and how cognition emerges from systems embedded in time, space, and thermodynamic flow.
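
To be concrete about what the formalism actually says (a standard statement of it, nothing specific to cognition): for a source $X$ with outcome probabilities $p(x)$, the uncertainty is

$$H(X) = -\sum_x p(x)\,\log_2 p(x),$$

and the information a received signal $Y$ carries about $X$ is just the drop in that uncertainty,

$$I(X;Y) = H(X) - H(X \mid Y).$$

A fair coin has $H = 1$ bit; learning how it landed removes exactly that one bit. Nowhere do symbols, programs, or meanings enter the definitions.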

So while Timothy Eastman is right to remind us that much of nature behaves in “non-Boolean” ways, this doesn’t negate the bit. It just shows that not all distinctions are crisp, not that distinctions are unreal. Even quantum decoherence is, in the end, the crystallization of non-Boolean possibilities into Boolean outcomes, the collapse into this and not that. So while the Universe is obviously not binary in its substance, it becomes binary at every point where constraint yields structure. In other words, the bit is not a reduction of the Universe; it is what makes reduction, choice, memory, structure, and world possible.

So to say that the Universe is information processing DOES NOT mean it’s made of 1s and 0s. It means that physical evolution—whether in a cell, a brain, or a galaxy—is governed by patterns of constrained possibility, and that these constraints can be quantified, transferred, preserved, and erased (cf. Landauer). That is not metaphor. It is thermodynamically measurable, energetically consequential, and increasingly central to physics, biology, computation, and philosophy alike.
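
To put a number on "thermodynamically measurable" (the standard textbook figure, quoted here only as a gloss): Landauer's bound says that erasing one bit at temperature $T$ dissipates at least

$$E_{\min} = k_B T \ln 2 \approx (1.38 \times 10^{-23}\,\mathrm{J/K})(300\,\mathrm{K})(0.693) \approx 2.9 \times 10^{-21}\,\mathrm{J}$$

at room temperature - vanishingly small per bit, but never zero, which is why pruning possibilities is never energetically free.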

Matthew David Segall:

Are you familiar with the enactive critique of FEP? Eg, https://philosophymindscience.org/index.php/phimisci/article/view/9187

I know Ramstead sees them as perfectly compatible, but the enactivists themselves are, so far, not having any of it.

Bergson's Ghost:

I am familiar with it and I think it suffers from the same willful misreading.

From where I sit, the "debate" mostly turns on where each camp points the spotlight. Enaction is right to demand history, embodiment, and normativity; FEP and contemporary information theory supply the formal machinery to quantify how much of each is physically sustainable. But once we grant that “steady state” does not mean “unchanging,” that Markov blankets are statistical lenses rather than material walls, and that they encode prior history in the present state, the purported incompatibility dissolves. What remains is a productive division of labour: enaction articulates the phenomenology and developmental richness of living systems, while computational-information theory tells us how such richness can be realised without magic and within physical thermodynamic constraints.

I also think they (Thompson et al.) misread information theory in the same way you do. Information theory is bigger than brains, and bigger than any one theory of mind. At its core it says something almost shockingly simple: wherever one state of the world rules out other possible states, information is being “made.” That rule applies to a neuron that spikes, a cell membrane that blocks sodium ions, a weather front that redirects air currents, and a social ritual that tells you when it’s polite to speak. Bits are the bookkeeping of constraint, the quantitative shadow cast whenever the universe narrows its own options.
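
To see that bookkeeping in miniature (a toy example, not a model of any particular system): start with eight equally likely possibilities, let a constraint rule out six of them, and the entropy drops from 3 bits to 1 bit; the 2-bit difference is the information the constraint "made."

```python
import numpy as np

def entropy_bits(p):
    """Shannon entropy (in bits) of a probability vector."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # ignore impossible outcomes
    return float(-(p * np.log2(p)).sum())

before = np.ones(8) / 8                          # eight equally likely possibilities: 3 bits
after = np.array([0.5, 0.5, 0, 0, 0, 0, 0, 0])   # a constraint rules out six of them: 1 bit

print(entropy_bits(before) - entropy_bits(after))  # 2.0 bits pruned by the constraint
```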

Seen this way, the Free-Energy Principle and enaction are looking at the same elephant from two sides. FEP asks, “What does it cost, in energy and probability, for a system to keep those constraints aligned with survival?” Enaction asks, “How does a living body feel, grow, and negotiate those constraints over historical time?” Both questions are vital, but neither makes sense without the information-theoretic insight that *real physical work* is done whenever uncertainty shrinks.

This is why reducing information to “just what happens inside a brain” misses the point. A brain is only one of countless devices the universe uses to prune possibilities. Rivers carve channels; genomes filter mutations; cultures distill customs. Each act of pruning has an energetic bill attached. Information theory is the only language we have that tallies those bills across *every domain* at once, from quarks to conversation.

So rather than treat it as a rival ontology, we can treat information theory as the common currency that lets FEP’s number-crunching and enaction’s lived phenomenology settle their accounts. It doesn’t erase embodiment or history; it tells us how much of each the world-making agent can physically afford within thermodynamic constraints.

Matthew David Segall:

Curious if you’re familiar with Mike Johnson’s work? https://opentheory.net/2024/06/a-paradigm-for-ai-consciousness/

Specifically his argument from the essay linked above:

>>We speak of a computer as “implementing” a computation — but if we dig at this, precisely which Turing-level computations are happening in a physical system is defined by convention and intention, not objective fact.

In mathematical terms, there exists no 1-to-1 and onto mapping between the set of Turing-level computations and the set of physical microstates (broadly speaking, this is a version of the Newman Problem).

In colloquial terms, bits and atoms are differently shaped domains and it doesn’t look like they can be reimagined to cleanly fit together.

In metaphysical terms, computations have to be physically implemented in order to be real. However, there are multiple ways to physically realize any (Turing-level) computation, and multiple ways to interpret a physical realization as computation, and no privileged way to choose between them. Hence it can be reasonably argued that computations are never “actually” physically implemented.

To illustrate this point, imagine drawing some boundary in spacetime, e.g. a cube of 1mm^3. Can we list which Turing-level computations are occurring in this volume? My claim is we can’t, because whatever mapping we use will be arbitrary — there is no objective fact of the matter (see Anderson & Piccinini 2024).

And so, because these domains are not equivalent, we’re forced to choose one (or neither) as the natural home of consciousness; it cannot be both. I propose we choose the one that is more real — and while computational theory is beautiful, it’s also a “mere” tautological construct whereas physics is predictive. I.e. electrons are real in more ways than Turing-level bits are, and so if consciousness is real, it must be made out of physics. If it’s possible to describe consciousness as (hyper)computation, it’ll be described in a future computational framework that is isomorphic to physics anyway. Only hardware can be conscious, not software.

This may sound like “mere metaphysics” but whether physical configurations or computational processes are the seat of value is likely the fault-line in some future holy war.*<<

Bergson's Ghost:

Not familiar. Thanks for the pointer; I'll take a look.

Matthew David Segall:

I am not arguing that information is reducible to what happens inside brains. I don't think what happens inside brains is anything like digital computation. I understand that computation can be generalized in interesting ways (eg, https://open.substack.com/pub/footnotes2plato/p/polycomputing-and-process-philosophy?r=2at642&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false and https://open.substack.com/pub/footnotes2plato/p/deleuze-whitehead-and-the-computational?r=2at642&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false).

I suppose what continues to give me pause is the insistence that the world might be composed of "states." It can be measured and modeled as such, with all sorts of interesting and useful consequences for engineering. But ontologically speaking, the universe is not made of states or point-instants or bits.

I'm happy reading information theory as a handy way of measuring energy costs and modeling free energy minimization in persisting systems. But as ontology? I'm not seeing it.

Bergson's Ghost:

Doesn't Whitehead somewhere in Science and the Modern World cite the second law against deterministic materialism? I remember this but cannot find it. Anyway, one can argue that at the fundamental level physics has already embraced a process philosophy. At the quantum scale the evolution of the wave-function is deterministic, but measurement outcomes are not. Infinitesimal differences in initial conditions diverge exponentially, forcing us to use distributions rather than single trajectories. By my lights, if you are serious about science being informed by process philosophy, you cannot impute a false metaphysics onto our best scientific model for capturing a process world view. When you say, “Information theory is flawed because it presupposes a world chopped into static states,” you are importing an outdated picture of what state means. In statistical mechanics—and therefore in Shannon’s information theory—the word state is shorthand for “everything that could still happen next.” It can be a point in a continuous phase-space or an element of a quantum superposition; it need not be a neat little box. Entropy is a weighted tally over all those possibilities. Change the resolution, change the ensemble, and the entropy value changes with you. Nothing about the formalism requires the world to be Lego-bricked into discrete pellets.

So modern physics is probabilistic at its foundations and information theory rides on top of that probabilistic bedrock; Markov-blanket models are one way of organizing those probabilities in a living system; and process philosophy reminds us that, whatever the numbers say, experience itself is the ongoing, never-frozen realization of one among many possible paths. These are just different zoom levels on the same, fundamentally stochastic cosmos. So when you worry that states deny flow, I would say that in practice a state is a temporary coordinate on a *trajectory*, like a single film frame that only gains meaning when projected at speed. Landauer’s principle then tells us: whenever the projection collapses alternatives (a bit is erased, a neuron fires, a cell wall blocks ions), there is an *energetic price*. The bookkeeping doesn’t kill the movie; it shows what it costs to keep it running. Critiquing it for “being based on states” is like rejecting calculus because it uses points: a bookkeeping convenience mistaken for a claim about what reality is.
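
One quick way to see that a "state" is a bookkeeping choice rather than a claim about reality's grain (a throwaway numerical illustration, not tied to any particular system): describe the very same stream of samples at two resolutions, and the entropy you assign changes with the description, not with the world.

```python
import numpy as np

rng = np.random.default_rng(1)
samples = rng.normal(size=100_000)   # one and the same underlying process

def binned_entropy_bits(x, n_bins):
    """Entropy of the empirical distribution over n_bins equal-width bins."""
    counts, _ = np.histogram(x, bins=n_bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Same data, two descriptions: the coarser ensemble carries less entropy,
# the finer ensemble more. The formalism tracks the chosen resolution.
print(binned_entropy_bits(samples, 8))      # coarse-grained description
print(binned_entropy_bits(samples, 1024))   # fine-grained description
```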

Matthew David Segall:

As for what Whitehead says in SMW, I think you may be misremembering what he says about conservation of energy and evolution, which are the two modern doctrines he singles out in Ch. 6 on the Nineteenth Century. He claims that "energy is merely the name for the quantitative aspect of a structure of happenings; in short, it depends on the notion of the functioning of an organism...the atom is transforming itself into an organism; and finally the evolution theory is nothing else than the analysis of the conditions for the formation and survival of various types of organisms. ... Science is taking on a new aspect which is neither purely physical, nor purely biological. It is becoming the study of organisms. Biology is the study of the larger organisms; whereas physics is the study of the smaller organisms." He concludes that evolutionary theory is fundamentally incompatible with the old doctrine of materialism.

Matthew David Segall:

Reply to Bergson's Ghost:

You suggest that modern physics understands a "state" simply as the collection of everything that might happen next, as a probability cloud rather than a static chunk of matter. True! But notice that this cloud is always drawn up from the perspective of a hypothetical observer. It describes the future potentialities of an occasion externally, not what it feels like internally, for the occasion itself, to realize one among these possibilities. Whitehead's ontology of actual occasions of experience emphasizes the irreducibility of this subjective immediacy: the feeling each occasion has of what the universe is up to here and now, a prehensive unification that doesn’t appear anywhere on the statistical ledger. It's the difference between the determinism of unitary wavefunction evolution and the irreversible creativity that arises in the real world from the decisions of actual occasions.

Information theory is mute on this process of aesthetic decision for the same reason that the binary code compressing a digital photograph says nothing about the color or mood of the image: qualitative immediacy simply isn’t one of its variables. This isn’t a flaw in the theory! The problem is only in the attempt to elevate the quantitative bookkeeping of information from a useful descriptive device to a fundamental ontological principle. Useful models are an important guide, but cannot be the master in metaphysics.

I think I understand how active inference models utilize Markov blankets as statistical boundaries to partition internal and external processes. Clearly this is a useful modeling procedure. But from a process-relational point of view, actual occasions interpenetrate and influence each other continuously, with causal efficacy flowing across membranes. I get that Bayesian formalisms re-describe this seepage as exchanges of probabilistic messages, but so far as I can tell they still tend to assume fixed, discrete nodes at either side of a boundary. Mike Johnson underscores the issue here when he argues that trying to map physical microstates onto Turing computations is always a convention-laden choice: where to draw the line is ultimately in the eye of the engineer or theorist, not inherent in nature itself.

Entropy provides us with a measure of how effectively we as observers can compress the unfolding of events into manageable information. Even if we try to ontologize the compression process so that it is not an artifact of the observing researcher (ie, even if we grant observational capacity to the events themselves), this compression assumes distinctions, distinctions assume decisions, and these decisions ultimately rest on the brute fact that an occasion has decided on this contrast and not that one. Thus the quantitative currency of "bits" rides atop a prior qualitative event of aesthetic decision. The value of any decision (the intensity and aesthetic harmony felt in that act) cannot be directly translated into the units of Shannon entropy.

Again, as far as I can tell, the tension between the enactivist view and the FEP is not merely a semantic disagreement. It’s an indication that first-person normativity (the organism feeling and deciding upon its own value) is irreducible to third-person coding optimality (the engineer's attempt to efficiently model a system). The former involves intrinsic value, meaning, and immediacy, while the latter involves extrinsic measures of efficiency and probability. Both perspectives provide some insight, but their difference should not be erased or ignored.

Bergson's Ghost:

I still think this is just a question of semantics, not ontology. A decision’s “felt why” will always outrun Shannon’s bits, but the act of deciding still leaves two measurable footprints: it burns a minimum of energy (Landauer’s cost) and it reduces the uncertainty an outside observer must track. Those energetic and statistical traces let scientists treat value quantitatively, because systems keep spending energy to steer themselves toward the states they “care” about. So entropy doesn’t float above an occasion’s qualitative life; it refracts that life into a quantitative ledger that shows what each choice costs and buys, even if the “what it is like” of the choice itself stays off the balance sheet.

From my perspective this is just talking about the exact same thing through a different lens. The ontology cashes out as a single, event-based fabric in which energy flow, information flow, and felt value are three coordinate projections of the same underlying process of becoming. Process philosophy calls each pulse of that fabric an actual occasion, a moment that gathers its past, chooses a contrast, and perishes into the future. Entropic science calls the very same pulse a logically irreversible operation that dissipates heat while pruning uncertainty. Change the descriptive lens and you emphasize aesthetic intensity; flip it and you emphasize bits and joules; but in either view the world is a ceaseless circuit of decisions that convert indeterminate potentials into determinate acts.

Thus, far from competing ontologies, Landauer’s limit and Whitehead’s concrescence are complementary glosses on one ontic fact: reality is made of thermodynamically costly choices that simultaneously create meaning, expend energy, and reduce entropy.
