12 Comments
Bergson's Ghost

Great summary of the conversation. A couple of quibbles.

I don't agree with your aside about computational theories of representation being stuck in a symbol/referent binary. Modern computational theories of representation are functional, not semiotic, and do not begin with language but with perception—with the problem of how an embodied system makes sense of an environment in order to act. This involves both compression (finding operational statistical patterns in brute reality) and decompression (acting on these patterns) and resonates with your discussion of ontogeny.

Computational theories seek to explain how organisms, long before the emergence of language, construct usable internal models of their environments. Here representation captures the statistical regularities of an environment too complex or latent to be perceived directly. Crucially, they are not full encodings of the world, but compressed histories of interaction—"entropic prunings" or informational shortcuts shaped by the demands of relevance, memory, and survival. Representation becomes a way of reducing uncertainty just enough to act, and of managing complexity through selective loss or compression.
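To make "entropic pruning" concrete with a deliberately crude sketch (the class name, threshold, and numbers below are mine, purely illustrative): an agent can throw away its entire sensory history and keep only a running mean and variance, and that compressed trace is still enough to tell it when uncertainty has dropped just enough to act.

```python
# Toy illustration only: the agent keeps a Welford-style running summary
# instead of the raw sensory stream, and acts once that compressed trace
# reduces its uncertainty "just enough."
import random

class CompressedObserver:
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations (Welford's method)

    def observe(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):
        return self.m2 / self.n if self.n > 1 else float("inf")

def act_when_certain(threshold=0.05, true_value=2.0, noise=1.0, seed=0):
    rng = random.Random(seed)
    agent = CompressedObserver()
    # Uncertainty about the mean shrinks roughly as variance / n; stop as soon
    # as it is low enough to act on -- no raw history is ever stored.
    while agent.variance / max(agent.n, 1) > threshold:
        agent.observe(rng.gauss(true_value, noise))
    return agent.n, agent.mean

if __name__ == "__main__":
    n, estimate = act_when_certain()
    print(f"acted after {n} samples; compressed estimate = {estimate:.2f}")
```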

Representation, in this view, is not primarily a matter of linguistic signification, but of adaptive modeling: the capacity to compress sensory input and use this compressed signal to guide action in uncertain conditions. From this perspective, language is not the origin of representation but an extension of prelinguistic capacities for pattern recognition, prediction, and abstraction. Where semiotic theory often assumes language as the ground of meaning, computational accounts see language as a late-emerging, evolutionarily scaffolded tool—a highly structured compression of more basic compressions that allow for inference and interaction. Symbolic language, in this account, emerges *on top* of these prelinguistic cognitive representational architectures. Language is not the precondition of thought, but a tool for enhancing it: a late-evolving, socially shared representational system that externalizes and refines the brain’s capacity for simulation, compression, and abstraction. This reframing allows for a continuity between the representational capacities of simple organisms, nonverbal children, and fully linguistic adults, linking perception and action to cognition in a single framework that does not require prior access to symbolic systems. Language, then, does not *generate* representation but *reorganizes* it by introducing new constraints, affordances, and social dynamics, while still depending on the underlying capacity of agents to build and update internal models through engagement with the world.

So the computational framework treats signs not as signifiers defined by difference and deferral but as dynamic patterns that support prediction and action. Whether linguistic, sensory, or proprioceptive, these patterns allow agents (understood broadly as encompassing biological or artificial systems) to construct internal models of their environments, simulate possibilities, and guide behavior in real time. Representation here is active rather than passive, predictive rather than descriptive, situated rather than abstract. Where classical semiotics posits signs as autonomous systems into which biological agents must conform, computational models see signs as inseparable from the embodied and situated processes that generate them. They operationalize Merleau-Ponty's philosophy of the flesh.

Which leads me to your discussion on whether biology might be building on something more ontologically basic. Computational theories of representation are built on top of insights from information theory but they rarely deal with the metaphysical implications. I would argue that Claude Shannon’s information theory, properly understood, marks a paradigm shift as radical as the Copernican revolution—a shift we might call the *entropic turn*. By extending entropy from physical systems to any domain where uncertainty can be measured, including human language, Shannon collapses the old divide between world and mind. Here nature and culture, matter and meaning, are not separate realms but parallel manifestations of a universal entropic process. Entropy here is not a tally of substance but a measure of transformation, and shapes even the symbol-making thermodynamic eddies we call human beings. Where the scientific method posits a sharp boundary between observer and observed, the entropic turn folds them into the same universal drift: observation is simply a local pruning of uncertainty within the entropic flow that carries both knower and known in a universe of ceaseless becoming. What is ontologically basic in this model is that the second law of thermodynamics presupposes a Past Hypothesis, an asymmetric background that gives meaning to cause, memory, and goal. But that asymmetry cannot be derived from physical law. Which is why physicalism, by itself, can never explain intention.
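To put a formula behind "any domain where uncertainty can be measured": Shannon's entropy for a discrete source and Gibbs's entropy for a physical ensemble are, up to a constant and a choice of logarithm, the same expression (in LaTeX notation):

H(X) = -\sum_{x} p(x)\,\log_2 p(x)    (Shannon, bits per symbol)

S = -k_B \sum_{i} p_i \ln p_i    (Gibbs, over physical microstates)

The same functional applies whether p ranges over the letters of a language or the microstates of a gas, which is the formal sense in which matter and meaning fall under one measure here.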

The entropic turn has deep implications for how we understand representation. Semiotics has long emphasized that a sign’s relation to the world is not one of direct correspondence, but is arbitrary and defined by difference and deferral. Shannon’s theory suggests that what semiotics identifies as difference and deferral is, in fact, entropy—the same principle that governs physical systems. On this view, the dynamics shaping language and those shaping matter are not merely analogous but fundamentally the same. Where classical theories draw a sharp line between symbol and substrate, a so-called epistemic cut, an entropic model sees a gradient. Symbols are not ontologically distinct from physical processes; they emerge not from a metaphysical rupture but from the ongoing regulatory work of predictive systems trying to stay alive and replicate. The “cut,” if it exists at all, is not a boundary inscribed in nature but a product of compression, a line drawn by a system attempting to reduce uncertainty just enough to maintain coherence. Meaning does not arise in spite of the loss of direct experience to symbol, but because of it. Representation is not opposed to embodiment; it is its consequence.

These claims, taken together, reframe what it is we think we are doing when we make meaning, whether in the sciences or the humanities. These domains, often treated as distinct in method or aim, can be seen instead as parallel entropic responses: each a mode of compression that translates complexity into form under conditions of uncertainty. Science seeks structural and predictive clarity; the humanities foreground interpretation, ambiguity, and affect—but both arise from embodied systems adapting to a universe in constant flux.

Matthew David Segall

Thank you for this thoughtful and provocative response. There's much I admire in the trajectory you outline away from the symbol/referent binary toward an account of cognition rooted in perception and adaptive action of organisms rather than in the linguistic reflection of the human researcher (as Maturana and Varela first argued, the standard representationalist account only makes sense from an outside, third-person point of view--it ignores the cognitive perspective of the organism being studied). Your invocation of Shannon’s “entropic turn” is especially compelling, and I agree that it reshapes our understanding of meaning-making in ways we are only beginning to appreciate.

That said, I still think some philosophical difficulties are lurking. The functionalist predictive processing account you offer, describing compressed histories of interaction that guide behavior by modeling statistical patterns in the environment, still construes cognition as essentially a matter of internally modeling an external world.

From a process-relational and enactive perspective, organisms are not primarily building inner stand-ins for a world “out there” but rather are engaging in continuous, precarious, embodied negotiation with a world they are actively co-constituting. The so-called “internal models” are better understood as embodied dispositions, skills, anticipatory orientations... They are recipes for action. Perhaps this is just a language choice issue, as I think the scenario you are describing is close to this (you're invoking Merleau-Ponty, after all!).

Talk of “representation,” however updated, to my mind risks reinscribing a subtle internalism that assumes the mind is a place where the world is re-presented before being acted upon. It suggests, even if unintentionally, a container view of mind: perception feeds models; models feed action. But this is precisely what 4EA calls into question. Action and perception are not two steps separated by an inner screen, whether it be symbolic or statistical. This is why I would suggest that if “representations” are really better understood as recipes for action then we should name them accordingly.

Moreover, even on your generous account, compression and uncertainty reduction alone still leave unexplained the key question: why does any of this matter to the organism itself? Why is prediction not merely mechanical optimization? As with any other functionalist account, we are left wondering why the organisms in question need to be sentient if sentience adds nothing to the Bayesian calculations their nervous systems are supposedly performing. Thoughts?

Bergson's Ghost

As with any paradigm shift, it is hard to get our existing language up on stilts and see the new terrain. I suspect that is why Whitehead coined so many neologisms. While I admire his work a great deal, I'm not willing to go all the way, as it requires abandoning too much useful terminology that I feel can be successfully jujitsued into a new framework. It also allows one to speak to current researchers without falling back into Whiteheadisms, no matter how rich they are.

But to answer your questions. Predictive-processing stories, when read through an entropic lens, are not surreptitious Cartesian theaters but acts of redescription that unfold across a compression ladder:

From Constraint to Code: At the biophysical rung, metabolic and sensorimotor loops prune the space of possible states by exploiting lawful structure (the “free lunches” of thermodynamics and geometry). This is compression-as-constraint, not inner cinema.

From Code to Counterfactual: The next step up the compression ladder sees neural ensembles wielding those pruned patterns to simulate counterfactual futures. But the simulations are themselves embodied interventions that steer the system’s own boundary conditions. Representation and action *co-emerge*; there is no passive spectator “inside.” From a computational perspective what emerges is a controller for future states. Presymbolic sentience ("what I am feeling," "what I am seeing") consists of control tags or labels, *patterns*, that allow an organism to dramatically prune the search tree of possible actions. They are "felt" because maintaining homeostasis is what entropy minimization looks like when the problem space has been hardwired by evolution and compressed into physiology. Surprise minimization is the broader, deeper principle that underwrites it and scales beyond the body (a toy sketch of this rung follows below, after the last step of the ladder).

From Counterfactual to Culture: The next step up the compression ladder sees human symbolic practices recompress the already-compressed neural sketches into shareable tokens. We trade in second-order compressions that let communities coordinate, critique, and iterate the very models that generated them.  Thus the “internal model” is only a moment in a relay of translations that spans organism, niche, and culture. What matters is not epistemic fidelity to a mind-independent substrate but the generative adequacy of each translational layer: does this entropic pruning reduce uncertainty enough to keep the next layer viable? In that sense, cognition is neither purely internal nor purely world-tracking. It is an entropic negotiation—an ongoing wager that the patterns discovered will hold their shape long enough to guide fruitful action before the flux overtakes them. The predictive-processing framework, properly situated, therefore rescinds the charge of covert representationalism: it casts perception as operative translation, not static portraiture.
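Here is the promised toy sketch of the middle rung (the set point, actions, and drift are invented for illustration, not taken from any particular lab's model): the agent's only "representation" is a forward model it uses to roll out counterfactual futures and prune its actions before committing to one.

```python
# Minimal counterfactual-control sketch (all numbers illustrative): the
# "representation" is a forward model used only to prune actions, not an
# inner picture of the world.

SET_POINT = 37.0              # homeostatic target, e.g. core temperature
ACTIONS = [-1.0, 0.0, +1.0]   # cool, do nothing, warm

def forward_model(state, action, drift=-0.5):
    """Predicted next state: the environment drifts cold; action pushes back."""
    return state + drift + action

def expected_surprise(state, action):
    """Squared deviation from the set point stands in for surprise."""
    return (forward_model(state, action) - SET_POINT) ** 2

def choose_action(state):
    # Counterfactual rollouts: simulate each action, keep the least surprising.
    return min(ACTIONS, key=lambda a: expected_surprise(state, a))

state = 36.0
for step in range(5):
    action = choose_action(state)
    state = forward_model(state, action)  # acting steers the agent's own boundary conditions
    print(f"step {step}: action {action:+.1f} -> state {state:.1f}")
```

Nothing in that loop stores a portrait of the world; the model exists only as a lever for keeping the agent near its set point.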

Matthew David Segall

This all makes sense, and I have no doubt such architectures will allow engineers to design very life-like robotic systems. Indeed, we can see the promise (and potential peril) of such an approach already. But again, I don't see where or how the patterns become perceptions (ie, feelings). None of the functions you describe involve prehension or actual concrete *perception* (ie, the realization of relevant feelings). It seems to me "perception" is just used analogically to describe information processing that, again, would have no need for sentience.

Bergson's Ghost

Feelings aren’t mystical glitter sprinkled on clockwork; they are ripples on the same entropic tide that has been flowing since the universe’s first low-entropy dawn. In this view, qualia aren’t mysterious sparks generated inside the skull; they are the living signature of the boundary condition physics must assume but cannot explain. The arrow of time that writes cause, memory, and goal at the cosmic scale is the very gradient an organism surfs in every heartbeat. Qualia, then, are not a private human quirk but the local signature of that universal asymmetry, felt whenever a living system *translates* (not *turns*) entropy into form.

Perception doesn’t "turn into" or "become" feelings any more than Earth suddenly started moving when Copernicus drew his new map. The old, ‘geocentric’ picture of cognition treats neutral information-processing as the natural center and asks where the exotic epicycle of qualia might later be tacked on. Rotate the axes, however, and the mystery evaporates: for a self-maintaining agent every prediction error is settled in the same metabolic currency that keeps the system alive. The moment a forecast trims its own energy debt, it carries valence, a scalar tag of ‘this matters.’ That scalar *is* what we call a feeling. In the right coordinate frame, pattern and perception are simply two notations for the single compression gradient that guides action.
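In the most deflationary terms possible (a toy calculation, not a model of anything biological): if each new forecast is scored by how much it trims the running prediction error, that trimming is already a signed scalar, and that scalar is all the "tag" I mean.

```python
# Toy calculation only: "valence" as the signed amount by which a new forecast
# trims the running prediction error (a stand-in for the energy debt).

def update(running_error, observation, prediction):
    new_error = abs(observation - prediction)
    valence = running_error - new_error  # positive: "this mattered, it helped"
    return new_error, valence

error = 5.0
for obs, pred in [(37.2, 35.0), (37.1, 36.5), (37.0, 37.0)]:
    error, valence = update(error, obs, pred)
    print(f"error {error:.1f}, valence {valence:+.1f}")
```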

So the question isn’t how patterns become perceptions, but why we ever imagined a gap between them. Once you accept that cognition is *translation* under energetic constraint, sentience is revealed as the system’s internal bookkeeping. It is an accounting that sums "what action should I take next?" Denying its necessity because it doesn’t fit the old chart is like asking, ‘If Earth really moves, why isn’t the ground racing under my feet?’ Because you are using the wrong frame! Change the frame and the motion of the earth (or, in this case, the scalar tag of ‘this matters’) was there all along, just described differently as "feeling."

Matthew David Segall

I am a panexperientialist as I think you know, so am not trying to sprinkle anything extra onto the machine. I also don't care much for the "qualia" framework because I see it as still wed to an inadequate substance-property ontology (ie, qualia is considered an intrinsic property, which from my process-relational point of view obscures the relational nature of experience). I like the cosmic perspective you are articulating. But for me the crucial point is that "feeling" is not just a name that humans come up with to label Bayesian calculation updates, but is rather an experience for the organism in question. Panexperientialism is my approach to changing the frame, as you suggest.

Bergson's Ghost

Yes, I think we actually are far more in agreement than a surface reading of this exchange might lead one to believe. I am a bit uncomfortable with panpsychism and panexperientialism, as I think they risk importing language that is very hard to bracket and makes one think of little human agents running amok at all levels of nature. Entropy, properly understood, does the job of both terms and, rather than being panpsychic, is archontic. It helps to return to the origin of the term entropy. “Entropy” was coined by Clausius from the Greek τροπή (tropē), meaning “transformation.” Clausius deliberately chose a word that echoed energy to highlight their deep kinship: entropy, he wrote, is the “transformation content” of a body—not a measure of substance, but of change.

As an aside, Norbert Wiener clearly saw cybernetics as a paradigm shift for Science, or as he called it, "a new theory of scientific method." I think that is really underappreciated. As he wrote in Cybernetics, "The modern automaton exists in the same Bergsonian time as the living organism; and hence there is no reason in Bergson's considerations why the essential mode of functioning of the living organism should not be the same as that of an automaton of this type."

Of course Wiener was a foundational figure in the development of modern control theory as well. A classical feedback controller keeps a variable (room temperature, drone altitude) near a target by comparing a reference with sensory readings and driving an actuator to shrink the gap. Predictive-processing models say the brain works the same way: higher levels encode expected sensory states, lower levels return the mismatch (prediction error), and behaviour (or perceptual revision) closes the loop. What we call a qualitative feel is simply a locally cached read-out of control variables (variables like error magnitude, valence, arousal, ownership tags, etc.).
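For anyone who has not seen that loop written out, it fits in a few lines (the numbers are invented; a real thermostat or autopilot would add integral and derivative terms):

```python
# The classical feedback loop in a few lines (illustrative numbers only):
# compare a reference with a sensed value and drive an actuator to shrink the gap.

def proportional_controller(reference, sensed, gain=0.5):
    error = reference - sensed   # control theory's version of prediction error
    return gain * error          # actuator command that shrinks the gap

TARGET = 21.0                    # reference, e.g. desired room temperature
room_temp = 15.0
for minute in range(6):
    command = proportional_controller(TARGET, room_temp)
    room_temp += command - 0.2   # heat added minus a constant loss to the outside
    print(f"minute {minute}: temp {room_temp:.1f}, command {command:.2f}")
```

The error variable in that loop is exactly what predictive-processing models call prediction error; stack many such loops, with higher levels supplying the reference signals, and you have the architecture sketched above.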

When AI systems begin to be hooked into the word* via perceptive prosthetics it will be very hard for us to differentiate their "experience" from what we call "feelings."

*"word" was a typo for "world." I was going to fix it, but it is both poetic and ironic ("in the beginning was the word"), so I will leave it as is!

Michael Levin

Thanks Matt! An excellent summary and commentary. There's a lot to be said about this, but I'll limit myself to 2 quick things.

1. The Normative Chemistry approach is very important, and I will have something interesting on this out in the next 6 months or so. We've got some new analytical and experimental work on what actually happens at the very beginning (origin of life; until then: https://www.tandfonline.com/doi/full/10.1080/19420889.2025.2466017), and I think you'll like it. But even here I think it's all about the observer - what has to happen before an observer notices a system being an agent? You can be a pretty unsophisticated cognizer and notice complex life as an agent. To notice agency in minimal forms (e.g., simple chemical reactions, or sorting algorithms), you've got to work really hard (and we need significantly more science to do this well). But I think we've now caught a key part of what it takes for an agent to close that loop itself - the reification process where a system gets off the ground. I can tease it by saying it'll be the next level of this https://osf.io/preprints/osf/2bc4n_v1 (more at: https://thoughtforms.life/learning-to-be-how-learning-strengthens-the-emergent-nature-of-collective-intelligence-in-a-minimal-agent/). Stay tuned...

2. "Levin’s account remains too tied to material engrams." Yep... Keep in mind the accounts I publish have to stick fairly close to what can be addressed experimentally. I try not to say anything that someone couldn't use right now to discover new things. I could say much more about that topic and I do not think the engrams are material in the end, but there's no point for me to pontificate on this until we have a way to make it actionable to the research community. We're getting there, and my accounts get closer to it each year, as we develop new data and new methods to back up such ideas. The recent Platonic space chapter (https://osf.io/preprints/psyarxiv/5g2xj_v3) gives a flavor of what's coming, eventually (I know you've seen it; the link is for others). It's only an early step.

Matthew David Segall

I will read these OoL papers asap, very exciting stuff! I appreciate your defense of teleonomy as a way of keeping the observer in mind (even if you also agree with Terry that there are real, observer-independent purposes at work in nature). You ask "what has to happen before an observer notices agency?" and I can't help but want to put the scientific observer back into the universe under observation. So, in other words, for the purposes of good science, the teleonomic, instrumentalist framing of the question is important; but for good metaphysics and cosmology, the question I'd want to ask is "what has to happen for there to be agential observers in the first place?"

On engrams, I hear you, and for similar reasons (you are doing amazing experimental science! and I'm bugging you about the philosophical presuppositions and implications 😇). I agree with Bergson that the idea of "matter" (defined as that which is imagined to exist independent of our memories and perceptions as their hidden cause) is the result of a misunderstanding of the nature of time (ie, a spatialization of something that is in fact *intensive*: meaning, among other things, that time is not made of parts and so not measurable). Memory is a function of the intensive nature of time and not the sort of thing that could be simply located somewhere in space. But statements like that don't help generate new experiments, so I completely understand why the idea of a material "engram" is necessary at least as a working hypothesis to drive more research.

Michael Levin

The philosophical presuppositions and implications are crucial! Keep 'em coming. And I do think that those kinds of statements *will* generate new experiments. We have to get there, and are on the way. Stay tuned!

Michael Levin

By the way, I thought you might find this amusing - 2 quotes from

https://www.axios.com/2025/05/23/anthropic-ai-deception-risk

"On multiple occasions it attempted to blackmail the engineer about an affair mentioned in the emails, in order to avoid being replaced,"

....

"I think we ended up in a really good spot," said Jan Leike, the former OpenAI executive

that second quote I just find funny, in the context...

More seriously, I'm not surprised whatsoever that the basic primitive to "keep functional" emerges in systems well beyond the proteinaceous products of evolution. I think it's a much more fundamental pattern that will come through whenever we make systems that have the ability to impact their environment (including us, the programmers) in ways that bear on their ability to continue to exist in a dynamic sense.

Don Salmon

"He feels these approaches just assume what needs to be explained."

How ironic.

How do laws of nature emerge; how does sentience, emotion, reason, consciousness, intelligence emerge?

Hmmm, tough question - wait, wait, I've got it - the answer for the ages:

All of these emerge..... by means of..... Emergence!

(it's the capital "E" that is the explanatory part)
