Discussion about this post

Bergson's Ghost

Great summary of the conversation. A couple of quibbles.

I don't agree with your aside about computational theories of representation being stuck in a binary of symbol and referent. Modern computational theories of representation are functional, not semiotic, and do not begin with language but with perception—with the problem of how an embodied system makes sense of an environment in order to act. This involves both compression (finding operational statistical patterns in brute reality) and decompression (acting on these patterns), and it resonates with your discussion of ontogeny.

Computational theories seek to explain how organisms, long before the emergence of language, construct usable internal models of their environments. Here representations capture the statistical regularities of an environment too complex or latent to be perceived directly. Crucially, they are not full encodings of the world, but compressed histories of interaction—"entropic prunings" or informational shortcuts shaped by the demands of relevance, memory, and survival. Representation becomes a way of reducing uncertainty just enough to act, and of managing complexity through selective loss or compression.
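To make "compressed histories of interaction" concrete, here is a toy sketch of my own (illustrative only; the update rule and thresholds are made up, not taken from any particular model). The agent never stores a raw observation: it folds each one into a two-number summary and acts on that.

```python
import random

class CompressedAgent:
    """Keeps a lossy summary of its interaction history instead of the history itself."""

    def __init__(self, lr=0.1):
        self.estimate = 0.0      # running guess about the hidden regularity
        self.uncertainty = 1.0   # running measure of how wrong the guess has been
        self.lr = lr

    def observe(self, x):
        # Fold the observation into the summary, then discard the observation.
        error = x - self.estimate
        self.estimate += self.lr * error
        self.uncertainty += self.lr * (abs(error) - self.uncertainty)

    def act(self):
        # Reduce uncertainty "just enough to act"; otherwise keep sampling.
        return "exploit" if self.uncertainty < 0.5 else "explore"

agent = CompressedAgent()
for _ in range(500):
    agent.observe(3.0 + random.gauss(0.0, 0.3))   # hidden regularity + noise

print(agent.estimate, agent.uncertainty, agent.act())
```

The point is the loss: the raw stream is unrecoverable, yet the pruned summary is adequate for behavior.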

Representation, in this view, is not primarily a matter of linguistic signification, but of adaptive modeling: the capacity to compress sensory input and use this compressed signal to guide action in uncertain conditions. From this perspective, language is not the origin of representation but an extension of prelinguistic capacities for pattern recognition, prediction, and abstraction. Where semiotic theory often assumes language as the ground of meaning, computational accounts see language as a late-emerging, evolutionarily scaffolded tool—a highly structured compression of more basic compressions that allow for inference and interaction. Symbolic language, in this account, emerges *on top* of these prelinguistic cognitive representational architectures. Language is not the precondition of thought, but a tool for enhancing it: a late-evolving, socially shared representational system that externalizes and refines the brain’s capacity for simulation, compression, and abstraction.

This reframing allows for a continuity between the representational capacities of simple organisms, nonverbal children, and fully linguistic adults, linking perception and action to cognition in a single framework that does not require prior access to symbolic systems. Language, then, does not *generate* representation but *reorganizes* it by introducing new constraints, affordances, and social dynamics, while still depending on the underlying capacity of agents to build and update internal models through engagement with the world.

So the computational framework treats signs not as signifiers defined by difference and deferral but as dynamic patterns that support prediction and action. Whether linguistic, sensory, or proprioceptive, these patterns allow agents (understood broadly, as encompassing biological or artificial systems) to construct internal models of their environments, simulate possibilities, and guide behavior in real time. Representation here is active rather than passive, predictive rather than descriptive, situated rather than abstract. Where classical semiotics posits signs as autonomous systems to which biological agents must conform, computational models see signs as inseparable from the embodied and situated processes that generate them. They operationalize Merleau-Ponty's philosophy of the flesh.
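In the same toy spirit (again my own sketch, with invented numbers), "simulate possibilities and guide behavior" can be as small as a one-parameter internal model that the agent rolls forward for each candidate action before committing:

```python
import random

ACTIONS = [-1.0, 0.0, 1.0]
SETPOINT = 0.0

drift_estimate = 0.0   # the internal model: a learned guess at the world's hidden drift
position = 5.0

def simulate(pos, action, drift):
    """Imagined next state under the internal model (no noise in imagination)."""
    return pos + action + drift

for _ in range(50):
    # Simulate each possibility, act on the best predicted outcome.
    action = min(ACTIONS, key=lambda a: abs(simulate(position, a, drift_estimate) - SETPOINT))

    # The world moves: chosen action, hidden drift, and noise.
    new_position = position + action + 0.4 + random.gauss(0.0, 0.1)

    # Update the model from prediction error, then forget the raw sample.
    drift_estimate += 0.2 * (new_position - simulate(position, action, drift_estimate))
    position = new_position

print(round(position, 2), round(drift_estimate, 2))
```

The model here exists to steer, not to mirror: it is judged by whether the agent stays near its setpoint, not by how faithfully it depicts the world.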

Which leads me to your discussion of whether biology might be building on something more ontologically basic. Computational theories of representation are built on top of insights from information theory, but they rarely deal with the metaphysical implications. I would argue that Claude Shannon’s information theory, properly understood, marks a paradigm shift as radical as the Copernican revolution—a shift we might call the *entropic turn*. By extending entropy from physical systems to any domain where uncertainty can be measured, including human language, Shannon collapses the old divide between world and mind. Here nature and culture, matter and meaning, are not separate realms but parallel manifestations of a universal entropic process. Entropy here is not a tally of substance but a measure of transformation, and it shapes even the symbol-making thermodynamic eddies we call human beings. Where the scientific method posits a sharp boundary between observer and observed, the entropic turn folds them into the same universal drift: observation is simply a local pruning of uncertainty within the entropic flow that carries both knower and known in a universe of ceaseless becoming.

What is ontologically basic in this model is that the second law of thermodynamics presupposes a Past Hypothesis, an asymmetric background that gives meaning to cause, memory, and goal. But that asymmetry cannot be derived from physical law, which is why physicalism, by itself, can never explain intention.
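To make the domain-generality of Shannon's move concrete: a toy calculation (mine; only the standard formula H = -sum(p * log2 p) is Shannon's) measures uncertainty with the same few lines in an English sentence and in a two-state physical system.

```python
from collections import Counter
from math import log2

def entropy(probabilities):
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    return -sum(p * log2(p) for p in probabilities if p > 0)

# A "cultural" source: character statistics of an English sentence.
text = "representation is compression under uncertainty"
counts = Counter(text)
total = sum(counts.values())
h_text = entropy(c / total for c in counts.values())

# A "physical" source: a fair two-state system (a coin flip, a spin).
h_coin = entropy([0.5, 0.5])

print(f"text: {h_text:.2f} bits/char   coin: {h_coin:.2f} bits/flip")
```

The formula is indifferent to whether the probabilities describe letters or microstates; that indifference is the collapse of the world/mind divide I am pointing to.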

The entropic turn has deep implications for how we understand representation. Semiotics has long emphasized that a sign’s relation to the world is not one of direct correspondence, but is arbitrary and defined by difference and deferral. Shannon’s theory suggests that what semiotics identifies as difference and deferral is, in fact, entropy—the same principle that governs physical systems. On this view, the dynamics shaping language and those shaping matter are not merely analogous but fundamentally the same. Where classical theories draw a sharp line between symbol and substrate, a so-called epistemic cut, an entropic model sees a gradient. Symbols are not ontologically distinct from physical processes; they emerge not from a metaphysical rupture but from the ongoing regulatory work of predictive systems trying to stay alive and replicate. The “cut,” if it exists at all, is not a boundary inscribed in nature but a product of compression, a line drawn by a system attempting to reduce uncertainty just enough to maintain coherence. Meaning does not arise in spite of the loss of direct experience to symbol, but because of it. Representation is not opposed to embodiment; it is its consequence.
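One way to picture the "cut" as a product of compression rather than a metaphysical rupture (again my own toy, not anyone's published model): discretizing a continuous signal into a few symbols. The thresholds are drawn by the encoder, not found in nature, and the symbols are useful precisely because of what they throw away.

```python
import random

def to_symbol(x, thresholds=(-0.5, 0.5)):
    """The 'cut': a line drawn by the compressing system, not a boundary in nature."""
    if x < thresholds[0]:
        return "low"
    if x < thresholds[1]:
        return "mid"
    return "high"

signal = [random.gauss(0.0, 1.0) for _ in range(10)]   # continuous, unrepeatable flux
symbols = [to_symbol(x) for x in signal]               # cheap, shareable, predictable

print(symbols)
# The raw values are gone; "low/mid/high" means something because of that loss.
```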

These claims, taken together, reframe what it is we think we are doing when we make meaning, whether in the sciences or the humanities. These domains, often treated as distinct in method or aim, can be seen instead as parallel entropic responses: each a mode of compression that translates complexity into form under conditions of uncertainty. Science seeks structural and predictive clarity; the humanities foreground interpretation, ambiguity, and affect—but both arise from embodied systems adapting to a universe in constant flux.

Michael Levin

Thanks Matt! An excellent summary and commentary. There's a lot to be said about this, but I'll limit myself to two quick things.

1. The Normative Chemistry approach is very important, and I will have something interesting on this out in the next 6 months or so. We've got some new analytical and experimental work on what actually happens at the very beginning (origin of life; until then: https://www.tandfonline.com/doi/full/10.1080/19420889.2025.2466017), and I think you'll like it. But even here I think it's all about the observer - what has to happen before an observer notices a system being an agent? You can be a pretty unsophisticated cognizer and notice complex life as an agent. To notice agency in minimal forms (e.g., simple chemical reactions, or sorting algorithms), you've got to work really hard (and we need significantly more science to do this well). But I think we've now caught a key part of what it takes for an agent to close that loop itself - the reification process where a system gets off the ground. I can tease it by saying it'll be the next level of this https://osf.io/preprints/osf/2bc4n_v1 (more at: https://thoughtforms.life/learning-to-be-how-learning-strengthens-the-emergent-nature-of-collective-intelligence-in-a-minimal-agent/). Stay tuned...

2. "Levin’s account remains too tied to material engrams." Yep... Keep in mind the accounts I publish have to stick fairly close to what can be addressed experimentally. I try not to say anything that someone couldn't use right now to discover new things. I could say much more about that topic, and I do not think the engrams are material in the end, but there's no point for me to pontificate on this until we have a way to make it actionable to the research community. We're getting there, and my accounts get closer to it each year, as we develop new data and new methods to back up such ideas. The recent Platonic space chapter (https://osf.io/preprints/psyarxiv/5g2xj_v3) gives a flavor of what's coming, eventually (I know you've seen it; the link is for others). It's only an early step.
