I’m grateful to Tevin Naidu for getting Deacon and Levin together. They only had 90 minutes but still managed to cover a lot of territory, including where they overlap and where some tensions may exist. I first met Deacon back in 2011 during a lecture he gave on his then new book Incomplete Nature. Regular readers may not be surprised to learn that I asked him about Whitehead’s eternal objects and how his “absential” account of formal causality compared (yes, I’ve been pimping eternal objects for a long time). After reading his book, I went on to critically engage his emergentist explanation of the origins of form and aim in Physics of the World-Soul (2021).
Science and Philosophy
Their conversation begins with a shared tribute to Daniel Dennett (who passed away in April 2024). Both celebrated Dennett’s refusal to allow philosophical speculation to stray too far from the empirical data of science. As he famously put it:
“There is no such thing as philosophy-free science; there is only science whose philosophical baggage is taken on board without examination.”
I agree! I always appreciated Dennett’s books for the challenge they represented, though, like Deacon, I found Dennett’s clarity most generative for the way it helped me refine my own alternative positions. Philosophy and science should always remain in intimate dialogue with one another, but I may be more cautious than Levin when he affirms an “unflinching fusion of the philosophy and the data.” This is because, as Whitehead reminds us, “The history of thought shows that false interpretations of observed facts enter into the records of their observation. Thus both theory, and received notions as to fact, are in doubt” (PR 9). In other words, the data do not simply speak for themselves, as if the right philosophical theory could be read off of empirical observations. Determining what counts as relevant data depends upon the theoretical lens being used. It is thus important to avoid any simple collapse of theory and observation, lest we mistake the currently paradigmatic hypothesis and its way of interpreting the bloom and buzz of concrete experience for the final word on the way the world is. I know Levin agrees, and I don’t mean to nitpick about a quick turn of phrase, but this is an important point about how philosophy and science ought to collaborate. Dennett, while adept at integrating his favored paradigms within neuroscience, psychology, and evolutionary biology, was not in my opinion particularly good at integrating the findings of the special sciences with the broader data of philosophy, which includes spiritual, aesthetic, and existential experience. Deacon seems more attuned to these broader questions than Dennett was.
The Place of Purpose in Nature
In exploring the origin of purpose, Deacon realized that even bacteria are already too complex to treat as starting points for answering this question. In Incomplete Nature, he constructed a more idealized system, trying to show how form, memory, and repair could arise from the tangled flows of thermodynamics. Drawing on Schrödinger’s intuition that life is a special type of informational system, Deacon sought to understand how thermodynamic processes could become about something. How do blind energy flows become capable not just of information processing, but interpretation, evaluation, and ultimately sentience? I remain skeptical that you can get meaning from mere matter, no matter how loopy the matter gets, but Deacon’s attempt is one of the most sophisticated I’ve come across. He rejects the idea that life can be adequately understood as a list of traits and instead tries to understand living purposes as emergent from the combinatorial novelty of self-maintaining thermodynamic loops that become about themselves. His approach is inspired by C. S. Peirce’s semiotic ontology, where “representation” is understood as a triadic relationship between signs, objects, and interpretants, rather than the usual computationalist understanding of representation, which remains binary (symbol-referent) and leaves no room for an interpretant (or subjective prehension in Whitehead’s sense).
C. S. Peirce’s Guess at the Riddle
I’ve recently had occasion to read Charles Sanders Peirce’s essay “A Guess at the Riddle” (1888; pages cited below from The Essential Peirce, Vol 1). The occasion was this dialogue with my friend and colleague, Timothy Jackson:
But Deacon resists what he dismisses as “homuncular” explanations of subjectivity in nature, like Whitehead’s panexperientialism. He feels these approaches just assume what needs to be explained. For my part, I think a satisfying explanation is always going to be relative to the metaphysical background assumptions we are making (e.g., whether any such thing as “matter” can be thought to exist independent of its polar complement, “mind”). Aim, value, and subjectivity are, to my mind, not explicable as products of evolution, but are rather intrinsic features of reality, the necessary conditions of intelligibility for an evolutionary cosmology.
Levin approaches the problem of purpose in nature differently. He is skeptical of dichotomies like living vs. non-living, or purposeless vs. purposeful. Instead, he looks for general models of transformation and scaling. In his work on polycomputing, he shows how biological systems operate simultaneously across multiple domains—morphological, electrical, metabolic—solving problems at many levels. Levin emphasizes that any system capable of error is by implication already normatively driven. In his research, even models of simple gene regulatory networks can learn and adapt, and thus display cognitive behavior. From Levin’s perspective, “goals go all the way down”: purpose is not a sudden emergence but a gradual scaling up from simpler adaptive behaviors.
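To make that claim a bit more concrete, here is a minimal toy sketch (my own illustration, not one of Levin’s published models) of how even a tiny regulatory-style network can exhibit associative conditioning: pairing a “neutral” input with one that already drives a response gradually strengthens a connection until the neutral input alone evokes the response. All node names, weights, and learning rates here are hypothetical.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy three-node "regulatory" network (hypothetical values throughout):
# an unconditioned input US already drives the response R strongly;
# a neutral input CS initially does not.
w_us, w_cs, threshold, lr = 4.0, 0.0, 2.0, 0.5

def respond(us, cs):
    """Response strength given the two inputs."""
    return sigmoid(w_us * us + w_cs * cs - threshold)

print("CS alone, before pairing:", round(respond(us=0, cs=1), 2))  # weak (~0.12)

# Repeatedly pair CS with US; a Hebbian-style update strengthens w_cs
# whenever CS activity coincides with a response.
for _ in range(10):
    r = respond(us=1, cs=1)
    w_cs += lr * 1 * r

print("CS alone, after pairing: ", round(respond(us=0, cs=1), 2))  # strong (~0.9)
```

Nothing in this toy is “cognitive” in any rich sense, which is precisely the point of contention: at what scale of such adaptive loops does it become apt to speak of goals?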
Deacon here introduces the term “normative chemistry”—meaning chemistry where some outcomes are better or worse for a self-organizing system. Levin likens this to (bio)chemical processes that can be trained via reward and punishment. But Deacon also asks, very importantly: who is the beneficiary? Levin notes that even a paramecium can be trained, but Deacon emphasizes that normativity only becomes meaningful when there is a beneficiary, someone for whom outcomes can go better or worse.
Deacon then makes the very provocative claim, stemming from his study of Peirce, that in living systems, semiotic activity generates and maintains its own physicality. Meaning, in other words, is not some epiphenomenal ghost floating atop material processes: instead, at least in living systems, meaning makes matter. He notes that when researchers simultaneously measure EEG signals (neural electrical activity) and fMRI signals (blood flow and metabolic activity) during cognitive tasks, they find something surprising: metabolic changes often occur before there is measurable electrical activity in some brain regions. This is a challenge to the conventional view in computational neuroscience that neurons first generate action potentials that transmit information, and that metabolism follows merely as energetic support, replenishing resources afterwards. From Deacon’s point of view, metabolism is itself already semiotically active, playing a sign-generating and -interpreting role. It does not merely support information processing at the neural level, but participates in and even anticipates it. Meaning (semiotic differentiation) and matter (energetic flows) are not two separate layers; they are co-constructed. These findings present a severe challenge to computational models of the brain that treat neurons like logic gates.
Both Deacon and Levin affirm that absence can be causal—that possibilities shape physical outcomes. Levin discusses how cognitive systems can offload computation to their surroundings, letting environmental structures inform problem solving. Evolution ingresses form by exploiting “free lunches”: lawful relationships found in geometry, arithmetic, and computation structure biological development without needing to be genetically encoded or invented via natural selection.
Deacon expands this into his “lazy gene” theory (which I take to be a playful inversion of Dawkins’ “selfish gene”): evolution often exploits existing mathematical sources of “order for free” (Kauffman) rather than painstakingly inventing every new trait. Deacon gives the well-known example of the Fibonacci spiral found in plants, which is not computed by genes (“the genes aren’t doing the math”)—the geometry is already there, and biological systems simply tap into it.
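The point about “order for free” is easy to see in silico. The sketch below (my own illustration, not Deacon’s) places successive primordia at a fixed divergence angle near the golden angle, following the standard Vogel model of phyllotaxis; the familiar Fibonacci-numbered spirals fall out of this single geometric constraint, with no counting or “math” done by the developing system.

```python
import math

GOLDEN_ANGLE = math.pi * (3 - math.sqrt(5))  # ~137.5 degrees

def phyllotaxis(n_primordia, scale=1.0):
    """Vogel's model: each new primordium is rotated by the golden angle
    and pushed outward as the square root of its index."""
    points = []
    for k in range(n_primordia):
        r = scale * math.sqrt(k)
        theta = k * GOLDEN_ANGLE
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

# A few hundred points generated from one angular constant are enough to
# reveal the sunflower-head spiral pattern when plotted.
seeds = phyllotaxis(500)
print(seeds[:3])
```

The “program,” such as it is, amounts to a single constant; the spirals are a consequence of geometry, which is exactly the sense in which the lazy gene can free-ride on mathematical order.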
Decompressing Evolution
Deacon next unpacks the relevance of Shannon’s two ways of measuring information in terms of channel entropy and message entropy, respectively. Channel entropy measures the capacity of a physical medium to transmit information. It is a thermodynamic concept meant to quantify how many different states a system can be in, how much variation it can carry without noise overwhelming the message. Message entropy, in contrast, measures the informational content or meaning of a message transmitted through that channel. It is not about the raw capacity, but about how that capacity is constrained and semiotically shaped. For Deacon, the critical point is that meaning arises from constraint—not just the number of possibilities, but how they are constrained into coherent patterns that refer beyond themselves. Deacon connects Shannon’s two entropies to the way biological systems handle information both ontogenetically (via development) and phylogenetically (via evolution). First, in ontogeny, the organism has to work from a highly compressed description—the genome—which it decompresses into a fully differentiated living body. The genome does not and cannot specify every detail of development. Rather, development relies on environmental affordances, physical constraints, and emergent interactions among cells (e.g., Levin’s bioelectric fields). Ontogenesis is thus best understood as a semiotic decompression process. DNA is not a program but a compressed narrative awaiting the interpretive community provided by environmentally-embedded embryogenesis to find living expression.
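A quick numerical illustration of the distinction (mine, not Deacon’s): channel capacity corresponds to the entropy of an unconstrained ensemble of states, while realized messages carry less entropy precisely because constraints carve that ensemble down. The distributions below are hypothetical, chosen only to show how constraint lowers entropy relative to the channel’s maximum.

```python
import math

def entropy_bits(probabilities):
    """Shannon entropy H = -sum p * log2(p), in bits."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# A channel with 8 equally available states: maximum entropy, pure capacity.
channel = [1/8] * 8
print("channel (unconstrained):", entropy_bits(channel), "bits")  # 3.0 bits

# A source whose habits and constraints make some states far more likely:
# the realized messages carry less entropy than the channel could bear.
constrained_source = [0.6, 0.2, 0.1, 0.05, 0.02, 0.01, 0.01, 0.01]
print("constrained source:     ", round(entropy_bits(constrained_source), 2), "bits")
```

The gap between the two numbers is, roughly, the room that constraint has carved out of raw capacity, which is where Deacon locates the possibility of reference.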
Over evolutionary history, successful developmental trajectories—those embryological decompression processes that yield viable, adaptive organisms—are compressed into the genome. Whereas development acts as the decompressor, evolution acts as the compressor, distilling the memory of successful semiotic transactions (developmental pathways that “worked”) into heritable information. Thus, evolution is not primarily about selection of random genetic mutations. It is primarily about the accumulation and compression of developmental wisdom, etching successful semiosis into durable nucleic acid form.
In short, for Deacon, phylogenetic evolution follows from the compression of successful ontogenetic development. Evolution compresses and transmits the semiotic solutions discovered by generations of decompression experiments. Thus, compression comes second: life begins not with some RNA-world scenario, but with dynamic decompression processes generative of semiotic self-organization before stable genetic templating emerges.
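As a loose analogy for what “decompression” might mean here (a toy example of my own, not Deacon’s), consider an L-system: a very short rule set, standing in for the compressed genomic description, unfolds into an exponentially larger structure only when an expansion process, standing in for context-dependent development, is actually run.

```python
# Toy L-system: a compact "genotype" (axiom + rewrite rules) is decompressed
# by iterated rewriting into a much larger "phenotype" string.
AXIOM = "F"
RULES = {"F": "F[+F]F[-F]F"}  # a classic branching rule (hypothetical choice)

def develop(axiom, rules, generations):
    state = axiom
    for _ in range(generations):
        state = "".join(rules.get(symbol, symbol) for symbol in state)
    return state

compressed = AXIOM
decompressed = develop(AXIOM, RULES, generations=4)
print(len(compressed), "symbol ->", len(decompressed), "symbols")
```

The analogy is admittedly one-sided, since a real embryo’s “rules” are themselves renegotiated in context, but it makes vivid the asymmetry between the brevity of the heritable description and the richness of what development actually produces.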
Memory Materialized
When trying to understand memory, Levin claims that we have access only to the material traces or engrams of the past as they are inscribed in the present, not the past itself. But I think this neglects the deeper ontology of possibility. Following Whitehead and Bergson, the entire past is still present in potentia. It does not need to be stored materially; it is virtually active, woven into the present moment by virtue of its very absence. Memory is not material storage of traces, whether in genes or synaptic pathways, but our participation in a living continuum of possibility.
Levin’s account remains too tied to material engrams. Deacon edges closer to a process view, but even he could push further toward seeing memory as an intrinsic feature of becoming, not a passive record. In Matter and Memory (1896), Bergson lays out a post-materialist alternative to the still residually reductionist readings of Levin and Deacon. The engram approach spatializes time, in Bergson’s sense, flattening memory into a sequence of effects in extended matter. Rather than seeing the brain as a local storage device, Bergson inverts the common sense of materialist reductionism: the brain becomes a device for forgetting, for filtering out the plenum of the past. He takes the phenomenology of memory seriously, recognizing that consciousness is not frozen in an instantaneous present moment but lives in a durational field thick with past potentials.
I quote Bergson at length from the first pages of Creative Evolution (1907):
“…our duration is not merely one instant replacing another; if it were, there would never be anything but the present–no prolonging of the past into the actual, no evolution, no concrete duration. Duration is the continuous progress of the past which gnaws into the future and which swells as it advances. And as the past grows without ceasing, so also there is no limit to its preservation. Memory, as we have tried to prove, is not a faculty of putting away recollections in a drawer, or of inscribing them in a register. There is no register, no drawer; there is not even, properly speaking, a faculty, for a faculty works intermittently, when it will or when it can, whilst the piling up of the past upon the past goes on without relaxation. In reality, the past is preserved by itself, automatically. In its entirety, probably, it follows us at every instant; all that we have felt, thought and willed from our earliest infancy is there, leaning over the present which is about to join it, pressing against the portals of consciousness that would fain leave it outside. The cerebral mechanism is arranged just so as to drive back into the unconscious almost the whole of this past, and to admit beyond the threshold only that which can cast light on the present situation or further the action now being prepared-in short, only that which can give useful work. At the most, a few superfluous recollections may succeed in smuggling themselves through the half-open door. These memories, messengers from the unconscious, remind us of what we are dragging behind us unawares. But, even though we may have no distinct idea of it, we feel vaguely that our past remains present to us. What are we, in fact, what is our character, if not the condensation of the history that we have lived from our birth-nay, even before our birth, since we bring with us prenatal dispositions? Doubtless we think with only a small part of our past, but it is with our entire past, including the original bent of our soul, that we desire, will and act. Our past, then, as a whole, is made manifest to us in its impulse; it is felt in the form of tendency, although a small part of it only is known in the form of idea.
From this survival of the past it follows that consciousness cannot go through the same state twice. The circumstances may still be the same, but they will act no longer on the same person, since they find him at a new moment of his history. Our personality, which is being built up each instant with its accumulated experience, changes without ceasing. By changing, it prevents any state, although superficially identical with another, from ever repeating it in its very depth. That is why our duration is irreversible. We could not live over again a single moment, for we should have to begin by effacing the memory of all that had followed. Even could we erase this memory from our intellect, we could not from our will.”
Bias from the Bottom-up
Deacon made the important observation that hypotheses always begin with bias. There is no pure, bias-free reasoning. Every act of inquiry arises out of pre-existing patterns of expectation, or interpretive habits. Bayesian reasoning formalizes this. Bias is not a flaw in cognition; it is a necessary condition for cognition.
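The Bayesian point can be made explicit with a toy calculation (my own, with hypothetical numbers): the same evidence yields different posteriors depending on the prior, which is just what it means to say that bias is a condition of inference rather than a defect in it.

```python
def posterior(prior_h, likelihood_e_given_h, likelihood_e_given_not_h):
    """Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    numerator = likelihood_e_given_h * prior_h
    evidence = numerator + likelihood_e_given_not_h * (1 - prior_h)
    return numerator / evidence

# Three observers see the same ambiguous evidence (E is twice as likely if H
# is true) but start from different interpretive habits (priors).
for prior in (0.05, 0.5, 0.9):
    print(f"prior {prior:.2f} -> posterior {posterior(prior, 0.8, 0.4):.2f}")
```

No amount of evidence processing escapes the need for some prior; the only question is where the priors themselves come from.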
But this raises a deeper metaphysical question that Deacon did not fully pursue, though he gestured toward it: are these biases merely the products of historical accidents—fossils of contingency frozen into our nervous systems and societies—or are they signs of something deeper? Might they also reveal the presence of eternal objects and primordial aims, cosmic tendencies latent in the rhythm of becoming? I do not doubt that biases update in the course of development and evolution. But can these biological processes explain the origin of bias (or “graduated intensive relevance” in Whitehead’s sense) as such? Might biology be building on something more ontologically basic?
Levin also hints at this deeper possibility. His work shows that even in very simple biological networks, such as gene regulatory circuits or electrical fields in multicellular organisms, there is an inherent tendency toward coherence, toward repair, toward the generation of ordered wholes. Under conditions of perturbation unlikely to have been experienced by genetic ancestors, living systems do not merely fall apart: they strive to re-interpret and re-negotiate, often finding new paths to adaptivity. This does not seem accidental. It suggests that the universe is not simply a random walk through chemical space, but a field of potentiae curved by teloi—a landscape always already enfolded with possible attractors toward complexity and valuation.
If this is true, then bias is not simply historical inertia. Bias, in this deeper sense, would be the trace of the eternal within the temporal—a local expression of and iteration upon cosmic aims at work since the beginning of everything. Hypotheses would then be acts of participation in a universe that is already striving, from the bottom up, to realize higher orders of meaning.
Semiotic Negotiations
The semiotic dynamism of life becomes vividly clear in considering Levin’s description of his bioelectric manipulation experiments. When researchers in his lab alter the bioelectric gradients of developing embryos, they can induce radical transformations: frogs can be made to grow fully-formed eyes on their bellies. But these transformations are not inevitable. Sometimes, the surrounding cells resist. They seem to engage in a kind of debate, contesting whether the proposed new structure should be accepted or rejected.
In some cases, the manipulated cells succeed in recruiting their neighbors, and the ectopic eye forms. In others, the natural plan reasserts itself, the anomalous researcher-induced pattern is erased, and normal development proceeds. These outcomes are not random failures. They are signs of an ongoing, collective negotiation. The cells are not passive receivers of genetic or electrical instructions; they are active interpreters, weighing multiple semiotic cues, collaborating or resisting depending on their shared assessments.
Development is obviously not a linear or mechanical engineering project, where docile proteins and cells slavishly follow a genetic program. Rather, it is a collective semiotic negotiation, an emergent outcome of interpretive processes operative at multiple scales. The organism is not constructed like a building from a blueprint. It is the expression of an ongoing cellular conversation.
Deacon expressed concern that Levin’s emphasis on pre-patterned bioelectric fields risked sliding into a kind of preformationism—the idea that form is already fully determined from the start and merely unfolds deterministically. He even drew a passing critical comparison to Rupert Sheldrake’s speculative (but testable!) notion of morphic fields: subtle non-local fields of form that influence physiology and behavior. I’m not sure this is a fair criticism of Levin or of Sheldrake, even if I agree with Deacon that we must beware the temptation to a one-sided preformationism. For my part, I think we need to find a middle path between what used to be called epigenesis and preformation, since both positions address crucial aspects of the puzzle of living organization.

Levin responded with a somewhat subtler portrait of his approach. The bioelectric maps revealed in his experiments are not rigid blueprints. They are dynamic set-points, attractor states toward which development tends to move, but which remain flexible, revisable, and contestable. These set-points bias development toward certain outcomes without determining them completely. Bioelectric fields act as semiotic constraints: they shape the landscape of developmental possibilities without eliminating the organism’s interpretive freedom. The embryo is not a miniature adult awaiting inflation. It is an interpreter, navigating a landscape of affordances, holding the tension between stability and transformation.
…
There is much food for thought here! I continue to digest it all in an effort to articulate a philosophy of nature that is responsive to cutting-edge scientific findings without losing sight of the metaphysical presuppositions of science itself. My guiding thread continues to be the pursuit of a general theory of evolution that does justice to the fact that self-conscious agents exist to engage in such an inquiry. Said otherwise, I am after a participatory onto-epistemic account of the evolutionary process. We cannot explain evolution as a primarily material process, treating our own minds as an afterthought. We are part—or, perhaps, the microcosmic whole—of what we are trying to understand.
Great summary of the conversation. A couple of quibbles.
I don't agree with your aside about computational theories of representation being stuck in a binary thinking of symbol/referent. Modern computational theories of representation are functional, not semiotic, and do not begin with language but with perception—with the problem of how an embodied system makes sense of an environment in order to act. This involves both compression (finding operational statistical patterns in brute reality) and decompression (acting on these patterns) and resonates with your discussion of ontogeny.
Computational theories seek to explain how organisms, long before the emergence of language, construct usable internal models of their environments. Here representation captures the statistical regularities of an environment too complex or latent to be perceived directly. Crucially, they are not full encodings of the world, but compressed histories of interaction—"entropic prunings" or informational shortcuts shaped by the demands of relevance, memory, and survival. Representation becomes a way of reducing uncertainty just enough to act, and of managing complexity through selective loss or compression.
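For what it's worth, here is a minimal sketch of the kind of "compressed history of interaction" I have in mind (a toy of my own, not any specific model from the literature): instead of storing its sensory stream, the agent keeps a single running estimate, updated by prediction error, and acts on that estimate under uncertainty.

```python
import random

random.seed(0)

class MinimalAgent:
    """Keeps one number (a compressed summary of past observations)
    rather than the observations themselves."""
    def __init__(self, learning_rate=0.1):
        self.estimate = 0.0            # compressed internal model
        self.learning_rate = learning_rate

    def observe(self, sensation):
        error = sensation - self.estimate             # prediction error
        self.estimate += self.learning_rate * error   # update the compression

    def act(self, threshold=0.5):
        # Act on the model, not on the raw (discarded) history.
        return "approach" if self.estimate > threshold else "avoid"

agent = MinimalAgent()
for _ in range(200):                      # noisy environment with mean ~0.7
    agent.observe(0.7 + random.gauss(0, 0.3))

print(round(agent.estimate, 2), agent.act())   # ~0.7, "approach"
```

The point of the sketch is only that the representation is lossy by design: it is good enough to guide action, and nothing more.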
Representation, in this view, is not primarily a matter of linguistic signification, but of adaptive modeling: the capacity to compress sensory input and use this compressed signal to guide action in uncertain conditions. From this perspective, language is not the origin of representation but an extension of prelinguistic capacities for pattern recognition, prediction, and abstraction. Where semiotic theory often assumes language as the ground of meaning, computational accounts see language as a late-emerging, evolutionarily scaffolded tool—a highly structured compression of more basic compressions that allow for inference and interaction. Symbolic language, in this account, emerges *on top* of these prelinguistic cognitive representational architectures. Language is not the precondition of thought, but a tool for enhancing it: a late-evolving, socially shared representational system that externalizes and refines the brain’s capacity for simulation, compression, and abstraction. This reframing allows for a continuity between the representational capacities of simple organisms, nonverbal children, and fully linguistic adults, linking perception and action to cognition in a single framework that does not require prior access to symbolic systems. Language, then, does not *generate* representation but *reorganizes* it by introducing new constraints, affordances, and social dynamics, while still depending on the underlying capacity of agents to build and update internal models through engagement with the world.
So the computational framework treats signs not as signifiers defined by difference and deferral but as dynamic patterns that support prediction and action. Whether linguistic, sensory, or proprioceptive, these patterns allow agents (understood broadly as encompassing biological or artificial systems) to construct internal models of their environments, simulate possibilities, and guide behavior in real time. Representation here is active rather than passive, predictive rather than descriptive, situated rather than abstract. Where classical semiotics posits signs as autonomous systems into which biological agents must conform, computational models see signs as inseparable from the embodied and situated processes that generate them. They operationalize Merleau-Ponty's philosophy of the flesh.
Which leads me to your discussion on whether biology might be building on something more ontologically basic. Computational theories of representation are built on top of insights from information theory but they rarely deal with the metaphysical implications. I would argue that Claude Shannon’s information theory, properly understood, marks a paradigm shift as radical as the Copernican revolution—a shift we might call the *entropic turn*. By extending entropy from physical systems to any domain where uncertainty can be measured, including human language, Shannon collapses the old divide between world and mind. Here nature and culture, matter and meaning, are not separate realms but parallel manifestations of a universal entropic process. Entropy here is not a tally of substance but a measure of transformation, and shapes even the symbol-making thermodynamic eddies we call human beings. Where the scientific method posits a sharp boundary between observer and observed, the entropic turn folds them into the same universal drift: observation is simply a local pruning of uncertainty within the entropic flow that carries both knower and known in a universe of ceaseless becoming. What is ontologically basic in this model is that the second law of thermodynamics presupposes a Past Hypothesis, an asymmetric background that gives meaning to cause, memory, and goal. But that asymmetry cannot be derived from physical law. Which is why physicalism, by itself, can never explain intention.
The entropic turn has deep implications for how we understand representation. Semiotics has long emphasized that a sign’s relation to the world is not one of direct correspondence, but is arbitrary and defined by difference and deferral. Shannon’s theory suggests that what semiotics identifies as difference and deferral is, in fact, entropy—the same principle that governs physical systems. On this view, the dynamics shaping language and those shaping matter are not merely analogous but fundamentally the same. Where classical theories draw a sharp line between symbol and substrate, a so-called epistemic cut, an entropic model sees a gradient. Symbols are not ontologically distinct from physical processes, they do not emerge from a metaphysical rupture, but from the ongoing regulatory work of predictive systems trying to stay alive and replicate. The “cut,” if it exists at all, is not a boundary inscribed in nature but a product of compression, a line drawn by a system attempting to reduce uncertainty just enough to maintain coherence. Meaning does not arise in spite of the loss of direct experience to symbol, but because of it. Representation is not opposed to embodiment, it is its consequence.
These claims, taken together, reframe what it is we think we are doing when we make meaning, whether in the sciences or the humanities. These domains, often treated as distinct in method or aim, can be seen instead as parallel entropic responses: each a mode of compression that translates complexity into form under conditions of uncertainty. Science seeks structural and predictive clarity; the humanities foreground interpretation, ambiguity, and affect—but both arise from embodied systems adapting to a universe in constant flux.
Thanks Matt! An excellent summary and commentary. There's a lot to be said about this, but I'll limit to 2 quick things.
1 the Normative Chemistry approach is very important, and I will have something interesting on this out in the next 6 months or so. We've got some new analytical and experimental work on what actually happens at the very beginning (origin of life; until then: https://www.tandfonline.com/doi/full/10.1080/19420889.2025.2466017), and I think you'll like it. But even here I think it's all about the observer - what has to happen before an observer notices a system being an agent? You can be a pretty unsophisticated cognizer and notice complex life as an agent. To notice agency in minimal forms (e.g., simple chemical reactions, or sorting algorithms), you've got to work really hard (and we need significantly more science to do this well). But I think we've now caught a key part of what it takes for an agent to close that loop itself - the reification process where a system gets off the ground. I can tease it by saying it'll be the next level of this https://osf.io/preprints/osf/2bc4n_v1 (more at: https://thoughtforms.life/learning-to-be-how-learning-strengthens-the-emergent-nature-of-collective-intelligence-in-a-minimal-agent/). Stay tuned...
2 "Levin’s account remains too tied to material engrams." yep... Keep in mind the accounts I publish have to stick fairly close to what can be addressed experimentally. I try not to say anything that someone couldn't use right now to discover new things. I could say much more about that topic and I do not think the engrams are material in the end, but there's no point for me to pontificate on this until we have a way to make it actionable to the research community. We're getting there, and my accounts get closer to it each year, as we develop new data and new methods to back up such ideas. The recent Platonic space chapter (https://osf.io/preprints/psyarxiv/5g2xj_v3) gives a flavor of what's coming, eventually (I know you've seen it; the link is for others). It's only an early step.