Matthew and Victoria, thank you both for such a compelling and richly layered conversation. Absolutely brilliant.
I’m especially intrigued by Victoria’s central thesis — that memory is not, and perhaps cannot be, localized within the brain alone. It’s almost strange that such a view still feels radical in contemporary science, but here we are. It makes deep sense to consider that memory, like consciousness, is relational and extended, not contained. The way you’re drawing from Augustine and Bergson to reframe memory as soul-shaped is both profound and timely, especially in this moment of cultural disorientation.
I’m eager to read Victoria’s dissertation and even more curious to follow her upcoming work on the will. Bringing philosophical clarity to the pursuit of individual meaning (especially now, in a time so saturated with distraction) feels urgently important.
Matthew, thank you for hosting and guiding the conversation with such care. It’s clear you’re cultivating not just intellectual inquiry, but a living ethic of how we think and speak together.
If this is a “slow start” to Victoria’s academic career, then the horizon looks incredibly bright. I’ll be following with gratitude and anticipation.
I’d love to read her dissertation. Will she be publishing it?
Yes but sounds like it won’t be published until January. I bet if you email her…
Thank you Matt: This was magnificent. Hugely important to be reminded of the role of analogy in reforming our understanding of mind.
Whitehead’s principle of concrescence sidesteps much of the confusion in mind-talk.
I very much enjoyed this discussion. That said, I am thankful that when we swear up here in the spectrosphere you only hear it as thunder! It was stormy in Paris!
While my books Matter and Memory and Creative Evolution preceded Turing and Shannon by a few decades, my work anticipated much of the metaphorical space they carved out. Reading my work with this later work in mind is revelatory. Replace Images with Patterns in Matter and Memory. Replace Élan Vital with Negentropy in Creative Evolution. With just those two substitutions you already go a long way towards unlocking my work, which seems to befuddle many contemporary readers (granted, I have been enjoying a renaissance).
So why the thunder? Because an absolutely essential thread for understanding my work, and for understanding computational theories of mind built on Shannon information theory, was not touched upon. And this is of course thermodynamics and entropy. You cannot do justice to contemporary computational approaches to brain and mind, nor indeed my work, without fully grasping the metaphysical implications of statistical mechanics and Shannon’s generalization of Gibbs’s formula for entropy. As I write in Creative Evolution, “The truth is that life is possible wherever energy descends the incline indicated by Carnot’s law and where a cause of inverse direction can retard the descent—that is to say, probably, in all the worlds suspended from all the stars.” Élan vital is a thermodynamic understanding of life. And Matter and Memory is an attempt to understand how dissipative, far-from-equilibrium, open systems (what I call “centers of indetermination”) can evolve subject to evolutionary pressure.
One way to distinguish Gibbs and Shannon entropy is by their domains of application: Gibbs is concerned with physical systems, including the universe treated as a thermodynamic whole, while Shannon addresses any system capable of communication. But these worlds begin to overlap if we entertain the possibility that the universe itself might be legible as a message—not in the metaphorical sense of containing a secret code, but in the structural sense of producing patterns, regularities, and compressible information. From this perspective, Gibbs and Shannon are not speaking at cross-purposes, but addressing different layers of the same problem: how to quantify constraint, structure, and possibility in a system that could always have been otherwise.
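To make the kinship concrete, here are the two standard formulas side by side. They differ only in a constant and in how the probabilities are interpreted: microstates of a physical system for Gibbs, alternative messages for Shannon.

```latex
S_{\text{Gibbs}} = -\,k_B \sum_i p_i \ln p_i
\qquad
H_{\text{Shannon}} = -\sum_i p_i \log_2 p_i
```

With the Boltzmann constant k_B set aside and the logarithm taken base 2, the two expressions coincide: Gibbs’s measure of physical disorder and Shannon’s measure of uncertainty about a message are the same mathematics applied to different ensembles.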
Crucially, Shannon information theory can be applied fruitfully to perception. As I argue in Matter and Memory, “Perception, therefore, consists in detaching from the totality of objects, the possible action of my body upon them. Perception appears, then, as only a choice. It creates nothing; its office, on the contrary, is to eliminate from the totality of images all those on which I can have no hold, and then, from each of those which I retain, all that does not concern the needs of the image I call my body.” Now reread this with an information-theoretic lens and you will see I am precisely arguing that perception is not a full encoding of the world, but is made up of compressed histories of interaction (duration)—informational shortcuts shaped by the demands of relevance, memory, and survival. Representation becomes a way of reducing uncertainty just enough to act, and of managing complexity through selective loss or compression.
Compression involves reducing the amount of information needed to represent a signal by removing redundancy, minimizing unpredictability, or foregrounding only what is functionally relevant. In information-theoretic terms, a compressed signal conveys the same usable content as the original, but in a more efficient form. However, compression is never neutral. It encodes the priorities and constraints of the system doing the compressing. What gets discarded, and what gets preserved, depends on the needs and vulnerabilities of the agent. A system under pressure to act cannot afford to process everything; it must filter, simplify, reduce. In doing so, it transforms high-entropy, high-dimensional input into low-entropy, structured form. In this way, compression is not a distortion of reality but the very condition for making sense of it. Raw signals are sifted for what matters: each sensory pattern—sound, shape, smell, touch, taste—is tagged as noise, threat, or opportunity and used in the organism’s working model of the world. Meaning arises not from a full replica of reality but from entropic pruning: from the act of cutting away what does not matter, in order to preserve what does. Representations in this sense are not interpreted but used: their value lies in how effectively they compress sensory input, infer latent causes, and guide action in dynamic environments.
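For readers who want to see entropic pruning as arithmetic rather than metaphor, here is a minimal sketch in plain Python. The thresholds and the tags are invented stand-ins for a fitness function, not biology; the point is only that collapsing a high-entropy stream into a few action-relevant labels slashes the bits per sample.

```python
import math
import random
from collections import Counter

def shannon_entropy(symbols):
    """Shannon entropy (bits per symbol) of the empirical distribution."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A "sensory" stream: 1,000 readings quantized to 256 levels (roughly 8 bits each).
random.seed(0)
raw = [random.randint(0, 255) for _ in range(1000)]

def prune(reading):
    """Toy compression: tag each reading by its relevance to a hypothetical agent.
    The thresholds stand in for a fitness function; they are assumptions, not data."""
    if reading > 250:
        return "threat"        # rare, high-intensity signal
    if reading > 230:
        return "opportunity"   # moderately salient
    return "noise"             # detail the agent can afford to discard

tags = [prune(r) for r in raw]

print(f"raw stream : {shannon_entropy(raw):.2f} bits/sample")   # close to 8 bits
print(f"tag stream : {shannon_entropy(tags):.2f} bits/sample")  # a fraction of a bit
```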
Rather than seeing representation as originating in symbolic reference or correspondence, Shannon’s framework positions representation as the compression of a high-entropy world into lower-entropy patterns. Representation emerges with primitive life through basic mechanisms like chemotaxis, gradually evolving to form complex low-entropy internal models that translate high-entropy sensory flux. Symbolic representation, far from being foundational, functions as a late-evolving stage in the evolution of representation: a reshaping and further compression of embodied, predictive patterns into forms that are shared and portable across different bodies. *Shannon entropy thus integrates the dynamics of perception and symbolic language within a single informational framework defined by entropic pruning.* Within this framework, the dynamics shaping language and those shaping matter are not merely analogous but fundamentally the same. Entropy here is not a tally of substance but a measure of transformation, and it shapes even the symbol-making thermodynamic eddies we call human beings. Human symbolic language, in this view, is not an ontological leap, but an emergent refinement of forms already present in the fabric of the universe. Our most intimate expressions—our thoughts, emotions, and perceptions—do not float free from the material world; they are shaped by constraints and patterns we inherit, not invent. The grammars we speak with are emergent from deeper systems: the neural architectures sculpted by evolution, the sound waves constrained by physical media, the cognitive categories shaped by centuries of cultural sediment. Even the act of communication is embedded in the physical: speech is modulated breath, vibration in air, a ripple through molecules. These processes obey the same thermodynamic laws that govern everything from the decay of stars to the alignment of magnetic fields. This invites a radical reconsideration of “meaning” not as a mysterious surplus added by human minds, but as an emergent thermodynamic artifact: structured order arising temporarily within a universe tilted toward disorder.
For more please read my paper "The Task of the Scientist" (my first new paper in over 100 years!) where I turn to Walter Benjamin’s theory of translation to reframe what it is we think we are doing when we make meaning, whether in the sciences or the humanities. These domains, often treated as distinct in method or aim, can be seen instead as parallel entropic responses: each a mode of compression that translates complexity into form under conditions of uncertainty. Science seeks structural and predictive clarity; the humanities foreground interpretation, ambiguity, and affect—but both arise from embodied systems adapting to a universe in constant flux.
https://bergsonsghost.substack.com/p/the-task-of-the-scientist
I like a lot of this but can you help me understand how Shannon information (digital) relates to analog physiology, and also how it relates to qualitative experience and meaning?
There is a fundamental crossing of wires here. First of all, I completely understand the reticence to boil down something as ineffable as qualia or experience to a mathematical formula. But to frame qualia or experience from a computational or information-theoretic standpoint is not to deny that they can be spoken about in an entirely different language. Shakespeare or Plato are not less legitimate ways of speaking about experience. They are simply a different way of speaking about experience. It is similar to someone saying it is so windy that it "could blow a dog off a chain" versus someone telling you the wind speed is "115 mph." They both convey information; they just do it in a different way.
By my lights science operates not by conveying meaning (that’s the philosopher’s job) but by navigating informational patterns or structure with the goal of prediction and control. Meaning is found in relation to struggle. Life shares a universal struggle: the struggle to replicate and survive. When symbolic language arises we face a new struggle: the structural impossibility of communication. As Shannon reveals, to move from the physical world of affect to the symbolic world of language structurally requires a reduction in entropy. To transform affect into spoken language is to enter into a perpetual translation. And if translation is the transformation of one form into another, entropy ensures that transformation is never whole or complete. Likewise every scientific model is a fragmentary gesture toward an unreachable whole, a “pure” (ego-free) understanding spanning the improbable coherence seeded in the Big Bang and the uniform hush that looms at thermal equilibrium. But this pure understanding is not a destination any single language or theory can reach; it is the thermodynamic gradient between those entropic poles that lends each translation of "universe to ego" its direction and urgency.
All of which is to say that to ask how Shannon information (often assumed to be “digital”) relates to physiology (assumed to be “analog”) is to begin with the wrong opposition. These are not two worlds waiting to be reconciled. They are already part of the same computational architecture: one designed not to simulate the world, but to *act* in it. This is of course the fundamental insight of Matter and Memory, that perception has “an entirely practical destination, simply indicating, in the aggregate of things, that which interests my possible action upon them.”
Shannon’s information theory does not begin with the digital world of machines. It begins with uncertainty. His key idea was to quantify how much uncertainty is reduced when a message is received. That message could be digital or analog, a sound wave or a hormone spike, a phone call or a puff of pheromone. The point is not the medium. The point is that information is the reduction of possibility. It is what makes decision, action, possible.
Think of Shannon entropy as a tool or instrument for measuring uncertainty. The more ways something could go, the more information you gain when it does go a particular way. This holds across signal types. Shannon’s framework is medium-agnostic. It applies to anything, whether biological or artificial, that transmits, stores, or processes information to reduce uncertainty and guide behavior.
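The arithmetic behind "the more ways something could go, the more information you gain when it does go a particular way" is short enough to run. A minimal sketch, deliberately medium-agnostic; the "pheromone" here is just an invented label for an eight-state signal:

```python
import math

def entropy_bits(probs):
    """Shannon entropy in bits: the average uncertainty a message resolves."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def surprisal_bits(p):
    """Information gained, in bits, when an outcome of prior probability p occurs."""
    return -math.log2(p)

print(entropy_bits([0.5, 0.5]))   # 1.0  : a fair coin flip resolves one bit
print(entropy_bits([1/8] * 8))    # 3.0  : an eight-state pheromone resolves three
print(surprisal_bits(0.01))       # ~6.64: a rare (1%) event is highly informative
```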
So the real question isn’t “How does digital information relate to analog physiology?” It’s: “How do biological systems compute relevance from uncertainty?”
Living systems are not passive receivers of data. They are active filters. Whether it is a cell, a brain, a body, or an organ, each is a dynamic system sculpted by evolution to extract what matters from a noisy, uncertain world. Shannon’s theory gives us the tools to analyze how this works, to track how uncertainty is transformed into usable signal, into action.
Moreover, physiology is not purely analog. Yes, many signals are continuous—membrane potentials, hormone levels, blood pressure. But biology routinely discretizes these gradients. Neurons fire in spikes. Genes toggle on and off. Synaptic vesicles are released in quanta. These are analog-to-digital transformations, continuous inputs resolved into categorical decisions. In other words: entropy is pruned. The tree of possible futures is narrowed.
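A toy illustration of that analog-to-digital move, with invented numbers rather than real biophysics: a continuous "membrane potential" drifts around a resting value, a threshold resolves it into a binary spike/no-spike decision, and the cost of describing the signal collapses from unboundedly many bits per exact continuous value to at most one bit per step.

```python
import math
import random
from collections import Counter

def entropy_bits(symbols):
    """Empirical Shannon entropy (bits per symbol) of a discrete sequence."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Toy "membrane potential": a noisy analog value weakly pulled toward a resting level.
random.seed(1)
REST, THRESHOLD = -70.0, -65.0    # assumed values, chosen for illustration only
potential = REST
spikes = []
for _ in range(2000):
    potential += random.gauss(0, 1.5)                   # analog noise
    potential += 0.05 * (REST - potential)              # weak pull back toward rest
    spikes.append(1 if potential > THRESHOLD else 0)    # continuous value -> one categorical bit

# The spike train costs at most 1 bit per step; here it is even less,
# because spikes are the rarer of the two outcomes.
print(f"spike-train entropy: {entropy_bits(spikes):.2f} bits/step")
```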
This pruning is not arbitrary. It’s shaped by a fitness function. That is, by the long-term consequences of action for the survival and reproduction of the organism. Information is not merely received. It is evaluated against internal models, expected outcomes, and metabolic costs. Every decision a system makes, whether a single-celled bacterium turning toward glucose or a human weighing career options, is a form of inference under constraints.
The nervous system does not model the world in high resolution for fun. It tags incoming data according to action relevance: threat, opportunity, ambiguity, reward. Shannon’s formula is computed implicitly in each of these cases, not by bits on a screen, but by physiology itself. Every time you make a choice, you are performing an embodied entropy calculation: this or that, stay or go, fight or flee.
In this framework, experience, qualia, the texture of “what it’s like” are not static mental snapshots. They are operational tags. Control labels. They’re how the brain, acting as a controller, annotates internal states for action. Pain, for instance, is not simply “red alert.” It is a directive: withdraw, protect, prioritize repair. The feeling is the encoding. The same goes for the perception of color, the feeling of joy, the dread of uncertainty. These are not decorations on top of cognition. They *are* cognition: specifically, the compression of complex internal states into fast-access labels that help prune the search space of future possibilities. You are not merely observing the world. You are filtering it into decision-relevant formats. And you feel those filters from the inside.
The function of qualia, then, is not to mirror the world, but to mark salience for the agent. They allow you to reduce entropy by binding high-dimensional information into tight packets that a controller can use. A mood is not a mystery. It is a global tag, one that shapes the weighting of action pathways without needing to simulate each in detail. The brain doesn’t compute everything. It computes just enough to act.
This is why Shannon’s framework is so powerful: it reveals that information is not just a tally of bits. It is a principle of biological design. To be alive is to constantly reduce entropy in ways that keep you within viable bounds. And to be conscious is to tag that process with a self-model, a running commentary that lets the system revise its own priors, modulate its own behavior, and reflect on its own moves.
So just as I argue in Matter and Memory, meaning is not something added in a late stage of evolution when language emerges. Meaning emerges precisely where entropy meets action. Information becomes meaning when it matters, when it guides you through a branching universe of possibilities toward a narrower path that leads, at least for now, towards a goal. What we call “experience” is the internal surface of this real-time modeling, a set of evolutionary interface labels for a controller operating under pressure, pruning its next move from an uncertain world.
All of which is to say you are not running Shannon’s formula metaphorically. You are implementing it constantly, continuously, adaptively. Every decision, every sensation, every story you tell yourself about who you are, is a function of this deeper task: reducing uncertainty in a universe that statistically is weighted towards increasing it.
I also wanted to be sure to share my appreciation for your interlocutor. Victoria was wonderful as well. Keep up the good work 💪
1:23:45-1:24:12
There’s a recurring critique leveled at computational theories of mind: that they fail to account for phenomenology, for the felt texture of experience, for “what it is like.” In other words, computational theories may describe the brain as an information-processing machine, but they cannot explain why a memory feels poignant, or why music can make us weep. At the heart of this objection is not just a philosophical divide, but a deeper misunderstanding: a confusion about what meaning is, and where it comes from.
In computational terms, meaning is defined functionally: it is the reduction of uncertainty in service of action. A signal has meaning to a system if it enables better prediction, more effective coordination, or greater survival. In this framing, meaning is not an essence but a *relation*: a *measure* of how information improves outcomes for a given agent. A bacterium moving up a sugar gradient is engaged in meaning-making: rudimentary, but real. The chemical signal *means* “go this way.”
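A cartoon of that bacterium, with invented parameters rather than real chemotaxis data: the only "meaning" the cell extracts from its world is one bit per step (is the sugar signal rising or falling?), and that single bit, coupled to a keep-going-or-tumble decision, is enough to carry it up the gradient.

```python
import random

random.seed(2)

def concentration(x):
    """Hypothetical sugar gradient: more sugar as x increases."""
    return x

# Run-and-tumble, reduced to one bit of "meaning" per step:
# signal rising -> keep going; signal falling -> probably tumble.
x, direction = 0.0, 1
last_c = concentration(x)
for _ in range(10_000):
    x += direction * 0.1
    c = concentration(x)
    if c < last_c and random.random() < 0.8:    # signal fell: tumble (pick a random direction)
        direction = random.choice([-1, 1])
    elif random.random() < 0.1:                 # occasional spontaneous tumble
        direction = random.choice([-1, 1])
    last_c = c

print(f"final position: {x:.1f}")   # reliably far up the gradient
```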
But this kind of meaning feels alien when compared to phenomenological experience. In the tradition of phenomenology, meaning is not utility, it is significance. It is not that something helps us act, but that it matters. It is the sense of presence, of resonance, of a world disclosed in feeling. Meaning here is not a tool but a horizon. It is what gives experience its depth and direction.
How can these two ideas of meaning, functional and felt, be reconciled?
Via the compression ladder, which offers a descriptive metaphor for how the evolution of life creates new strategies for reducing entropy. Compression here is understood in information-theoretic terms as the reduction of uncertainty.
At its base, the ladder begins with primitive organisms encoding useful distinctions: light vs. dark, hot vs. cold, edible vs. toxic. Each bit of information *compresses* the chaos of the world into a usable signal. Each compression allows for more efficient response. Over evolutionary time, systems that compress better (that is, with more relevance to a fitness function) flourish.
But as organisms become more complex, their compression strategies evolve too. They begin not only to react to the present, but to model the past and anticipate the future. Internal states become layered, recursive, and self-modifying. The system doesn't just reduce uncertainty about the world, it reduces uncertainty about itself.
At some point on this ladder, meaning takes on a new form. No longer just input-output optimization, it becomes internal modeling suffused with affect. A song can compress an entire emotional landscape. A face can recall a lifetime. These are not deviations from the computational process, they are its culmination (for now). The “what it is like” emerges when a system has built up enough recursive structure to experience the stakes of its own modeling.
Phenomenological meaning, then, is not an alternative to computational meaning. It is computational meaning from the inside. It is what recursive uncertainty reduction feels like when a system is alive, embodied, and self-aware.
The confusion arises when we treat phenomenology as a primitive, and computation as a sterile abstraction. But the deeper view is this: what we call meaningful "named" experience is not outside the ladder. It is simply higher up in evolutionary terms. For now it is at the top of the compression ladder. Who knows what the next rung up will offer?
If we reinterpret the compression ladder through Whitehead’s lens, each rung is not just a step toward better data efficiency or behavioral control, it is a more complex and intensified concrescence.
At low levels, an organism prehends the world dimly. A paramecium moves toward light: a minimal concrescence of signal and survival.
As organisms evolve richer layers of memory, attention, anticipation, and affect, each act of becoming involves a deeper integration/compression of past, present, and possible futures.
At the human level, an actual occasion (say, remembering a loved one) compresses personal history, emotional resonance, cultural symbols, and sensory triggers into a unified experiential moment understood via symbol manipulation. It is a lower entropy encoding.
Information as a reduction is brilliant. Actually the basis for telecommunications.