This post is Part 1 of a series.
In an earlier post, I explored how meaning might arise in a physical, meaningless universe—drawing in part on physicist Carlo Rovelli’s relational account, which treats meaning as emerging when physical correlations acquire evolutionary significance.[1] But that post left largely unexplored how this actually happens in brains. How do electrical signals come to be about something? How does significance arise from circuitry?
This four-part series explores how the brain generates meaning, tracing how meaning emerges in living systems—from biological value and goal-directedness (Part 1), through the neural representations that guide action (Part 2), to shared symbols grounded in social cognition (Part 3), and finally to the cultural institutions and personal narratives that give meaning its richest human forms (Part 4).
The Gap Between Pattern and Purpose
Physical systems exhibit patterns—molecular arrangements, light wavelengths, temperature distributions, etc.—that we can describe in informational terms.
Claude Shannon’s information theory, developed in the 1940s for telecommunications, formalizes informational description by treating unpredictability as the measure of a signal. Predictable patterns (like “AAAAA”) contain little Shannon information because you already know what’s coming. Random patterns (like “XQJKZPM”) contain maximal Shannon information because every letter is unpredictable. Yet random strings mean nothing—they carry no semantic content. Shannon information says nothing about meaning.[2]
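To see the point numerically, here is a minimal Python sketch (my illustration; only the standard entropy formula comes from Shannon) estimating bits per symbol from letter frequencies:

```python
from collections import Counter
from math import log2

def shannon_entropy(text: str) -> float:
    """Estimate average bits per symbol from symbol frequencies."""
    counts = Counter(text)
    n = len(text)
    # H = sum over symbols of p * log2(1/p)
    return sum((c / n) * log2(n / c) for c in counts.values())

print(shannon_entropy("AAAAA"))    # 0.0 bits per letter: fully predictable
print(shannon_entropy("XQJKZPM"))  # ~2.81 bits per letter: every letter distinct
```

The random string scores far higher despite carrying no semantic content: Shannon’s measure tracks surprise, not significance.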
But meaning clearly exists for organisms with brains. A scent can signal food or danger to an animal. The brain’s representation of that scent is about something in the world. Philosophers refer to this property as “aboutness,” or intentionality. It arises when living systems register environmental patterns in relation to their own needs, capacities, and stakes in survival.
Meaning Is Fundamentally Relational
Meaning, however, exists not in neural patterns alone but in the relationships among those patterns, the organism’s evolutionary history, its current goals, and the environment it navigates. A pattern of neural firing becomes meaningful through how it was shaped by natural selection, how it’s been tuned by the organism’s individual learning, and how it’s currently being used to guide behavior.
Consider place cells in a mouse’s hippocampus. When the mouse occupies a specific location, particular neurons fire. That pattern represents location because evolution favored spatial tracking, learning refined it through experience, and downstream circuits use it to guide navigation.
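As a toy illustration (my own sketch, not a model from the cited work; the 20 Hz peak and 10 cm field width are arbitrary numbers), a place cell is often idealized as a bell-shaped tuning curve whose firing rate peaks at the cell’s preferred location:

```python
import math

def place_cell_rate(position_m: float, preferred_m: float,
                    width_m: float = 0.1, peak_hz: float = 20.0) -> float:
    """Idealized place cell: firing rate peaks at the preferred
    location and falls off as a Gaussian with distance."""
    return peak_hz * math.exp(-((position_m - preferred_m) ** 2)
                              / (2 * width_m ** 2))

# A cell tuned to the middle of a 1-meter track:
for pos in (0.3, 0.4, 0.5, 0.6, 0.7):
    print(f"{pos:.1f} m -> {place_cell_rate(pos, preferred_m=0.5):6.2f} Hz")
```

Notice that nothing in this function is about anything; it is just arithmetic.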
The meaning isn’t in the firing pattern itself but in its web of functional and evolutionary relations.[3] But this raises a deeper question: What makes these relations matter for the organism in the first place?
Value: The Missing Ingredient
Living systems must maintain themselves against thermodynamic decay. This creates intrinsic goals (that is, biologically grounded needs and action tendencies): to maintain viability and reproduce. As the neuroanthropologist Terrence Deacon argues, this organizational vulnerability gives rise to genuine teleology: Systems that can fail have goals, and goals create value.[4]
This is where semiotics, the study of signs and how they acquire meaning, becomes useful: It distinguishes correlations that merely occur from those that function as signs for an organism. Signs, in this sense, are correlations that an organism interprets and uses relative to its goals.[5]
From Directive to Descriptive
Early in evolution, meaning-bearing signs take the form of simple biological signals—internal states that primarily control action rather than describing the world. When a bacterium detects a toxin, the internal signal doesn’t represent “Dangerous chemical X is present.” It functions, in effect, as “Move!”—a pragmatic control signal in neuroscientist and geneticist Kevin Mitchell’s sense, guiding behavior directly rather than encoding an explicit description of the world.[3]
But as nervous systems evolved to process long-range senses like vision, something changed: Directive signals were increasingly supplemented by descriptive models of the world. You can’t directly detect objects—only photons striking the retina—so additional processing evolved to infer objects from light patterns. This produced internal representations of world states rather than mere action commands. Crucially, these representations were decoupled from obligatory action and could be held “in mind,” compared, and evaluated before guiding behavior.
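The contrast can be caricatured in a few lines of Python (a loose sketch under my own framing, not code from Mitchell’s paper): the directive signal simply is an action command, while the descriptive system first infers a small world model that can be held and evaluated before acting.

```python
# Directive: the internal signal functions as an action command ("Move!"),
# not a description of the world.
def bacterium_step(toxin_detected: bool) -> str:
    return "move!" if toxin_detected else "keep swimming"

# Descriptive: perception infers a world model that is held and
# evaluated before any action is selected.
def animal_step(smells_food: bool, sees_predator: bool) -> str:
    world_model = {"food_nearby": smells_food, "predator_nearby": sees_predator}
    # Decoupled from obligatory action: the model can be weighed first.
    if world_model["predator_nearby"]:
        return "withdraw"
    return "approach" if world_model["food_nearby"] else "explore"

print(bacterium_step(True))      # move!
print(animal_step(True, False))  # approach
```

The second function is still trivial, but the architectural difference is the one that matters: a represented world state now sits between sensing and acting.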
Predictive Brains and Valued Predictions
Rather than passively receiving input, brains continuously generate expectations shaped by prior experience and goals, updating them when predictions err.[6] When your visual cortex represents an apple, that representation is meaningful because it predicts features relevant to eating and action—sweetness, texture, and graspability. These predictions aren’t neutral; they’re saturated with value, and the brain doesn’t predict all features equally. Prediction errors drive learning because they signal that something relevant to action went differently than expected[7]—for example, when an apple that looks ripe turns out to be sour or inedible.
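One standard way to cash out “prediction errors drive learning” is a delta-rule update (a textbook idealization, not the specific models in the cited work; the learning rate of 0.3 is arbitrary): the prediction shifts toward the outcome in proportion to the error.

```python
def update_prediction(predicted: float, observed: float,
                      learning_rate: float = 0.3) -> float:
    """Delta rule: shift the prediction a fraction of the way
    toward the observed outcome."""
    error = observed - predicted  # the prediction error
    return predicted + learning_rate * error

# An apple looks ripe (expected sweetness 0.9) but tastes sour (0.1):
expectation = 0.9
for bite in range(1, 5):
    expectation = update_prediction(expectation, observed=0.1)
    print(f"after bite {bite}: expected sweetness = {expectation:.2f}")
```

Each surprising bite revises the value-laden expectation that guides the next encounter with the fruit bowl.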
Artificial Intelligence and Meaning
Could any sufficiently complex computer generate meaning by instantiating patterns and predictions like those found in brains? Not as computers are currently designed. Computers can instantiate patterns and predictions, but meaning emerges only in systems with intrinsic goals—systems for which outcomes genuinely matter. When a chess program evaluates positions, nothing matters to the program itself. When a brain generates prediction errors, something genuinely matters: The organism is navigating toward self-maintenance and reproduction, ends that are inherent in its organization as a living system.[3,4] Whether artificial systems could develop genuine meaning for themselves remains an open question, but it would require them to have stakes in their own continued existence.[8]
Consciousness and the Evaluation of Meaning
Neurobiologist Simona Ginsburg and evolutionary biologist Eva Jablonka propose a key evolutionary threshold: Unlimited Associative Learning (UAL), the ability to form flexible compound associations between arbitrary stimuli and value outcomes and to use these associations across contexts. This allows an organism to hold multiple representations “in mind” simultaneously, compare them, and choose among them based on learned values.[5]
Before the evolution of UAL, organisms tend to respond to stimuli reflexively. After UAL, they can evaluate alternative responses before acting. This transforms the adaptive landscape.
Consider a pre-UAL animal encountering food near a predator. Fixed responses dominate: Approach food, flee predator. But with UAL, the animal can represent both possibilities, weigh relative values, and choose. Representations become objects of evaluation.
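A toy contrast (my sketch, not Ginsburg and Jablonka’s formalism; the value numbers are invented) makes the difference vivid: the pre-UAL animal’s response is fixed by the triggering stimulus, while the UAL animal holds the alternatives together and picks by learned value.

```python
def pre_ual_response(stimulus: str) -> str:
    # Fixed stimulus-response mapping: no evaluation step.
    reflexes = {"food": "approach", "predator": "flee"}
    return reflexes[stimulus]

def ual_response(learned_values: dict[str, float]) -> str:
    # Hold the alternatives "in mind" and choose by learned value.
    return max(learned_values, key=learned_values.get)

print(pre_ual_response("predator"))                                # flee
print(ual_response({"approach food": 0.4, "flee to cover": 0.9}))  # flee to cover
```

Here learned_values stands in for value associations acquired through experience; in a real animal, these would themselves be learned and context-dependent.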
Before UAL, organisms’ responses show no clear evidence of felt experience. After UAL, behavior suggests conscious awareness. Ginsburg, Jablonka, and their coauthor, philosopher of biology Jonathan Birch, argue that once animals can flexibly learn and compare options, their internal states are no longer just control signals—they feel like something. Consciousness isn’t something added later—it emerges with UAL itself. The functional processes that enable flexible learning don’t just correlate with consciousness; they constitute it.[5]
How this works mechanistically remains incompletely understood. Some theorists suggest consciousness emerges when meaning becomes recursive (thinking about thinking): when organisms can represent their own representations, creating what’s been called a “global workspace” where representations can be compared and unified.[9]
How distributed neural processes create unified subjective experience is likewise an open question. What’s clear is that for conscious organisms like us, meaning is always experienced, not just enacted. Consciousness may be what goal-directed interpretation and evaluation feel like from the inside.
Evolutionary Transitions in Meaning
Jablonka and Ginsburg identify major evolutionary transitions in how goals, values, and meaning operate:
- Nonconscious to conscious: The emergence of UAL enabled flexible learning, evaluative comparison, and subjective experience.
- Nonlinguistic to linguistic: The emergence of symbolic cognition allowed meanings to be shared, preserved, and transformed across generations.
Each transition introduced new forms of goals and values, reshaping the targets and dynamics of selection. The transition from nonconscious to conscious processing—the shift from neural to mental—is particularly consequential: Once organisms could consciously evaluate competing representations, selection began to operate not only on behavior but also on representations themselves—what Jablonka and Ginsburg call mental selection.[10]
The Trajectory of Meaning
We’ve now traced meaning from its origin in goal-directed life to its emergence as something organisms can consciously evaluate.
In Part 2, we’ll examine how neural circuits give rise to semantic content and support the flexible use of meaning in perception, thought, and action.
References
1. Ralph Lewis, “In a Meaningless Universe, Where Does Meaning Come From?,” Psychology Today, March 9, 2023, https://www.psychologytoday.com/us/blog/finding-purpose/202303/in-a-mea…; Carlo Rovelli, “Meaning = Information + Evolution,” in Wandering Towards a Goal: How Can Mindless Matter Become Purposeful?, ed. Adam Frank, Marcelo Gleiser, and Evan Thompson (Cham, Switzerland: Springer, 2018), 17–27.
2. Claude E. Shannon, “A Mathematical Theory of Communication,” Bell System Technical Journal 27 (1948): 379–423, http://dx.doi.org/10.1002/j.1538-7305.1948.tb01338.x.
3. Kevin J. Mitchell, “The Origins of Meaning: From Pragmatic Control Signals to Semantic Representations,” preprint, PsyArXiv, 2023, https://osf.io/preprints/psyarxiv/dfkrv_v1.
4. Terrence W. Deacon, Incomplete Nature: How Mind Emerged From Matter (New York: W. W. Norton & Company, 2011).
5. Jonathan Birch, Simona Ginsburg, and Eva Jablonka, “Unlimited Associative Learning and the Origins of Consciousness: A Primer and Some Predictions,” Biology & Philosophy 35 (2020): article 56, https://doi.org/10.1007/s10539-020-09772-0; Eva Jablonka and Simona Ginsburg, “Learning and the Evolution of Conscious Agents,” Biosemiotics 15 (2022): 401–437, https://doi.org/10.1007/s12304-022-09501-y.
6. Andy Clark, “Whatever Next? Predictive Brains, Situated Agents, and the Future of Cognitive Science,” Behavioral and Brain Sciences 36, no. 3 (2013): 181–204, https://doi.org/10.1017/S0140525X12000477.
7. Anil K. Seth, Being You: A New Science of Consciousness (New York: Dutton, 2021); Karl Friston, “The Free-Energy Principle: A Unified Brain Theory?,” Nature Reviews Neuroscience 11, no. 2 (2010): 127–138.
8. There are many ways in which artificial intelligence (AI) systems fall short of human cognition. Acutely mindful of this, the author cautiously used AI tools in developing this blog series for research support, idea generation, and assistance with phrasing and clarity, but all analysis, arguments, and interpretations are the author’s own, and the final prose reflects the author’s voice and expertise.
9. Stanislas Dehaene and Jean-Pierre Changeux, “Experimental and Theoretical Approaches to Conscious Processing,” Neuron 70, no. 2 (2011): 200–227, https://doi.org/10.1016/j.neuron.2011.03.018.
10. In mental selection, internal representations compete for influence over behavior, and those that better guide action are preferentially retained and reused through learning and experience. Eva Jablonka and Simona Ginsburg, “Consciousness: Its Goals, Its Functions and the Emergence of a New Category of Selection,” Philosophical Transactions of the Royal Society B 380 (2025): article 20240310, https://doi.org/10.1098/rstb.2024.0310.