How does meaning arise from matter? This is Part 2 of a four-part series exploring how the brain generates meaning. In Part 1, I argued that meaning emerges from relations among neural patterns, evolutionary history, learned associations, and goal-directed action. But that leaves the hardest question unanswered: how do electrical signals in your brain become about apples, dangers, and desires? How does the "aboutness" of meaning emerge from purely physical circuits? Here, in Part 2, we confront the mechanisms directly.
Distributed, Not Localized
For decades, much of neuroscience sought to understand how brains make meaning by looking for specialized neurons or localized representations. The classic finding: neurons in the medial temporal lobe—often called “concept cells”—that respond selectively when a person recognizes a specific individual, whether shown in different photos, drawings, or even written names.[1]
These cells do exist. But they can’t explain meaning. Meaning is compositional: you understand "purple elephant" immediately, though no neuron is pre-tuned to purple-elephantness. Meaning is context-dependent: "bank" means different things in "river bank" versus "savings bank." Neuroimaging reveals that semantic processing engages distributed networks spanning frontal, temporal, and parietal cortices.[2]
Pulvermüller’s Four Mechanisms
Neuroscientist Friedemann Pulvermüller has developed a comprehensive neural theory of how meaning arises from brain circuits.[3] His framework identifies four interacting mechanisms:
1. Referential Semantics
Words activate sensory and motor patterns associated with their referents. "Apple" activates visual features (red, round), taste (sweet, tart), and the physical sensation of sinking your teeth into it. These are functional links forged through experience.
When you first learn "apple," you see apples, taste them, bite them, and hear the word. Neurons active during these correlated experiences develop strong connections—a principle called Hebbian learning ("neurons that fire together wire together")—creating distributed networks linking word forms to multimodal experiences.[4]
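To make this concrete, here is a minimal Hebbian-learning sketch in Python with NumPy. The unit groupings, learning rate, and toy "experiences" are illustrative assumptions, not a model of real cortical data; the point is only that weights between reliably co-active units outgrow chance pairings.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6                    # units 0-1: word form "apple"; 2-3: taste/vision;
W = np.zeros((n, n))     # units 4-5: unrelated background activity
eta = 0.05               # learning rate

for _ in range(200):
    x = np.zeros(n)
    if rng.random() < 0.5:      # an "apple experience": word-form and
        x[:4] = 1.0             # sensory units are co-active
    x[4:] = (rng.random(2) < 0.3).astype(float)  # background fires at random
    W += eta * np.outer(x, x)   # Hebb: co-active pairs strengthen
np.fill_diagonal(W, 0.0)        # no self-connections

# Word-to-sensory weights (reliably co-active) end up substantially larger
# than word-to-background weights (co-active only by chance).
print(round(W[0, 2], 2), round(W[0, 4], 2))
```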
2. Combinatorial Semantics
But meaning can’t be purely experiential. How do we understand "unicorn" or "democracy"? Syntax provides combinatorial rules for constructing novel meanings from familiar elements.
These aren’t abstract symbolic rules; they’re implemented in the timing and sequencing of neural activation. When processing "the cat chased the mouse," different patterns activate for who’s chasing versus being chased. Grammar is embodied in the temporal dynamics of neural networks.[5]
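To illustrate the computational problem, here is a hedged sketch of role-filler binding using holographic reduced representations (circular convolution), the binding operation used in the semantic-pointer models discussed later in this piece. The vectors and dimensionality are arbitrary assumptions; this is one known way such "who did what" patterns can be built and read out, not a claim about how cortical timing implements it.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 512

def vec():                       # a random unit vector for a concept or role
    v = rng.normal(size=d)
    return v / np.linalg.norm(v)

def bind(a, b):                  # circular convolution binds role to filler
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(s, r):                # approximate inverse: bind with the
    return bind(s, np.roll(r[::-1], 1))   # involution of the role vector

cat, mouse, agent, patient = vec(), vec(), vec(), vec()
s1 = bind(agent, cat) + bind(patient, mouse)   # "the cat chased the mouse"
s2 = bind(agent, mouse) + bind(patient, cat)   # "the mouse chased the cat"

# Same ingredients, different composite patterns, and who-did-what is
# recoverable: unbinding the agent role from s1 resembles "cat," not "mouse."
print(round(unbind(s1, agent) @ cat, 2), round(unbind(s1, agent) @ mouse, 2))
```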
3. Emotional-Affective Semantics
Many concepts carry affective valence—positive or negative feeling tone—integral to their meaning. Words like "love" and "hate" activate emotional systems. Even supposedly neutral words have subtle emotional colorings.
This connects to Part 1’s argument: meaning is grounded in value. The brain’s evaluation systems aren’t optional add-ons; they’re part of what makes representations meaningful rather than merely informational.[6]
4. Abstract-Symbolic Semantics
Some meaning is genuinely abstract, not reducible to sensory experience. Mathematical concepts and logical relations require mechanisms beyond embodied simulation.
Abstract meanings emerge through higher-order generalization—the brain extracts patterns across concrete experiences, allowing flexible application beyond original contexts.[7]
This framework avoids extremes: pure embodiment (which can’t explain abstraction) and pure symbolic manipulation (which can’t explain grounding—how meanings connect to real-world experience).
Cell Assemblies: The Computational Architecture
Underlying Pulvermüller’s mechanisms is a specific architecture: cell assemblies. These are networks of strongly interconnected neurons spanning multiple brain regions that activate together as functional units.[8]
Cell assemblies form through Hebbian learning. As with "apple," when you learn "dog," you experience dogs in multiple modalities simultaneously: you see and hear them, touch them, respond to them affectively (with fear or affection), and hear the word "dog." Neurons active during these correlated experiences develop strong connections. The result: a distributed network that, once formed, can be activated by any component. Thus, hearing the word "dog" also activates visual cortex, and seeing a dog activates auditory patterns. The assembly acts as a functional unit, a neural implementation of a concept.
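A toy sketch of this "activated by any component" property: below, a Hopfield-style network stores two assemblies with Hebbian weights, then completes the full "dog" pattern from its word-form units alone. The unit groupings, the +1/-1 coding, and the orthogonal patterns are simplifying assumptions chosen so the dynamics are easy to follow.

```python
import numpy as np

n = 16
# Two stored assemblies, chosen orthogonal for clarity. Units 0-3 stand in
# for word form; 4-9 for visual features; 10-15 for auditory/affective ones.
dog = np.array([1.0] * 8 + [-1.0] * 8)
apple = np.tile([1.0, -1.0], 8)

W = (np.outer(dog, dog) + np.outer(apple, apple)) / n   # Hebbian storage
np.fill_diagonal(W, 0.0)                                # no self-connections

# Cue with only the word-form component of "dog"; other units unknown (0).
x = np.zeros(n)
x[:4] = dog[:4]

for _ in range(5):         # recurrent settling: each unit follows its input
    x = np.sign(W @ x)

print(np.array_equal(x, dog))   # True: the word alone reactivates the
                                # full multimodal assembly
```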
In mammals, the evaluative aspect of meaning is supported by interactions between distributed cortical representations and subcortical systems involved in affect and action, which assign significance to patterns based on their relevance for the organism’s needs and goals.
Not all learned associations are meaningful. Associative learning strengthens connections whenever neural patterns co-occur, generating many assemblies—including arbitrary or maladaptive habits. Only those assemblies that become stable, evaluatively grounded, and recruited into broader patterns of goal-directed behavior acquire semantic significance.[9]
In summary, cell assemblies are:
- Distributed: Spanning sensory, motor, and association cortices.
- Multimodal: Integrating information across sensory modalities.
- Context-sensitive: Different subsets activate depending on current context.
- Plastic: Continuously refined through experience.
This architecture accounts for how meaning can be both grounded and flexible: neural activity is not intrinsically about the world, but acquires its “aboutness” through patterns of activity distributed across many neural units, shaped by learning and used to guide action. Just as no single pixel contains an image, no single neuron contains a concept.
Semantic Pointers
These distributed cell assemblies function as what researchers call "semantic pointers," compact neural patterns that can activate full schemas of objects stored across brain regions.[10] When you hear "apple," a relatively small pattern of neural activity in auditory cortex triggers the distributed network: visual features, taste, motor actions, affective associations.
The pointer itself is arbitrary, just a particular firing pattern. But through learning, it’s linked to everything the organism knows about apples. This solves a computational problem: you can think about an apple without maintaining all its properties in working memory simultaneously. The pointer stands in for the full concept, enabling efficient operations while maintaining access to rich detail when needed.
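Here is a hedged sketch of that computational idea: a compact pattern that is cheap to hold and manipulate, yet can be "dereferenced" back into rich detail by an associative lookup. The random projection and the toy feature dictionary are assumptions for illustration; this is not Eliasmith's full Semantic Pointer Architecture.

```python
import numpy as np

rng = np.random.default_rng(2)
D_FULL, D_PTR = 1000, 50        # rich multimodal features vs. compact pointer
P = rng.normal(size=(D_PTR, D_FULL)) / np.sqrt(D_FULL)   # compression map

# Toy full-feature vectors standing in for each concept's distributed
# visual, taste, motor, and affective detail.
concepts = {name: rng.normal(size=D_FULL) for name in ("apple", "dog", "bank")}
pointers = {name: P @ v for name, v in concepts.items()}

def dereference(ptr):
    """Recover the concept whose stored pointer best matches ptr."""
    best = max(pointers, key=lambda name: pointers[name] @ ptr)
    return best, concepts[best]      # pointer -> full detail on demand

# Working memory only needs the 50-dimensional pointer (here, a noisy copy)...
wm = pointers["apple"] + 0.1 * rng.normal(size=D_PTR)
name, full = dereference(wm)
print(name, full.shape)   # 'apple' (1000,) ...yet rich detail stays reachable
```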
Bowtie Structures
Both cell assemblies and semantic pointers exemplify a broader architectural principle. Recent work suggests meaning emerges through “bow-tie architectures,” processing structures where many inputs funnel through a narrow intermediate layer before expanding to diverse outputs.[11,12]
When your brain processes "apple," thousands of sensory neurons feed into progressively smaller integration zones, which then project to motor, memory, and evaluation systems. The intermediate "neck" of this bow tie distills the input into stable representations that can be compared and evaluated.[11,12]
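A minimal sketch of the bottleneck idea, with made-up layer sizes and random (untrained) weights: many inputs funnel through a narrow neck before fanning out, and two noisy views of the same input land close together at the neck, which is what makes representations there stable and comparable.

```python
import numpy as np

rng = np.random.default_rng(3)
N_IN, N_NECK, N_OUT = 2000, 32, 1500          # wide -> narrow -> wide

W_in = rng.normal(size=(N_NECK, N_IN)) / np.sqrt(N_IN)      # fan-in
W_out = rng.normal(size=(N_OUT, N_NECK)) / np.sqrt(N_NECK)  # fan-out

def forward(sensory):
    neck = np.tanh(W_in @ sensory)    # compressed representation at the neck
    return neck, W_out @ neck         # broadcast to motor/memory/evaluation

apple = rng.normal(size=N_IN)
neck1, _ = forward(apple + 0.1 * rng.normal(size=N_IN))   # two noisy views
neck2, _ = forward(apple + 0.1 * rng.normal(size=N_IN))   # of the same input

cos = neck1 @ neck2 / (np.linalg.norm(neck1) * np.linalg.norm(neck2))
print(round(cos, 2))   # near 1.0: the neck abstracts over input noise
```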
This maps onto Dehaene’s global neuronal workspace theory: distributed processors compete for access to a limited-capacity integration zone where representations become globally available—the architecture that implements the flexible learning discussed in Part 1.[12,13] While semantic processing can occur nonconsciously, this architectural constraint enables the flexible, reportable deployment of meaning that characterizes conscious awareness.
Semantic Hubs
While meaning is distributed, some regions serve as convergence zones. The anterior temporal lobes (ATL) appear particularly important.[14]
Patients with ATL damage develop semantic dementia: they progressively lose concept knowledge while retaining perception and basic motor skills. Current evidence suggests the ATL doesn’t store meanings directly but serves as a convergence zone binding distributed semantic features into coherent wholes, while prefrontal and temporal systems govern contextual deployment.[15]
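As a toy illustration of the convergence-zone idea, the sketch below wires visual and auditory "spokes" into a shared hub with Hebbian associations, then "lesions" hub units. All codes and sizes are invented for the example; the measurable effect, loosely analogous to semantic dementia, is that the cross-modal signal binding "seeing a dog" to hearing "dog" shrinks as the hub is damaged.

```python
import numpy as np

rng = np.random.default_rng(5)
d = 200
names = ("dog", "apple", "hammer")

def unit():                          # a random code of roughly unit length
    return rng.normal(size=d) / np.sqrt(d)

vis = {n: unit() for n in names}     # visual spoke codes
aud = {n: unit() for n in names}     # auditory/word-form spoke codes
hub = {n: unit() for n in names}     # shared hub codes

# Hebbian hetero-associative wiring from each spoke into the hub.
W_v = sum(np.outer(hub[n], vis[n]) for n in names)
W_a = sum(np.outer(hub[n], aud[n]) for n in names)

for lesion in (0.0, 0.5, 0.9):
    mask = (rng.random(d) > lesion).astype(float)   # knock out hub units
    h_see = mask * (W_v @ vis["dog"])               # seeing a dog
    h_hear = mask * (W_a @ aud["dog"])              # hearing the word "dog"
    # Intact, both modalities converge on the same hub code; the shared
    # signal shrinks roughly in proportion to the surviving hub.
    print(lesion, round(h_see @ h_hear, 2))
```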
Development and Plasticity
Unlike computers with fixed semantic databases, brains acquire meanings through lifelong, experience-dependent learning.
Patricia Kuhl’s work shows infants’ brains are initially responsive to phonetic contrasts from all languages but become selectively tuned to their native language through statistical learning.[16] The same principle applies to meaning: through repeated exposure, neural networks become selectively responsive to meaning-relevant features.
This explains why meaning is grounded but not imprisoned in early experience. New experiences reshape networks, allowing concepts to evolve.
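To make the statistical-learning principle concrete, here is a hedged sketch using a simple online clustering rule (online k-means; the acoustic continuum, category centers, and learning rate are toy assumptions). Exposed to tokens drawn from two native speech-sound categories, the initially undifferentiated prototypes become selectively tuned to exactly those categories.

```python
import numpy as np

rng = np.random.default_rng(4)
native = (2.0, 8.0)                  # centers of the two categories heard
prototypes = np.array([4.0, 6.0])    # initially broad, undifferentiated tuning
eta = 0.05                           # learning rate

for _ in range(2000):
    sound = native[rng.integers(2)] + rng.normal(scale=0.5)  # one heard token
    k = np.argmin(np.abs(prototypes - sound))       # nearest prototype "wins"
    prototypes[k] += eta * (sound - prototypes[k])  # and shifts toward it

print(np.round(prototypes, 1))   # ~[2. 8.]: tuned to the native statistics
```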
Why This Matters
Semantic processing isn’t merely computation. For conscious organisms, understanding feels like something. Meaning becomes experiential at the circuit level when distributed neural representations are integrated across perception, memory, action, and value. When this happens, representations are available not just for control but for evaluation and report. Subjective meaning, on this view, is not added on top of neural processing; it is what globally integrated, value-laden representations feel like from the inside—when information is not just used, but experienced.
These neural mechanisms explain how individual brains construct meaning from bodily experience and goal-directed action. But human meaning is also fundamentally social and cultural. In Part 3, we’ll explore how private neural representations become symbolic and shared through language—how meaning transcends individual minds to create public systems of communication.
References
1. Rodrigo Quian Quiroga, Leila Reddy, Gabriel Kreiman, Christof Koch, and Itzhak Fried, “Invariant Visual Representation by Single Neurons in the Human Brain,” Nature 435 (2005): 1102–1107. https://doi.org/10.1038/nature03687.
2. Jeffrey R. Binder, Rutvik H. Desai, William W. Graves, and Lisa L. Conant, “Where Is the Semantic System? A Critical Review and Meta-Analysis of 120 Functional Neuroimaging Studies,” Cerebral Cortex 19, no. 12 (2009): 2767–2796. https://doi.org/10.1093/cercor/bhp055.
3. Friedemann Pulvermüller, “How Neurons Make Meaning: Brain Mechanisms for Embodied and Abstract-Symbolic Semantics,” Trends in Cognitive Sciences 17, no. 9 (2013): 458–470. https://doi.org/10.1016/j.tics.2013.06.004. See also my detailed explanation of Pulvermüller’s framework in Footnote 12 of my post In a Meaningless Universe, Where Does Meaning Come From?
4. Hebbian learning is the strengthening of connections between co-active neurons. Donald O. Hebb, The Organization of Behavior: A Neuropsychological Theory (New York: Wiley, 1949).
5. Friedemann Pulvermüller and Luciano Fadiga, “Active Perception: Sensorimotor Circuits as a Cortical Basis for Language,” Nature Reviews Neuroscience 11, no. 5 (2010): 351–360. https://doi.org/10.1038/nrn2811.
6. Jaak Panksepp, “Affective Consciousness: Core Emotional Feelings in Animals and Humans,” Consciousness and Cognition 14, no. 1 (2005): 30–80. https://doi.org/10.1016/j.concog.2004.10.004; Antonio R. Damasio and Gil B. Carvalho, “The Nature of Feelings: Evolutionary and Neurobiological Origins,” Nature Reviews Neuroscience 14, no. 2 (2013): 143–152. https://doi.org/10.1038/nrn3403.
7. Matthew A. Lambon Ralph, Elizabeth Jefferies, Karalyn Patterson, and Timothy T. Rogers, “The Neural and Computational Bases of Semantic Cognition,” Nature Reviews Neuroscience 18, no. 1 (2017): 42–55. https://doi.org/10.1038/nrn.2016.150.
8. Hebb, Organization of Behavior; Friedemann Pulvermüller, “Neural Reuse of Action Perception Circuits for Language, Concepts and Communication,” Progress in Neurobiology 160 (2018): 1–44. https://doi.org/10.1016/j.pneurobio.2017.07.001.
9. Gaurav Suri and James L. McClelland, The Emergent Mind: How Intelligence Arises in People and Machines (New York: Basic Books, 2024).
10. Peter Blouw, Elan Solodkin, Paul Thagard, and Chris Eliasmith, “Concepts as Semantic Pointers: A Framework and Computational Model,” Cognitive Science 40, no. 5 (2016): 1128–1162. https://doi.org/10.1111/cogs.12265.
11. Mária Csete and John Doyle, “Bow Ties, Metabolism and Disease,” Trends in Biotechnology 22, no. 9 (2004): 446–450. https://doi.org/10.1016/j.tibtech.2004.07.007; Tal Friedlander, Assaf E. Mayo, Tsvi Tlusty, and Uri Alon, “Evolution of Bow-Tie Architectures in Biology,” PLoS Computational Biology 11, no. 3 (2015): e1004055. https://doi.org/10.1371/journal.pcbi.1004055.
12. Eva Jablonka and Simona Ginsburg, “Consciousness: Its Goals, Its Functions and the Emergence of a New Category of Selection,” Philosophical Transactions of the Royal Society B 380 (2025): art. 20240310. https://doi.org/10.1098/rstb.2024.0310.
13. Stanislas Dehaene and Jean-Pierre Changeux, “Experimental and Theoretical Approaches to Conscious Processing,” Neuron 70, no. 2 (2011): 200–227. https://doi.org/10.1016/j.neuron.2011.03.018.
14. Karalyn Patterson, Peter J. Nestor, and Timothy T. Rogers, “Where Do You Know What You Know? The Representation of Semantic Knowledge in the Human Brain,” Nature Reviews Neuroscience 8, no. 12 (2007): 976–987. https://doi.org/10.1038/nrn2277.
15. Lambon Ralph et al., “Neural and Computational Bases of Semantic Cognition.”
16. Patricia K. Kuhl, “Early Language Acquisition: Cracking the Speech Code,” Nature Reviews Neuroscience 5, no. 11 (2004): 831–843. https://doi.org/10.1038/nrn1533.