Neuromorphic intelligence represents a fundamental shift in how we think about artificial intelligence. The deep learning revolution has given us unprecedented advances, from generative AI with powerful reasoning capabilities to scientific breakthroughs in medicine and climate science.
But this progress has come at a steep price. In a recent paper published on arXiv, Marcel van Gerven points out the “considerable drawbacks” of our current path. Training and running these massive AI models consumes “vast amounts of natural resources and results in a substantial carbon footprint”.
This resource-heavy approach is “unsustainable in a resource-bounded world”. Furthermore, it creates a high barrier to entry, concentrating power in the hands of a few large companies and leading to increasingly “opaque” systems that are difficult to understand.
We are facing a computational wall. The paper argues that our reliance on conventional digital approaches suffers from the von Neumann bottleneck: the inefficiency of constantly shuttling data back and forth between memory and the processor. We need a different path, and van Gerven suggests we look to the most efficient computer we know of: the human brain.
The 20-Watt Miracle
The human brain is a marvel of efficiency. It is estimated to perform incredibly complex computations “while consuming only 20 W of power”. That is just enough to power a dim lightbulb.
Contrast that with our most powerful conventional computers. The El Capitan supercomputer, for example, consumes 30 MW of power, “which is sufficient to power a small town”. That is roughly 1.5 million times the brain’s 20 W, a gap of about six orders of magnitude in raw power draw.
This is the inspiration for neuromorphic computing, which “seeks to replicate the remarkable efficiency, flexibility, and adaptability of the human brain in artificial systems”. It is an approach that embraces “brain-inspired principles of computation to achieve orders of magnitude greater energy efficiency”.
Instead of just building bigger digital computers, this field aims to build systems that function like a brain. This involves a multidisciplinary effort, drawing insights from neuroscience, physics, materials science, and AI. But to unite these fields, van Gerven notes, we need a “unifying theoretical framework”.
A New Foundation for Neuromorphic Intelligence
The paper argues that the foundation for true neuromorphic intelligence is Dynamical Systems Theory (DST).
This may sound complex, but the idea is intuitive. Current AI is often thought of as a static flowchart, a fixed set of instructions. DST, by contrast, is the language of change and evolution. It uses “differential calculus” to describe a system’s behavior over time, much as one would model the flow of a river or the weather.
In this view, intelligence is not a pre-programmed algorithm. It is an emergent behavior that arises from the system’s “equations of motion”.
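To make that concrete, here is a minimal sketch (not from the paper) of what “behavior emerging from equations of motion” looks like in code: a tiny leaky-integrator network whose state evolves as dx/dt = -x + W·tanh(x) + u, integrated with simple Euler steps. The network size, weights, and input are all illustrative assumptions.

```python
import numpy as np

# Illustrative equations of motion for a tiny dynamical system:
#   dx/dt = -x + W @ tanh(x) + u
# Behavior is not a stored program; it unfolds by integrating these dynamics.

rng = np.random.default_rng(0)
n = 3                                    # number of state variables (arbitrary)
W = rng.normal(scale=0.8, size=(n, n))   # fixed coupling weights (assumed)
u = np.array([1.0, 0.0, 0.0])            # constant external input (assumed)
x = np.zeros(n)                          # initial state
dt = 0.01                                # Euler integration step

for _ in range(1000):
    dxdt = -x + W @ np.tanh(x) + u       # evaluate the equations of motion
    x = x + dt * dxdt                    # follow the flow forward in time

print("state after 10 simulated seconds:", x)
```

Change W and the behavior changes; there is no separate program to edit, only dynamics to reshape.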
This approach completely blurs the line between hardware and software. The paper highlights the concept of “in-materia computing,” where the physical substrate is the computer. The computation happens within the physics of the device itself, just as your thoughts emerge from the physical and chemical dynamics of your brain.
Learning From the Noise
This new foundation requires a new way to learn. The engine behind today’s deep learning is an algorithm called backpropagation. While powerful, van Gerven argues it is “impossible to implement in a physically realistic manner”. Backpropagation requires computations to “move against the arrow of time”, like trying to un-bake a cake to learn the recipe. A physical system simply cannot work that way.
A neuromorphic system, in contrast, must learn online, in real-time, using only local information. The paper suggests a radical source for this learning: noise.
In digital computing, noise is the enemy, a source of error to be eliminated. But in a dynamical system, it can be “harnessed as a resource for learning”. Van Gerven describes these methods as viewing noise “as a feature rather than a bug”.
The paper details a method called Ornstein-Uhlenbeck adaptation (OUA) as one example. In this model, the system is constantly “jiggling” its own parameters. When a random jiggle happens to improve performance and leads to a reward, the system adapts to make that change more permanent. Learning “emerges from evaluating the equations of motion of the augmented system”. It is a continuous, adaptive process, much like life itself.
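As a rough illustration of this “jiggle and keep what works” idea, here is a minimal sketch in which parameters fluctuate around a baseline via Ornstein-Uhlenbeck noise, and the baseline drifts toward perturbations that earn above-average reward. The toy objective, symbols, and step sizes are all assumptions chosen for illustration; this is a loose paraphrase of the mechanism described above, not the paper’s exact OUA update.

```python
import numpy as np

# Noise-driven, reward-modulated adaptation in the spirit of OUA (illustrative).
# Parameters theta jiggle around a slowly adapting mean mu via an
# Ornstein-Uhlenbeck process; mu drifts toward jiggles that pay off.

rng = np.random.default_rng(1)

def reward(theta):
    # Toy objective (assumed): higher reward the closer theta is to a target.
    target = np.array([2.0, -1.0])
    return -np.sum((theta - target) ** 2)

mu = np.zeros(2)            # adaptive baseline for the parameters
theta = mu.copy()           # instantaneous (noisy) parameters
kappa, sigma = 1.0, 0.3     # OU mean-reversion rate and noise scale (assumed)
eta, dt = 0.2, 0.01         # adaptation rate and time step (assumed)
baseline = reward(theta)    # running estimate of "typical" reward

for _ in range(50000):
    # Ornstein-Uhlenbeck exploration: jiggle theta around mu.
    theta += kappa * (mu - theta) * dt + sigma * np.sqrt(dt) * rng.normal(size=2)
    r = reward(theta)
    # A jiggle that beats the running baseline pulls mu toward it (made permanent).
    mu += eta * (r - baseline) * (theta - mu) * dt
    baseline += 0.1 * (r - baseline) * dt    # slowly track typical reward

print("learned parameters:", mu)   # drifts toward the toy target [2, -1]
```

Everything here runs forward in time and uses only locally available quantities: the current parameters, the current reward, and a running baseline. That is exactly the constraint a physical, online learning system has to satisfy.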
Evolving Intelligence
This dynamical systems approach allows us to learn during an agent’s “lifetime.” But it also opens a door to an even more powerful idea: evolving intelligence across generations.
Van Gerven introduces Differential Genetic Programming (DGP), an approach to “evolve populations of agents”. It is a form of digital survival of the fittest for AI.
This process starts with a “population” of simple dynamical systems. These agents are tested on a task, and their “fitness” is evaluated. The most successful agents are “preferentially selected for reproduction”. Their underlying equations are “mixed” (crossover) and “tweaked” (mutation) to create a new generation of agents.
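The loop itself is simple enough to sketch. Below is a minimal, illustrative version that evolves the coefficients of a small fixed family of dynamical systems rather than full symbolic equations; the task, fitness function, and genetic operators are simplified assumptions, not the paper’s DGP algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

# Each "agent" is a tiny dynamical system dx/dt = a0 + a1*x + a2*x**2,
# encoded by its coefficient genome [a0, a1, a2] (an illustrative choice).

def simulate(genome, x0=0.0, dt=0.05, steps=100):
    a0, a1, a2 = genome
    x, traj = x0, []
    for _ in range(steps):
        x = x + dt * (a0 + a1 * x + a2 * x * x)   # Euler-integrate the agent
        x = float(np.clip(x, -1e6, 1e6))          # clamp runaway (unstable) agents
        traj.append(x)
    return np.array(traj)

# Toy task (assumed): behave like the reference system dx/dt = 1 - x.
target = simulate(np.array([1.0, -1.0, 0.0]))

def fitness(genome):
    return -np.mean((simulate(genome) - target) ** 2)

pop = rng.normal(scale=1.0, size=(50, 3))         # random initial population

for generation in range(100):
    scores = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]  # selection: keep the fittest 10
    children = []
    for _ in range(len(pop)):
        pa, pb = parents[rng.integers(10)], parents[rng.integers(10)]
        child = np.where(rng.random(3) < 0.5, pa, pb)     # crossover: mix parents
        child = child + rng.normal(scale=0.1, size=3)     # mutation: small tweaks
        children.append(child)
    pop = np.array(children)

best = pop[np.argmax([fitness(g) for g in pop])]
print("best evolved coefficients:", best)   # typically near [1, -1, 0] or equivalent
```

No one hand-designs the winning equations; they are discovered through repeated rounds of variation and selection, generation after generation.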
This “truly end-to-end approach” can discover solutions that a human designer would never think of. The paper suggests this process could even “evolve the learning mechanism itself”.
Why This Matters
The vision laid out in the paper is a paradigm shift that could address three of the biggest problems in AI today.
First, it addresses sustainability. By mimicking the brain’s efficiency, neuromorphic computing offers a path away from the “unsustainable” energy consumption of modern AI, promising systems that are “sustainable, transparent, and widely accessible”.
Second, it tackles the “black box” problem. Today’s “opaque” models are difficult to interpret. But the evolutionary DGP approach can produce symbolic expressions, or equations, that we can actually read and understand, leading to more transparent and explainable AI.
Finally, this work provides a “bridge between natural and artificial intelligence”. It moves the field beyond pure engineering and back to the “fundamental question” that inspired AI in the first place: “how does mind emerge from matter?”. By embracing the noisy, complex, and emergent dynamics of the physical world, we may finally begin to find the answer.