“What can you do with 1,000 neurons?” That’s the challenge driving a competition launched in July by computational neuroscientist Nicolas Rougier. Competitors score points by designing model brains to solve a series of simple tasks inside a maze; the challenge lies in the constraints: models may use only 1,000 neurons, must train in less than 100 seconds of real time and get only 10 attempts at test time.
Rougier’s competition, dubbed “Braincraft,” moves in a different direction from recent trends in generative artificial intelligence. Commercial large language models have trillions of parameters—and they cost millions of dollars in electricity, processing power and water-cooling to train. Rougier’s focus on small models, by contrast, means that anyone with a laptop (and 100 seconds) can take part.
Rougier’s constraints are inspired by evolution. Lives are short, and brains are energetically costly—something like 20 percent of a human’s calories goes to maintaining the brain. How to derive intelligent behavior most efficiently from limited energy and limited experience is a defining challenge of biology. “Even LLM models with trillions of parameters could not survive in the real world if you were to provide them with a robotic body,” Rougier says. “In the meantime, the Caenorhabditis elegans, with only 302 neurons, can live a perfect life (of a nematode) in the real world.”
In an era dominated by vast AI models, which bear only the most superficial resemblance to real brains, the Braincraft challenge looks back to nature and asks researchers to put their knowledge of how the brain works to the test. What excites me about the competition is that it helps us explore answers that will be relevant both for understanding the evolution of real brains and for designing more efficient AI.
Competitions of this type have a long history in science. The 1980 “computer tournament” challenged researchers to submit strategies to play the “prisoner’s dilemma” against one another. The winner, surprisingly at the time, was a simple strategy of repeating your opponent’s previous move (“tit for tat”). The results of that competition inspired organizer Robert Axelrod to write his book “The Evolution of Cooperation,” which continues to inform our understanding of evolution.
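The winning strategy is simple enough to write down in a few lines. Here is a minimal sketch in Python (the language Rougier’s competition uses); the function name and the history-based interface are my own illustration, not code from Axelrod’s tournament.

    def tit_for_tat(opponent_history):
        """Cooperate on the first round, then repeat the opponent's previous move."""
        if not opponent_history:
            return "cooperate"          # open by cooperating
        return opponent_history[-1]     # mirror whatever the opponent did last

    # Example: the opponent defected last round, so tit for tat defects now.
    print(tit_for_tat(["cooperate", "defect"]))  # -> "defect"

The strategy carries no memory beyond the last move and no model of its opponent, which is part of what made its victory over far more elaborate entries so surprising.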
More recently, the ImageNet competition galvanized the computer-vision community around image recognition, a field that has made huge gains during the past decade. In protein folding, Google DeepMind’s AlphaFold made headlines in 2020 with its success in the CASP competition, which arguably augured the current era of AI.
Rougier was inspired to launch his own competition by a “growing frustration” with the direction of computational neuroscience. “We’ve accumulated an amazing number of models for this or that part of the brain, including cortex, hippocampus, basal ganglia, and yet we do not have a definitive model of any of these structures; we may have something like 1,000 models of V1, but none of them can see,” he says. “The reason lies probably in the way we’re doing modeling, targeting very specific parts without necessarily considering the whole picture.” The competition takes a different tack, requiring entries that combine perception, decision and action in a simple model.
This logic echoes that of one of the classic papers in cognitive science, “You can’t play 20 questions with nature and win,” published more than 50 years ago by Allen Newell. In that paper, Newell argued that progress would never come from studying individual functions but only from building models that could perform a variety of behaviors. Back in the 1970s, the more radical part of his proposal may have been the emphasis on formal, computational models, but now perhaps it is the emphasis on understanding complete functions. Neuroscience has increasingly specialized in specific areas, model species and functions, adding to the complexity and heterogeneity of our map of the brain.
Rougier hopes that competitions such as his will help put neuroscience back together again. His competition emphasizes model efficiency. By limiting the number of neurons and the training time, it forces winning models to use limited resources more intelligently, rather than wringing performance gains from increased size.
The competition comprises five tasks. As of November, competitors have attempted the first task, and entries are open for the second. The first task asked participants to design a model brain to find a food source located at one of two possible locations inside a maze. The winner used handcrafted weights—fixing specific values for the connections between the sensors and the actions—and just 22 neurons, something that won’t be possible for subsequent, more complex tasks. Third place went to a genetic algorithm, which found an inefficient but ultimately effective strategy of blindly circling the maze. These early results show that simple models employing different approaches can successfully perform simple tasks. But as the competition progresses to tasks that require a broader range of decisions, model-builders will need to explore different strategies for success while keeping their models small.
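To give a flavor of what “handcrafted weights” means here, the sketch below wires a few wall sensors directly to actions with fixed values and no learning at all. The sensor layout, action names and weight values are hypothetical illustrations of the approach, not the winning entry’s actual code or the competition’s real interface.

    import numpy as np

    # Hypothetical toy setup: three wall sensors (left, front, right) in [0, 1]
    # and three possible actions. This only illustrates fixed, hand-picked weights.
    ACTIONS = ["turn_left", "forward", "turn_right"]

    # Hand-picked sensor-to-action weights: rows are sensors, columns are actions.
    W = np.array([
        [-1.0,  0.0,  1.0],   # wall on the left pushes the agent to turn right
        [ 1.0, -2.0,  1.0],   # wall ahead suppresses "forward" and favors turning
        [ 1.0,  0.0, -1.0],   # wall on the right pushes the agent to turn left
    ])
    BIAS = np.array([0.0, 0.5, 0.0])  # mild default preference for moving forward

    def act(sensors):
        """Pick the action with the highest weighted sum of sensor readings."""
        scores = sensors @ W + BIAS
        return ACTIONS[int(np.argmax(scores))]

    # Example: a wall looms ahead and to the left, so the agent turns right.
    print(act(np.array([0.6, 0.9, 0.1])))  # -> "turn_right"

The point is not the particular numbers but that nothing is learned: with so few neurons, a designer can sometimes simply set the connections by hand, which is exactly what stops working as the tasks get harder.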
The competition requires models that can learn to perform complete tasks in an environment, preventing a narrow focus on abstract functions such as visual recognition. Limiting training time and model size means competitors can’t get complex behavior just by scaling up; they have to engage with resource constraints, just as resource constraints dominated the evolution of real brains. Finally, Rougier’s challenge asks people to work on the same problems in a comparable way: competitors from different theoretical perspectives, or those who favor different modeling approaches, are forced to put their models into direct comparison.
I’m optimistic that there is a lot that can be learned from this competition, and other neuroscientists share my optimism. “I like the idea of competitions a lot. It provides an opportunity for many people to simultaneously tackle the same problem, subject to the same constraints,” says Anne Churchland, professor of neurobiology at the University of California, Los Angeles. “This is sure to lead to interesting insights.”
Not everyone agrees. Mark Humphries, professor of computational neuroscience at the University of Nottingham, says the competition has a problem with both the format and the alignment of the scientific and competition goals. He is enthusiastic about the idea of competitions in general for driving science forward, citing the success of Axelrod’s competition in the ’80s and more recent image-classification and protein-folding competitions. These examples, he says, offer a formula for the kind of competition that will produce scientific insights. “Successful competitions are accessible to as many people as possible, to bring in a wide range of expertise,” he says. “They have a clear performance target linked to a clear technical goal and repeat so that success and engagement can accumulate.”
Rougier’s competition does have an access bar: competitors must be experienced with both Python and GitHub, have a background in systems neuroscience and neural network modeling, and come to grips with the competition’s interface. That’s a high bar, though perhaps not an unusual one in the computational neuroscience community.
Beyond that, though, Humphries argues, a scientifically productive competition requires a clear alignment between the scientific goal and the competition task. The image-classification and protein-folding competitions had intrinsically meaningful outcomes—if an algorithm could successfully classify images or predict how proteins fold, the value was undeniable. The 1,000-neuron challenge uses artificial tasks, so it is less clear what we’ll learn from the most successful strategies.
A lot rides on whether Rougier has found a sweet spot between the simplicity and artificiality of Axelrod’s competition and the complex and meaningful challenges of recent computer science. Too simplistic a competition would mean that the winner tells us nothing about how real brains solve the challenge of efficiency. Too complex and it will be hard to recruit competitors, and possibly also hard to derive general principles from specific models. Only as the five planned tasks progress will it become clearer if the competition has struck the right balance.
At this point, we also don’t know if the most important thing we’ll learn from the competition is something about the general principles for how to build efficient brains or about how to better design scientific competitions. For certain, though, many—like me—will find the challenge inspiring.