Open Access

David G. Clark1,2,*,‡, Owen Marschall1, Alexander van Meegen3,§, and Ashok Litwin-Kumar1,2,†

1Zuckerman Institute, Columbia University, New York, New York 10027, USA
2Kavli Institute for Brain Science, Columbia University, New York, New York 10027, USA
3Center for Brain Science, Harvard University, Cambridge, Massachusetts 02138, USA

*Contact author: dgclark@fas.harvard.edu
†Contact author: a.litwin-kumar@columbia.edu
‡Present address: Kempner Institute for the Study of Natural and Artificial Intelligence, Harvard University, Cambridge, Massachusetts 02138, USA.
§Present address: School of Life Sciences and School of Computer and Communication Sciences, École Polytechnique Fédérale de Lausanne, 1015 Lausanne, Switzerland.
 
Abstract
Studies of the dynamics of nonlinear recurrent neural networks often assume independent and identically distributed couplings, but large-scale connectomics data indicate that biological neural circuits exhibit markedly different connectivity properties. These include rapidly decaying singular-value spectra and structured singular-vector overlaps. Here, we develop a theory to analyze how these forms of structure shape high-dimensional collective activity in nonlinear recurrent neural networks. We first introduce the random-mode model, a random-matrix ensemble related to the singular-value decomposition that enables control over the spectrum and right-left mode overlaps. Then, using a novel path-integral calculation, we derive analytical expressions that reveal how connectivity structure affects features of collective dynamics: the dimension of activity, which quantifies the number of high-variance collective-activity fluctuations, and the temporal correlations that characterize the timescales of these fluctuations. We show that connectivity structure can be invisible in single-neuron activities, while dramatically shaping collective activity. Furthermore, despite the nonlinear, high-dimensional nature of these networks, the dimension of activity depends on just two connectivity parameters—the variance of the couplings and the effective rank of the coupling matrix, which quantifies the number of dominant rank-one connectivity components. We contrast the effects of single-neuron heterogeneity and low-dimensional connectivity, making predictions about how z-scoring data affects the dimension of activity. Finally, we demonstrate the presence of structured overlaps between left and right modes in the Drosophila connectome, incorporate them into the theory, and show how they further shape collective dynamics.
- Network structure
- Neuronal dynamics
- Neuronal network activity
- Neuroscience, neural computation & artificial intelligence
- Cavity methods
- Dynamical mean field theory
- Neural network simulations
- Path-integral methods
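As a concrete illustration of the random-mode model described in the abstract, the sketch below constructs a coupling matrix J = L D Rᵀ from independent Gaussian left and right modes with prescribed mode strengths and estimates its effective rank as the participation ratio of the squared singular values. The network size, number of modes, entry variance of 1/N, and exponentially decaying spectrum are illustrative assumptions, not values taken from the article.

```python
import numpy as np

# Minimal sketch of a random-mode coupling matrix J = L D R^T with
# independent Gaussian modes and a prescribed mode-strength spectrum.
# All parameter values are illustrative assumptions.
N, M = 1000, 1000                      # neurons and modes (hypothetical)
rng = np.random.default_rng(0)

L = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, M))   # left modes, i.i.d. N(0, 1/N)
R = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, M))   # right modes, i.i.d. N(0, 1/N)
d = np.exp(-np.arange(M) / 100.0)                    # assumed decaying mode strengths

J = L @ np.diag(d) @ R.T               # coupling matrix

# Effective rank as the participation ratio of the squared singular values:
# (sum_a s_a^2)^2 / sum_a s_a^4.
s = np.linalg.svd(J, compute_uv=False)
effective_rank = (s**2).sum() ** 2 / (s**4).sum()
print(f"effective rank ≈ {effective_rank:.1f} of {M} modes")
```

Because the participation ratio is invariant under an overall rescaling of the couplings, it reflects the shape of the singular-value spectrum rather than its scale.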
 
Popular Summary
Understanding how patterns of connections in the brain shape neural activity is a major challenge in neuroscience. Brain networks can be mathematically broken down into input and output modes, and recent studies show that actual neural circuits are far from random, dominated by a small number of strong components. At the same time, new technologies allow us to record the activity of thousands of neurons at once, giving us a chance to directly link connectivity to collective neural dynamics. In this study, we show that the structure of connectivity can predict key features of network activity, including the number of independent activity patterns the network can generate (the activity dimension).
To achieve this, we develop a mathematical framework that models networks as collections of input and output modes with adjustable strengths. Using this approach, we derive how the connectivity structure determines the activity dimension and other collective properties. Analysis of our theory suggests that large-scale data can reveal relationships between network connectivity and the activity of thousands of neurons simultaneously.
Our results show that connectivity structure can be effectively invisible in the activity of individual neurons, while population-level recordings reveal its influence. In particular, when input and output modes are uncorrelated, just two measures—the typical connection strength and the number of dominant components—largely determine the activity dimension. The framework also captures the additional effects that arise when input and output modes are correlated, as observed in the fruit fly connectome.
This work provides a powerful tool to link detailed structural maps of neural circuits with their emergent activity patterns, offering a foundation for future studies on how connectivity shapes computation in the brain.
References (105)
- O. Barak, D. Sussillo, R. Romo, M. Tsodyks, and L. F. Abbott, From fixed points to chaos: Three models of delayed discrimination, Prog. Neurobiol. 103, 214 (2013).
 - V. Mante, D. Sussillo, K. V. Shenoy, and W. T. Newsome, Context-dependent computation by recurrent dynamics in prefrontal cortex, Nature (London) 503, 78 (2013).
 - D. Sussillo, M. M. Churchland, M. T. Kaufman, and K. V. Shenoy, A neural network that finds a naturalistic solution for the production of muscle activity, Nat. Neurosci. 18, 1025 (2015).
 - S. Hochreiter and J. Schmidhuber, Long short-term memory, Neural Comput. 9, 1735 (1997).
 - I. Sutskever, O. Vinyals, and Q. V. Le, Sequence to sequence learning with neural networks, Adv. Neural Inf. Process. Syst. 27, 3104 (2014).
 - K. Cho, B. van Merriënboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio, Learning phrase representations using RNN encoder–decoder for statistical machine translation, in Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), edited by A. Moschitti, B. Pang, and W. Daelemans (Association for Computational Linguistics, Doha, Qatar, 2014), pp. 1724–1734.
 - K. Krishnamurthy, T. Can, and D. J. Schwab, Theory of gating in recurrent neural networks, Phys. Rev. X 12, 011011 (2022).
 - K. Rajan, L. F. Abbott, and H. Sompolinsky, Stimulus-dependent suppression of chaos in recurrent neural networks, Phys. Rev. E 82, 011903 (2010).
 - R. Engelken, F. Wolf, and L. F. Abbott, Lyapunov spectra of chaotic recurrent neural networks, Phys. Rev. Res. 5, 043044 (2023).
 - H. Sompolinsky, A. Crisanti, and H.-J. Sommers, Chaos in random neural networks, Phys. Rev. Lett. 61, 259 (1988).
 - D. Martí, N. Brunel, and S. Ostojic, Correlations between synapses in pairs of neurons slow down dynamics in randomly connected neural networks, Phys. Rev. E 97, 062314 (2018).
 - D. Sussillo and L. F. Abbott, Generating coherent patterns of activity from chaotic neural networks, Neuron 63, 544 (2009).
 - R. Laje and D. V. Buonomano, Robust timing and motor patterns by taming chaos in recurrent neural networks, Nat. Neurosci. 16, 925 (2013).
 - J. Aljadeff, M. Stern, and T. Sharpee, Transition to chaos in random networks with cell-type-specific connectivity, Phys. Rev. Lett. 114, 088101 (2015).
 - B. DePasquale, C. J. Cueva, K. Rajan, G. S. Escola, and L. F. Abbott, full-FORCE: A target-based method for training recurrent networks, PLoS One 13, e0191527 (2018).
 - T. Asabuki and C. Clopath, Taming the chaos gently: A predictive alignment learning rule in recurrent neural networks, Nat. Commun. 16, 6784 (2025).
 - L. K. Scheffer, C. S. Xu, M. Januszewski, Z. Lu, S.-y. Takemura, K. J. Hayworth, G. B. Huang, K. Shinomiya, J. Maitlin-Shepard, S. Berg et al., A connectome and analysis of the adult Drosophila central brain, eLife 9, e57443 (2020).
 - S. Loomba, J. Straehle, V. Gangadharan, N. Heike, A. Khalifa, A. Motta, N. Ju, M. Sievers, J. Gempt, H. S. Meyer et al., Connectomic comparison of mouse and human cortex, Science 377, eabo0924 (2022).
 - A. Shapson-Coe, M. Januszewski, D. R. Berger, A. Pope, Y. Wu, T. Blakely, R. L. Schalek, P. H. Li, S. Wang, J. Maitin-Shepard et al., A petavoxel fragment of human cerebral cortex reconstructed at nanoscale resolution, Science 384, eadk4858 (2024).
 - The MICrONS Consortium, Functional connectomics spanning multiple areas of mouse visual cortex, Nature (London) 640, 435 (2025).
 - M. R. Tavakoli, J. Lyudchik, M. Januszewski, V. Vistunou, N. Agudelo Dueñas, J. Vorlaufer, C. Sommer, C. Kreuzinger, B. Oliveira, A. Cenameri et al., Light-microscopy-based connectomic reconstruction of mammalian brain tissue, Nature (London) 642, 398 (2025).
 - S. Song, P. J. Sjöström, M. Reigl, S. Nelson, and D. B. Chklovskii, Highly nonrandom features of synaptic connectivity in local cortical circuits, PLOS Biol. 3, e68 (2005).
 - V. Thibeault, A. Allard, and P. Desrosiers, The low-rank hypothesis of complex systems, Nat. Phys. 20, 294 (2024).
 - I. Akjouj, M. Barbier, M. Clenet, W. Hachem, M. Maïda, F. Massol, J. Najim, and V. C. Tran, Complex systems in ecology: A guided tour with large Lotka–Volterra models and random matrices, Proc. R. Soc. A 480, 20230284 (2024).
 - Z. Wang, W. Mai, Y. Chai, K. Qi, H. Ren, C. Shen, S. Zhang, G. Tan, Y. Hu, and Q. Wen, The geometry and dimensionality of brain-wide activity, eLife 14, RP100666 (2025).
 - L. Pezon, V. Schmutz, and W. Gerstner, Linking neural manifolds to circuit structure in recurrent networks, bioRxiv 2024 (2024).
 - F. Schuessler, F. Mastrogiuseppe, A. Dubreuil, S. Ostojic, and O. Barak, The interplay between randomness and structure during learning in RNNs, Adv. Neural Inf. Process. Syst. 33, 13352 (2020).
 - C. H. Martin and M. W. Mahoney, Implicit self-regularization in deep neural networks: Evidence from random matrix theory and implications for learning, J. Mach. Learn. Res. 22, 1 (2021).
 - D. G. Clark, L. F. Abbott, and H. Sompolinsky, Symmetries and continuous attractors in disordered neural circuits, bioRxiv 2025 (2025).
 - E. M. Trautmann, J. K. Hesse, G. M. Stine, R. Xia, S. Zhu, D. J. O’Shea, B. Karsh, J. Colonell, F. F. Lanfranchi, S. Vyas et al., Large-scale high-density brain-wide neural recording in nonhuman primates, Nat. Neurosci. 28, 1562 (2025).
 - C. Stringer, M. Pachitariu, N. Steinmetz, M. Carandini, and K. D. Harris, High-dimensional geometry of population responses in visual cortex, Nature (London) 571, 361 (2019).
 - C. Stringer, M. Pachitariu, N. Steinmetz, C. B. Reddy, M. Carandini, and K. D. Harris, Spontaneous behaviors drive multidimensional, brainwide activity, Science 364, eaav7893 (2019).
 - J. Manley, S. Lu, K. Barber, J. Demas, H. Kim, D. Meyer, F. M. Traub, and A. Vaziri, Simultaneous, cortex-wide dynamics of up to 1 million neurons reveal unbounded scaling of dimensionality with neuron number, Neuron 112, 1694 (2024).
 - S. Chung and L. F. Abbott, Neural population geometry: An approach for understanding biological and artificial neural networks, Curr. Opin. Neurobiol. 70, 137 (2021).
 - J. P. Cunningham and B. M. Yu, Dimensionality reduction for large-scale neural recordings, Nat. Neurosci. 17, 1500 (2014).
 - P. Gao and S. Ganguli, On simplicity and complexity in the brave new world of large-scale neuroscience, Curr. Opin. Neurobiol. 32, 148 (2015).
 - E. M. Trautmann, S. D. Stavisky, S. Lahiri, K. C. Ames, M. T. Kaufman, D. J. O’Shea, S. Vyas, X. Sun, S. I. Ryu, S. Ganguli et al., Accurate estimation of neural population dynamics without spike sorting, Neuron 103, 292 (2019).
 - L. Meshulam and W. Bialek, Statistical mechanics for networks of real neurons, Rev. Mod. Phys. (to be published), 10.1103/jcrn-3nrc.
 - D. G. Clark, L. F. Abbott, and A. Litwin-Kumar, Dimension of activity in random neural networks, Phys. Rev. Lett. 131, 118401 (2023).
 
This random structure, rather than exact, deterministic orthonormality, offers analytical tractability by eliminating the need to enforce orthonormality—a complicated global constraint; but see also Ref. [41], which handles random orthonormal matrices in a regression setting.
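To make the contrast drawn in this note tangible, the toy comparison below, under assumed sizes, builds mode matrices with i.i.d. Gaussian columns and mode matrices whose columns are explicitly orthonormalized (here via a QR decomposition, one convenient way to impose the constraint), and measures how far each Gram matrix is from the identity. This is only an illustrative numerical sketch, not a procedure from the article.

```python
import numpy as np

# Toy comparison of random Gaussian modes vs. exactly orthonormal modes.
# Sizes are assumed for illustration only.
N, M = 2000, 200
rng = np.random.default_rng(1)

# Random modes: i.i.d. Gaussian columns with variance 1/N; no constraint enforced.
L_random = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, M))

# Orthonormal modes: the global constraint L^T L = I imposed via QR.
L_ortho, _ = np.linalg.qr(rng.normal(size=(N, M)))

# Frobenius-norm deviation of the Gram matrix from the identity in each case.
for name, L in [("random", L_random), ("orthonormal", L_ortho)]:
    deviation = np.linalg.norm(L.T @ L - np.eye(M))
    print(f"{name:12s} modes: ||L^T L - I||_F = {deviation:.3e}")
```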
- A. Ingrosso, Optimal learning with excitatory and inhibitory synapses, PLoS Comput. Biol. 16, e1008536 (2020).
 - F. Mastrogiuseppe and S. Ostojic, Linking connectivity, dynamics, and computations in low-rank recurrent neural networks, Neuron 99, 609 (2018).
 - F. Schuessler, A. Dubreuil, F. Mastrogiuseppe, S. Ostojic, and O. Barak, Dynamics of random recurrent networks with correlated low-rank structure, Phys. Rev. Res. 2, 013111 (2020).
 - M. Beiran, A. Dubreuil, A. Valente, F. Mastrogiuseppe, and S. Ostojic, Shaping dynamics with multiple populations in low-rank recurrent networks, Neural Comput. 33, 1572 (2021).
 - A. Dubreuil, A. Valente, M. Beiran, F. Mastrogiuseppe, and S. Ostojic, The role of population structure in computations through neural dynamics, Nat. Neurosci. 25, 783 (2022).
 
The Gaussian assumption for individual components is not crucial—our results depend only on the first and second moments for large N due to the central limit theorem; higher-order cumulants (scaling appropriately with 1/N) are subdominant in the path-integral saddle-point approximation.
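As a rough numerical check of this universality statement, the sketch below compares one simple observable, the effective rank of J = L D Rᵀ, for Gaussian mode entries versus ±1/√N (Rademacher) entries with matched first and second moments. The sizes and the mode-strength spectrum are hypothetical choices for illustration.

```python
import numpy as np

# Rough check of moment universality: mode entries with matched first and
# second moments give nearly the same singular-value statistics of
# J = L D R^T at large N. Sizes and spectrum are hypothetical.
N, M = 2000, 2000
rng = np.random.default_rng(2)
d = np.exp(-np.arange(M) / 200.0)      # assumed mode-strength spectrum

def random_mode_matrix(draw):
    """Build J = L D R^T with i.i.d. mode entries sampled by `draw`."""
    L, R = draw((N, M)), draw((N, M))
    return L @ np.diag(d) @ R.T

gaussian = lambda shape: rng.normal(0.0, 1.0 / np.sqrt(N), size=shape)
rademacher = lambda shape: rng.choice([-1.0, 1.0], size=shape) / np.sqrt(N)

for name, draw in [("Gaussian", gaussian), ("Rademacher", rademacher)]:
    s = np.linalg.svd(random_mode_matrix(draw), compute_uv=False)
    pr = (s**2).sum() ** 2 / (s**4).sum()
    print(f"{name:10s}: effective rank (PR of squared singular values) = {pr:.1f}")
```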
- R. Vershynin, High-Dimensional Probability: An Introduction with Applications in Data Science (Cambridge University Press, Cambridge, England, 2018), Vol. 47.
 
This follows from using Wick’s theorem to evaluate the expectations of tr(LDRᵀRDLᵀ) and tr[(LDRᵀRDLᵀ)²] for the numerator and denominator, respectively, of PR_S, the participation ratio of the squared singular values.
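For concreteness, here is a sketch of the first of these Wick evaluations, assuming mode entries with ⟨l_i^a l_j^b⟩ = ⟨r_i^a r_j^b⟩ = δ_ij δ_ab / N; the second trace is evaluated the same way, just with more pairings.

```latex
% Sketch of the Wick evaluation of the first trace (requires amsmath);
% assumes <l_i^a l_j^b> = <r_i^a r_j^b> = \delta_{ij}\delta_{ab}/N.
\[
\begin{aligned}
\left\langle \operatorname{tr}\!\left(L D R^{\mathsf T} R D L^{\mathsf T}\right) \right\rangle
&= \sum_{i,j=1}^{N} \sum_{a,b=1}^{M} d_a d_b \,
   \left\langle l_i^a l_i^b \right\rangle \left\langle r_j^a r_j^b \right\rangle \\
&= \sum_{i,j=1}^{N} \sum_{a,b=1}^{M} d_a d_b \,
   \frac{\delta_{ab}}{N}\,\frac{\delta_{ab}}{N}
 = \sum_{a=1}^{M} d_a^2 .
\end{aligned}
\]
```

The sums over i and j each contribute a factor of N, canceling the 1/N² from the two second moments and leaving Σ_a d_a².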
- J. Pennington, S. Schoenholz, and S. Ganguli, Resurrecting the sigmoid in deep learning through dynamical isometry: theory and practice, in Advances in Neural Information Processing Systems 30 (NIPS 2017), Long Beach, CA, USA (2017), https://papers.nips.cc/paper_files/paper/2017/hash/d9fc0cdb67638d50f411432d0d41d0ba-Abstract.html.
 - J. Pennington, S. Schoenholz, and S. Ganguli, The emergence of spectral universality in deep networks, in International Conference on Artificial Intelligence and Statistics (PMLR, 2018), pp. 1924–1932.
 
This behaves similarly to the more conventional ϕ(x)=tanh(x) but allows for analytical evaluation of Gaussian integrals.
This duality is reminiscent of …
 - …ral networks, Ph.D. thesis, Université Paris sciences et lettres, 2017.