Abstract
Spiking neural networks (SNNs) offer a promising paradigm for modeling brain dynamics and developing neuromorphic intelligence, yet an online learning system capable of training rich spiking dynamics over long horizons with low memory footprints has been missing. Existing online approaches either incur quadratic memory growth, sacrifice biological fidelity through oversimplified models, or lack end-to-end automated tooling. Here, we introduce BrainTrace, a model-agnostic, linear-memory, and automated online learning system for spiking neural networks. BrainTrace standardizes model specification to encompass diverse neuronal and synaptic dynamics; implements a linear-memory online learning rule by exploiting intrinsic properties of spiking dynamics; and provides a compiler that automatically generates optimized online-learning code for arbitrary user-defined models. Across diverse dynamics and tasks, BrainTrace achieves strong learning performance with a low memory footprint and high computational throughput. Critically, these properties enable online fitting of a whole-brain-scale Drosophila SNN that recapitulates region-level functional activity. By reconciling generality, efficiency, and usability, BrainTrace establishes a foundation for spiking network modeling at scale.
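To make the abstract's "linear-memory online learning" concrete: the general idea behind such rules (as in e-prop-style approaches) is to carry one eligibility trace per synapse forward in time, so memory scales with the number of parameters rather than with the sequence length. The sketch below is a generic, illustrative three-factor update for leaky integrate-and-fire neurons with a surrogate spike gradient; it is not BrainTrace's actual API or its specific learning rule, and all function and parameter names here are hypothetical.

```python
import numpy as np

def surrogate_grad(v, v_th=1.0, beta=10.0):
    """Sigmoid-shaped surrogate derivative of the hard spike threshold."""
    s = 1.0 / (1.0 + np.exp(-beta * (v - v_th)))
    return beta * s * (1.0 - s)

def online_step(w, v, e_trace, x_t, target_t, alpha=0.9, lr=1e-3, v_th=1.0):
    """One online update for a layer of leaky integrate-and-fire neurons.

    State carried across time steps is O(inputs x neurons): the membrane
    potentials and one eligibility trace per synapse. No activity history
    is stored, so memory stays constant over arbitrarily long horizons.
    """
    v = alpha * v + x_t @ w              # leaky membrane integration
    spikes = (v >= v_th).astype(float)   # threshold crossing
    v = v - spikes * v_th                # soft reset after a spike

    # Eligibility trace: low-pass filter of the presynaptic input paired
    # with the local surrogate gradient at the current membrane potential.
    e_trace = alpha * e_trace + np.outer(x_t, surrogate_grad(v, v_th))

    # Instantaneous error closes the three-factor rule at every step.
    err = spikes - target_t
    w = w - lr * e_trace * err[None, :]
    return w, v, e_trace
```

Because the update at step t depends only on state from step t-1, this loop can run indefinitely with a fixed memory footprint, in contrast to backpropagation through time, whose storage grows with the number of time steps.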
Data availability
The datasets used in this study are publicly available and open source. The N-MNIST dataset26 is freely available at https://www.garrickorchard.com/datasets/n-mnist. The SHD dataset25 is publicly available at https://zenkelab.org/resources/spiking-heidelberg-datasets-shd. The IBM DVS Gesture dataset24 can be downloaded from https://ibm.ent.box.com/s/3hiq58ww1pbbjrinh367ykfdf60xsfm8/folder/50167556794. The DMTS and evidence accumulation tasks are generated in this study and can be found in the publicly available GitHub repository https://github.com/chaobrain/braintrace-snn-experiments57. The whole-brain calcium imaging data of Drosophila18 can be obtained at https://doi.org/10.6084/m9.figshare.13349282.
Code availability
BrainTrace is distributed via the PyPI package index (https://pypi.org/project/braintrace) and publicly released on GitHub (https://github.com/chaobrain/braintrace) under the Apache License v2.0. Its documentation is hosted on the free documentation hosting platform Read the Docs (https://braintrace.readthedocs.io/). BrainTrace can be used on Windows, macOS, and Linux operating systems. The code to reproduce the experimental evaluations of Figs. 4, 5, and Table 1 is publicly available from the GitHub repository https://github.com/chaobrain/braintrace-snn-experiments57. The code for the modeling of Fig. 6 is publicly available from the GitHub repository https://github.com/chaobrain/fitting_drosophila_whole_brain_spiking_model58. The code for the SHD dataset evaluation is available at https://github.com/chaobrain/braintrace-shd-experiments59.
References
Roy, K., Jaiswal, A. & Panda, P. Towards spike-based machine intelligence with neuromorphic computing. Nature 575, 607–617 (2019).
Yamazaki, K., Vo-Ho, V.-K., Bulsara, D. & Le, N. Spiking neural networks and their applications: A review. Brain Sci. 12, 863 (2022).
Neftci, E. O., Mostafa, H. & Zenke, F. Surrogate gradient learning in spiking neural networks: Bringing the power of gradient-based optimization to spiking neural networks. IEEE Signal Process. Mag. 36, 51–63 (2019).
Bellec, G., Salaj, D., Subramoney, A., Legenstein, R. & Maass, W. Long short-term memory and learning-to-learn in networks of spiking neurons. Adv. Neural Inf. Process. Syst. 31, 787–797 (2018).
Huh, D. & Sejnowski, T. J. Gradient descent for spiking neural networks. Adv. Neural Inf. Process. Syst. 31, 1433–1443 (2018).
Mehonic, A. & Kenyon, A. J. Brain-inspired computing needs a master plan. Nature 604, 255–260 (2022).
Lobo, J. L., Del Ser, J., Bifet, A. & Kasabov, N. Spiking neural networks and online learning: An overview and perspectives. Neural Netw. 121, 88–100 (2020).
Bellec, G. et al. A solution to the learning dilemma for recurrent networks of spiking neurons. Nat. Commun. 11, 3625 (2020).
Bohnstingl, T., Woźniak, S., Pantazi, A. & Eleftheriou, E. Online spatio-temporal learning in deep neural networks. IEEE Trans. Neural Netw. Learn. Syst. 34, 8894–8908 (2022).
Xiao, M., Meng, Q., Zhang, Z., He, D. & Lin, Z. Online training through time for spiking neural networks. Adv. Neural Inf. Process. Syst. 35, 20717–20730 (2022).
Summe, T. M., Schaefer, C. J. & Joshi, S. Estimating post-synaptic effects for online training of feed-forward SNNs. 2024 International Conference on Neuromorphic Systems, 264–271 (2024).
Jiang, H., De Masi, G., Xiong, H. & Gu, B. NDOT: neuronal dynamics-based online training for spiking neural networks. Forty-first International Conference on Machine Learning (2024).
Williams, R. J. & Zipser, D. A learning algorithm for continually running fully recurrent neural networks. Neural Comput. 1, 270–280 (1989).
Marschall, O., Cho, K. & Savin, C. A unified framework of online learning algorithms for training recurrent neural networks. J. Mach. Learn. Res. 21, 1–34 (2020).
Dorkenwald, S. et al. Neuronal wiring diagram of an adult brain. Nature 634, 124–138 (2024).
Shiu, P. K. et al. A Drosophila computational brain model reveals sensorimotor processing. Nature 634, 210–219 (2024).
Mann, K., Gallen, C. L. & Clandinin, T. R. Whole-brain calcium imaging reveals an intrinsic functional network in Drosophila. Curr. Biol. 27, 2389–2396 (2017).
Turner, M. H., Mann, K. & Clandinin, T. R. The connectome predicts resting-state functional connectivity across the Drosophila brain. Curr. Biol. 31, 2386–2394 (2021).
Brette, R. et al. Simulation of networks of spiking neurons: a review of tools and strategies. J. Comput. Neurosci. 23, 349–398 (2007).
Wang, C. et al. A differentiable brain simulator bridging brain simulation and brain-inspired computing. International Conference on Learning Representations (2024).
Paszke, A. et al. PyTorch: An imperative style, high-performance deep learning library. Adv. Neural Inf. Process. Syst. 32, 8026–8037 (2019).
Abadi, M. et al. TensorFlow: a system for large-scale machine learning. 12th USENIX symposium on operating systems design and implementation, 265–283 (2016).
Frostig, R., Johnson, M. J. & Leary, C. Compiling machine learning programs via high-level tracing. Syst. Machine Learn. 4 (2018).
Amir, A. et al. A low-power, fully event-based gesture recognition system. Proceedings of the IEEE conference on computer vision and pattern recognition, 7243–7252 (2017).
Cramer, B., Stradmann, Y., Schemmel, J. & Zenke, F. The Heidelberg spiking data sets for the systematic evaluation of spiking neural networks. IEEE Trans. Neural Netw. Learn. Syst. 33, 2744–2757 (2020).
Orchard, G., Jayawant, A., Cohen, G. K. & Thakor, N. Converting static image datasets to spiking neuromorphic datasets using saccades. Front. Neurosci. 9, 437 (2015).
Chudasama, Y. Delayed (non) match-to-sample task. Encyclopedia of Psychopharmacology, 372–372 (2010).
Motta, A. et al. Dense connectomic reconstruction in layer 4 of the somatosensory cortex. Science 366, eaay3134 (2019).
Bittar, A. & Garner, P. N. A surrogate gradient spiking baseline for speech command recognition. Front. Neurosci. 16, 865897 (2022).
Subramoney, A., Nazeer, K. K., Schöne, M., Mayr, C. & Kappel, D. Efficient recurrent architectures through activity sparsity and sparse back-propagation through time. International Conference on Learning Representations (2022).
Quintana, F. M. et al. Etlp: Event-based three-factor local plasticity for online learning with neuromorphic hardware. Neuromorph. Comput. Eng. 4, 034006 (2024).
Ortner, T., Pes, L., Gentinetta, J., Frenkel, C. & Pantazi, A. Online spatio-temporal learning with target projection. 2023 IEEE 5th International Conference on Artificial Intelligence Circuits and Systems, 1–5 (2023).
Apolinario, M. P. E. & Roy, K. S-TLLR: STDP-inspired temporal local learning rule for spiking neural networks. Trans. Mach. Learn. Res. 10, 1–40 (2025).
Yin, B., Corradi, F. & Bohté, S. M. Accurate online training of dynamical spiking neural networks through forward propagation through time. Nat. Mach. Intell. 5, 518–527 (2023).
Hammouamri, I., Khalfaoui-Hassani, I. & Masquelier, T. Learning delays in spiking neural networks using dilated convolutions with learnable spacings. International Conference on Learning Representations (2024).
Zhu, Y., Ding, J., Huang, T., Xie, X. & Yu, Z. Online stabilization of spiking neural networks. International Conference on Learning Representations (2024).
Fang, W. et al. Incorporating learnable membrane time constant to enhance learning of spiking neural networks. Proceedings of the IEEE/CVF international conference on computer vision, 2661–2671 (2021).
Mihalaş, Ş. & Niebur, E. A generalized linear integrate-and-fire neural model produces diverse spiking behaviors. Neural Comput. 21, 704–718 (2009).
Vogels, T. P. & Abbott, L. F. Signal propagation and logic gating in networks of integrate-and-fire neurons. J. Neurosci. 25, 10786–10795 (2005).
Van Vreeswijk, C. & Sompolinsky, H. Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science 274, 1724–1726 (1996).
Morcos, A. S. & Harvey, C. D. History-dependent variability in population dynamics during evidence accumulation in cortex. Nat. Neurosci. 19, 1672–1681 (2016).
Attwell, D. & Laughlin, S. B. An energy budget for signaling in the grey matter of the brain. J. Cereb. Blood Flow. Metab. 21, 1133–1145 (2001).
Kloppenburg, P. & Nawrot, M. P. Neural coding: sparse but on time. Curr. Biol. 24, R957–R959 (2014).
Wang, C. et al. BrainPy, a flexible, integrative, efficient, and extensible framework for general-purpose brain dynamics programming. eLife 12, e86365 (2023).
Wang, C., He, S., Luo, S., Huan, Y. & Wu, S. Integrating physical units into high-performance AI-driven scientific computing. Nat. Commun. 16, 3609 (2025).
Koch, C. & Jones, A. Big science, team science, and open science for neuroscience. Neuron 92, 612–616 (2016).
MICrONS Consortium. Functional connectomics spanning multiple areas of mouse visual cortex. Nature 640, 435–447 (2025).
Morrison, A., Diesmann, M. & Gerstner, W. Phenomenological models of synaptic plasticity based on spike timing. Biol. Cybern. 98, 459–478 (2008).
Bu, T. et al. Optimal ANN-SNN conversion for high-accuracy and ultra-low-latency spiking neural networks. International Conference on Learning Representations (2022).
Olshausen, B. A. & Field, D. J. Sparse coding of sensory inputs. Curr. Opin. Neurobiol. 14, 481–487 (2004).
Murray, J. M. Local online learning in recurrent networks with random feedback. eLife 8, e43299 (2019).
Mujika, A., Meier, F. & Steger, A. Approximating real-time recurrent learning with random Kronecker factors. Adv. Neural Inf. Process. Syst. 31, 6594–6603 (2018).
Zenke, F. & Ganguli, S. Superspike: Supervised learning in multilayer spiking neural networks. Neural Comput. 30, 1514–1541 (2018).
Daniel, T. A., Katz, J. S. & Robinson, J. L. Delayed match-to-sample in working memory: A brainmap meta-analysis. Biol. Psychol. 120, 10–20 (2016).
Cho, K. et al. Learning phrase representations using RNN encoder-decoder for statistical machine translation. Conference on Empirical Methods in Natural Language Processing (2014).
Bradbury, J. et al. JAX: composable transformations of Python+NumPy programs http://github.com/google/jax (2018).
Wang, C. chaobrain/braintrace-snn-experiments: Release version 0.1 https://doi.org/10.5281/zenodo.17847403 (2025).
Wang, C. chaobrain/fitting_drosophila_whole_brain_spiking_model: Release version 0.1 https://doi.org/10.5281/zenodo.17892849 (2025).
Wang, C. & Msra Xmq. chaobrain/braintrace-shd-experiments: Release version 0.1 https://doi.org/10.5281/zenodo.17791424 (2025).
Yin, Y., Chen, X., Ma, C., Wu, J. & Tan, K. C. Efficient online learning for networks of two-compartment spiking neurons. 2024 International Joint Conference on Neural Networks, 1–8 (2024).
Nowotny, T., Turner, J. P. & Knight, J. C. Loss shaping enhances exact gradient learning with eventprop in spiking neural networks. Neuromorph. Comput. Eng. 5, 014001 (2025).
Samadzadeh, A., Far, F. S. T., Javadi, A., Nickabadi, A. & Chehreghani, M. H. Convolutional spiking neural networks for spatio-temporal feature extraction. Neural Process. Lett. 55, 6979–6995 (2023).
Sun, H. et al. A synapse-threshold synergistic learning approach for spiking neural networks. IEEE Trans. Cogn. Dev. Syst. 16, 544–558 (2023).
Acknowledgements
This work was supported by the Young Scientists Fund of the National Natural Science Foundation of China (No. 3240070449, CMW), the National Natural Science Foundation of China (No. T2421004, SW), and the Science and Technology Innovation 2030-Brain Science and Brain-inspired Intelligence Project (No. 2021ZD0200204, SW).
Author information
Authors and Affiliations
Guangdong Institute of Intelligence Science and Technology, Hengqin, Zhuhai, Guangdong, China
Chaoming Wang, Yuxiang Huan & Si Wu
School of Psychological and Cognitive Sciences, Peking University, Beijing, China
Xingsi Dong & Si Wu
Peking-Tsinghua Center for Life Sciences, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China
Xingsi Dong & Si Wu
Institute of Cognitive Neuroscience, University College London, London, England, UK
Zilong Ji
Microsoft Research Asia, Shanghai, China
Mingqing Xiao
Westlake Institute for Advanced Study, Westlake University, Hangzhou, Zhejiang, China
Jiedong Jiang
Janelia Research Campus, Howard Hughes Medical Institute, Chevy Chase, MD, USA
Xiao Liu
PKU-IDG/McGovern Institute for Brain Research, Peking University, Beijing, China
Si Wu
Center of Quantitative Biology, Peking University, Beijing, China
Si Wu
Authors
- Chaoming Wang
- Xingsi Dong
- Zilong Ji
- Mingqing Xiao
- Jiedong Jiang
- Xiao Liu
- Yuxiang Huan
- Si Wu
Contributions
Conceptualization and Methodology: C.M.W., X.S.D., J.D.J., S.W. Software and Investigation: C.M.W. Analysis: C.M.W., X.S.D., Z.L.J., M.Q.X., J.D.J., X.L. Theorem Proof: X.S.D., C.M.W. Visualization: C.M.W., X.L., Z.L.J. Writing: C.M.W., Z.L.J., S.W. Writing (Review & Editing): C.M.W., Z.L.J., M.Q.X., X.S.D., J.D.J., X.L., S.W. Resources: C.M.W., Y.X.H., S.W. Funding Acquisition: C.M.W., S.W.
Corresponding authors
Correspondence to Chaoming Wang or Si Wu.
Ethics declarations
Competing interests
The authors declare no competing interests.
Peer review
Peer review information
Nature Communications thanks Thomas Nowotny and the other anonymous reviewer(s) for their contribution to the peer review of this work. A peer review file is available.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Wang, C., Dong, X., Ji, Z. et al. Model-agnostic linear-memory online learning in spiking neural networks. Nat Commun (2026). https://doi.org/10.1038/s41467-026-68453-w
Received: 23 October 2024
Accepted: 05 January 2026
Published: 19 January 2026
DOI: https://doi.org/10.1038/s41467-026-68453-w