Data availability
The data, documentation and code can be accessed with DataLad by cloning the dataset repository from GitHub (https://github.com/courtois-neuromod/cneuromod-things), or by downloading an archived version (1.0.1) of the repository hosted on Zenodo (ref. 1). The repository contains a nested set of submodules with symbolic links to larger data files hosted remotely on the Canadian Open Neuroscience Platform (CONP) servers (ref. 54). Data files can be pulled with DataLad via these symbolic links without requiring registered access. Data are released under a liberal Creative Commons (CC0) license that authorizes the re-sharing of derivatives.
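To illustrate this access pattern, here is a minimal sketch using DataLad's Python API, assuming DataLad and git-annex are installed; the file path pulled at the end is hypothetical and stands in for any file listed in the repository:

```python
# Minimal sketch: clone the dataset and fetch selected file content with
# DataLad's Python API. Large data files arrive as lightweight symbolic
# links until their content is explicitly requested.
import datalad.api as dl

# Clone the top-level repository from GitHub.
ds = dl.clone(
    source="https://github.com/courtois-neuromod/cneuromod-things",
    path="cneuromod-things",
)

# Install the nested submodules without downloading their file content.
ds.get(".", recursive=True, get_data=False)

# Pull the content of one file from the remote (CONP) servers.
ds.get("THINGS/glmsingle/README.md")  # hypothetical path, for illustration
```

The same two-step pattern (install submodules, then get specific paths) keeps local storage small when only a subset of the dataset is needed.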
Code availability
BOLD data were acquired while stimuli were presented with the PsychoPy library (https://www.psychopy.org/), and were preprocessed with the fMRIPrep pipeline (refs. 38,39; versions 20.2.3 and 20.2.5; https://fmriprep.org/en/stable/). All CNeuroMod data acquisition and data preprocessing scripts are available on the CNeuroMod GitHub (https://github.com/courtois-neuromod).
• Data acquisition scripts: https://github.com/courtois-neuromod/task_stimuli
• BOLD data preprocessing scripts: https://github.com/courtois-neuromod/ds_prep
The code used to generate derivatives from the preprocessed CNeuroMod-THINGS data is integrated throughout the https://github.com/courtois-neuromod/cneuromod-things repository and its submodules. This repository includes scripts to:
• extract trial-wise and image-wise beta scores per voxel from preprocessed BOLD data (‘THINGS/glmsingle/code/glmsingle’; see the sketches after this list)
• generate maps of temporal signal-to-noise ratio from preprocessed BOLD data (‘THINGS/tsnr/code’, ‘retinotopy/tsnr/code’, ‘fLoc/tsnr/code’)
• perform t-tests and GLM fixed-effects analyses to assess memory recognition effects on the BOLD data (‘THINGS/glm-memory/code’)
• quantify in-scan head motion (‘THINGS/glmsingle/code/qc’)
• organize trial-wise metrics (stimulus image annotations, task conditions, task accuracy, reaction time, gaze fixation compliance; ‘THINGS/fmriprep/sourcedata/things/code’)
• analyze behavioural performance on the image recognition task (‘THINGS/behaviour/code’)
• process eye-tracking data (‘THINGS/fmriprep/sourcedata/things/code’)
• perform data-driven analyses to characterize stimulus representation in the brain signal (‘THINGS/glmsingle/code/descriptive’)
• derive ROI masks from two functional localizer tasks (‘retinotopy/prf/code’ and ‘fLoc/rois/code’)
• visualize results onto flattened cortical maps of the subjects’ brains (‘anatomical/pycortex’)
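To make the first two items concrete, two minimal sketches follow. Neither reproduces the repository's scripts; input names, option values and file names are hypothetical placeholders.

First, a sketch of the GLMsingle Python interface used to estimate trial-wise beta scores (the repository's own configuration may differ):

```python
# Minimal sketch of single-trial beta estimation with GLMsingle.
# `design_matrices` is a list of (time x conditions) arrays, one per run;
# `bold_runs` is a list of 4D (x, y, z, time) arrays; `stimdur` and `tr`
# are the stimulus duration and repetition time in seconds. All inputs
# here are hypothetical placeholders.
from glmsingle.glmsingle import GLM_single

opt = {
    "wantlibrary": 1,                   # fit a voxel-wise HRF from a library
    "wantglmdenoise": 1,                # derive data-driven nuisance regressors
    "wantfracridge": 1,                 # voxel-wise ridge regularization
    "wantmemoryoutputs": [0, 0, 0, 1],  # keep only the final model in memory
}
glm = GLM_single(opt)
results = glm.fit(design_matrices, bold_runs, stimdur, tr, outputdir="glmsingle_out")
betas = results["typed"]["betasmd"]     # single-trial beta estimates
```

Second, a sketch of the voxel-wise temporal signal-to-noise ratio (tSNR) computation, i.e., the temporal mean divided by the temporal standard deviation of each voxel's time series (the input filename is hypothetical):

```python
# Minimal sketch of a voxel-wise tSNR map from a preprocessed 4D BOLD file.
import nibabel as nib
import numpy as np

img = nib.load("sub-01_task-things_desc-preproc_bold.nii.gz")  # hypothetical file
data = img.get_fdata()                # shape: (x, y, z, time)

mean_signal = data.mean(axis=-1)      # temporal mean per voxel
std_signal = data.std(axis=-1)        # temporal standard deviation per voxel

tsnr = np.zeros_like(mean_signal)
mask = std_signal > 0                 # avoid division by zero outside the brain
tsnr[mask] = mean_signal[mask] / std_signal[mask]

nib.save(nib.Nifti1Image(tsnr.astype(np.float32), img.affine), "tsnr_map.nii.gz")
```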
The cneuromod-things repository also includes a collection of Jupyter Notebooks with step-by-step instructions and code to pull data files directly from the DataLad collection and reproduce the figures included in the current manuscript. The content of these Notebooks can be viewed directly on GitHub (https://github.com/courtois-neuromod/cneuromod-things/tree/main/datapaper), and provides concrete examples of analyses that can be conducted with the current dataset.
References
1. St-Laurent, M. et al. cneuromod-things. Zenodo https://doi.org/10.5281/ZENODO.17881592 (2025).
2. Hebart, M. N. et al. THINGS: A database of 1,854 object concepts and more than 26,000 naturalistic object images. PLoS One 14, e0223792 (2019).
3. Stoinski, L. M. & Hebart, M. N. THINGS object concept and object image database. OSF https://doi.org/10.17605/OSF.IO/JUM2F (2019).
4. Chang, N. et al. BOLD5000, a public fMRI dataset while viewing 5000 visual images. Sci. Data 6, 49 (2019).
5. Chang, N., Pyles, J., Prince, J., Tarr, M. & Aminoff, E. BOLD5000 Collection. Carnegie Mellon University https://doi.org/10.1184/R1/C.5325683 (2021).
6. Allen, E. J. et al. A massive 7T fMRI dataset to bridge cognitive neuroscience and artificial intelligence. Nat. Neurosci. 25, 116–126 (2022).
7. Hebart, M. N. et al. THINGS-data, a multimodal collection of large-scale datasets for investigating object representations in human brain and behavior. Elife 12 (2023).
8. Hebart, M. et al. THINGS-data: A multimodal collection of large-scale datasets for investigating object representations in brain and behavior. Figshare+ https://doi.org/10.25452/FIGSHARE.PLUS.C.6161151 (2025).
9. Hebart, M. N. et al. THINGS-fMRI. OpenNeuro https://doi.org/10.18112/OPENNEURO.DS004192.V1.0.5 (2022).
10. Gong, Z. et al. A large-scale fMRI dataset for the visual processing of naturalistic scenes. Sci. Data 10, 559 (2023).
11. Gong, Z. et al. A large-scale fMRI dataset for the visual processing of naturalistic scenes. OpenNeuro https://doi.org/10.18112/openneuro.ds004496.v2.1.2 (2023).
12. Grootswagers, T., Zhou, I., Robinson, A. K., Hebart, M. N. & Carlson, T. A. Human EEG recordings for 1,854 concepts presented in rapid serial visual presentation streams. Sci. Data 9, 3 (2022).
13. Grootswagers, T., Zhou, I., Robinson, A. & Carlson, T. Human electroencephalography recordings from 50 subjects for 22,248 images from 1,854 object concepts. OpenNeuro https://doi.org/10.18112/OPENNEURO.DS003825.V1.1.0 (2021).
14. Grootswagers, T. THINGS-EEG: Human electroencephalography recordings for 22,248 images from 1,854 object concepts. figshare https://doi.org/10.6084/M9.FIGSHARE.14721282 (2021).
15. Zheng, C. Y., Pereira, F., Baker, C. I. & Hebart, M. N. Revealing interpretable object representations from human behavior. Preprint at http://arxiv.org/abs/1901.02915 (2019).
16. Hebart, M. N., Zheng, C. Y., Pereira, F. & Baker, C. I. Revealing the multidimensional mental representations of natural objects underlying human similarity judgements. Nat. Hum. Behav. 4, 1173–1185 (2020).
17. Stoinski, L. M., Perkuhn, J. & Hebart, M. N. THINGSplus: New norms and metadata for the THINGS database of 1854 object concepts and 26,107 natural object images. Behav. Res. Methods https://doi.org/10.3758/s13428-023-02110-8 (2023).
18. Kramer, M. A., Hebart, M. N., Baker, C. I. & Bainbridge, W. A. The features underlying the memorability of objects. Sci. Adv. 9, eadd2981 (2023).
19. Boyle, J. et al. The Courtois NeuroMod project: quality assessment of the initial data release (2020). In 2023 Conference on Cognitive Computational Neuroscience https://doi.org/10.32470/ccn.2023.1602-0 (Cognitive Computational Neuroscience, Oxford, United Kingdom, 2023).
20. Thirion, B., Thual, A. & Pinho, A. L. From deep brain phenotyping to functional atlasing. Curr. Opin. Behav. Sci. 40, 201–212 (2021).
21. Poldrack, R. A. et al. Long-term neural and physiological phenotyping of a single human. Nat. Commun. 6, 8885 (2015).
22. Poldrack, R. Myconnectome. OpenNeuro https://doi.org/10.18112/openneuro.ds000031.v2.0.2 (2023).
23. Gordon, E. M. et al. The Midnight Scan Club (MSC) dataset. OpenNeuro https://doi.org/10.18112/openneuro.ds000224.v1.0.4 (2023).
24. Pinho, A. L. et al. Individual Brain Charting, a high-resolution fMRI dataset for cognitive mapping. Sci. Data 5, 180105 (2018).
25. Pinho, A. L. et al. Individual Brain Charting dataset extension, second release of high-resolution fMRI data for cognitive mapping. Sci. Data 7, 353 (2020).
26. Pinho, A. L. et al. Individual Brain Charting dataset extension, third release for movie watching and retinotopy data. Sci. Data 11, 590 (2024).
27. Pinho, A. L. G., Hertz-Pannier, L. & Thirion, B. IBC. OpenNeuro https://doi.org/10.18112/openneuro.ds002685.v2.0.0 (2024).
28. Gifford, A. T. et al. The Algonauts Project 2025 challenge: How the human brain makes sense of multimodal movies. Preprint at https://doi.org/10.48550/ARXIV.2501.00504 (2025).
29. Stigliani, A., Weiner, K. S. & Grill-Spector, K. Temporal processing capacity in high-level visual cortex is domain specific. J. Neurosci. 35, 12412–12424 (2015).
30. Kay, K. N., Winawer, J., Mezer, A. & Wandell, B. A. Compressive spatial summation in human visual cortex. J. Neurophysiol. 110, 481–494 (2013).
31. Thaler, L., Schütz, A. C., Goodale, M. A. & Gegenfurtner, K. R. What is the best fixation target? The effect of target shape on stability of fixational eye movements. Vision Res. 76, 31–42 (2013).
32. Power, J. D. et al. Customized head molds reduce motion during resting state fMRI scans. Neuroimage 189, 141–149 (2019).
33. Peirce, J. et al. PsychoPy2: Experiments in behavior made easy. Behav. Res. Methods 51, 195–203 (2019).
34. Kassner, M., Patera, W. & Bulling, A. Pupil: An open source platform for pervasive eye tracking and mobile gaze-based interaction. Preprint at arXiv [cs.CV] (2014).
35. Harel, Y. et al. Open design and validation of a reproducible videogame controller for MRI and MEG. PsyArXiv https://doi.org/10.31234/osf.io/m2x6y (2022).
36. Xu, J. et al. Evaluation of slice accelerations using multiband echo planar imaging at 3 T. Neuroimage 83, 991–1001 (2013).
37. Glasser, M. F. et al. The Human Connectome Project’s neuroimaging approach. Nat. Neurosci. 19, 1175–1187 (2016).
38. Esteban, O. et al. fMRIPrep: a robust preprocessing pipeline for functional MRI. Nat. Methods 16, 111–116 (2019).
39. Esteban, O. et al. Analysis of task-based functional MRI data preprocessed with fMRIPrep. Nat. Protoc. 15, 2186–2202 (2020).
40. Boudreau, M. et al. Longitudinal reproducibility of brain and spinal cord quantitative MRI biomarkers. Imaging Neuroscience 3 (2025).
41. Esteban, O., Markiewicz, C. J., Blair, R., Poldrack, R. A. & Gorgolewski, K. J. sMRIPrep: Structural MRI PREProcessing Workflows. Zenodo https://doi.org/10.5281/ZENODO.15579662 (2025).
42. Courtois Project on Neuronal Modelling. CNeuroMod Documentation Version 82adf004. Zenodo https://doi.org/10.5281/zenodo.17644207 (2025).
43. Esteban, O. et al. Nipy/nipype: 1.10.0. Zenodo https://doi.org/10.5281/ZENODO.15054182 (2025).
44. Gao, J. S., Huth, A. G., Lescroart, M. D. & Gallant, J. L. Pycortex: an interactive surface visualizer for fMRI. Front. Neuroinform. 9, 23 (2015).
45. Prince, J. S. et al. Improving the accuracy of single-trial fMRI response estimates using GLMsingle. Elife 11 (2022).
46. Kay, K. N., Rokem, A., Winawer, J., Dougherty, R. F. & Wandell, B. A. GLMdenoise: a fast, automated technique for denoising task-based fMRI data. Front. Neurosci. 7, 247 (2013).
47. Kay, K. analyzePRF: stimuli and code for pRF analysis. http://kendrickkay.net/analyzePRF (accessed 14 July 2021).
48. Benson, N. C. et al. The Human Connectome Project 7 Tesla retinotopy dataset: Description and population receptive field analysis. J. Vis. 18, 23 (2018).
49. Benson, N. C. & Winawer, J. Bayesian analysis of retinotopic maps. Elife 7 (2018).
50. Nilearn contributors et al. Nilearn. Zenodo https://doi.org/10.5281/ZENODO.8397156 (2025).
51. Julian, J. B., Fedorenko, E., Webster, J. & Kanwisher, N. An algorithmic method for functionally defining regions of interest in the ventral visual pathway. Neuroimage 60, 2357–2364 (2012).
52. Kanwisher, N. GSS. Kanwisher Lab https://web.mit.edu/bcs/nklab/GSS.shtml#download (accessed 15 November 2022).
53. Halchenko, Y. O. et al. DataLad: distributed system for joint management of code, data, and their relationship. J. Open Source Softw. 6, 3262 (2021).
54. Harding, R. J. et al. The Canadian Open Neuroscience Platform-An open science framework for the neuroscience community. PLoS Comput. Biol. 19, e1011230 (2023).
55. Gorgolewski, K. J. et al. The brain imaging data structure, a format for organizing and describing outputs of neuroimaging experiments. Sci. Data 3, 160044 (2016).
56. Epstein, R. A., Parker, W. E. & Feiler, A. M. Two kinds of fMRI repetition suppression? Evidence for dissociable neural mechanisms. J. Neurophysiol. 99, 2877–2886 (2008).
57. Larsson, J. & Smith, A. T. fMRI repetition suppression: neuronal adaptation or stimulus expectation? Cereb. Cortex 22, 567–576 (2012).
58. Barron, H. C., Garvert, M. M. & Behrens, T. E. J. Repetition suppression: a means to index neural representations using BOLD? Philos. Trans. R. Soc. Lond. B Biol. Sci. 371 (2016).
59. van der Maaten, L. & Hinton, G. Visualizing data using t-SNE. J. Mach. Learn. Res. 9, 2579–2605 (2008).
Acknowledgements
This work was supported by the Courtois Foundation and an NSERC Discovery Grant awarded to L.B. (RGPIN-2025-06022), and by a Max Planck Research Group Grant (M.TN.A.NEPF0009) and an ERC Starting Grant COREDIM (StG-2021-101039712) awarded to M.N.H.
Funding
Open Access funding enabled and organized by Projekt DEAL.
Author information
Authors and Affiliations
Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
Marie St-Laurent, Oliver Contier, Katja Seeliger & Martin N. Hebart
Centre de recherche de l’Institut universitaire de gériatrie de Montréal, Montréal, Canada
Marie St-Laurent, Basile Pinsard, Elizabeth DuPre, Valentina Borghesani, Julie A. Boyle & Lune Bellec
Département de psychologie, Université de Montréal, Montréal, Canada
Elizabeth DuPre & Lune Bellec
Martin Luther University Halle-Wittenberg, Medical Faculty, Halle, Germany
Katja Seeliger
Faculté de psychologie et des sciences de l’éducation, Université de Genève, Genève, Switzerland
Valentina Borghesani
Department of Medicine, Justus Liebig University Giessen, Giessen, Germany
Martin N. Hebart
Center for Mind, Brain and Behavior (CMBB), Universities of Marburg, Giessen, Darmstadt, Germany
Martin N. Hebart
Authors
- Marie St-Laurent
- Basile Pinsard
- Oliver Contier
- Elizabeth DuPre
- Katja Seeliger
- Valentina Borghesani
- Julie A. Boyle
- Lune Bellec
- Martin N. Hebart
Contributions
M.S.L.: Methodology, Software, Data curation, Formal analysis, Visualization, Writing – original draft, review & editing. B.P.: Conceptualization, Project administration, Investigation, Methodology, Software, Data curation, Formal analysis, Visualization. O.C.: Software, Formal analysis, Visualization, Writing – review & editing. E.D.: Software, Formal analysis, Data curation, Writing – review & editing. K.S.: Software, Writing – review & editing. V.B.: Conceptualization. J.B.: Conceptualization, Funding acquisition, Resources, Project administration, Investigation, Data curation, Writing – review & editing. L.B.: Conceptualization, Funding acquisition, Resources, Project administration, Investigation, Data curation, Formal analysis, Visualization, Writing – review & editing. M.N.H.: Conceptualization, Funding acquisition, Supervision, Methodology, Software, Data curation, Formal analysis, Visualization, Writing – original draft, review & editing.
Corresponding author
Correspondence to Marie St-Laurent.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
St-Laurent, M., Pinsard, B., Contier, O. et al. CNeuroMod-THINGS, a densely-sampled fMRI dataset for visual neuroscience. Sci Data (2026). https://doi.org/10.1038/s41597-026-06591-y
Received: 25 August 2025
Accepted: 08 January 2026
Published: 29 January 2026
DOI: https://doi.org/10.1038/s41597-026-06591-y