Data Descriptor
Published: 27 December 2025
Scientific Data (2025)
We are providing an unedited version of this manuscript to give early access to its findings. Before final publication, the manuscript will undergo further editing. Please note there may be errors present which affect the content, and all legal disclaimers apply.
Abstract
Head Mounted Displays (HMDs) employ mixed reality (MR) technologies across various fields, affording users immersive visual experiences. However, prolonged use of HMDs to view stereoscopic content may lead to visual fatigue. This study explores stereo-visual fatigue induced by viewing MR visual stimuli, focusing on EEG and peripheral physiological signals. We recorded 24-channel EEG signals from 23 healthy participants and combined them with electrocardiogram (ECG), electrodermal activity (EDA), peripheral oxygen saturation (SpO2), respiratory, and skin temperature data recorded from 13 participants. Features extracted from these physiological signals were combined into multimodal datasets, and experimental verification showed that these datasets are effective for identifying visual fatigue. The study addresses the need for publicly available datasets for further investigation of stereo-visual fatigue in MR environments.
Data availability
The raw EEG dataset is publicly accessible via the OpenNeuro platform (https://doi.org/10.18112/openneuro.ds005416.v1.0.0)20. The other relevant datasets can be obtained from the Figshare platform (https://doi.org/10.6084/m9.figshare.30359713)21.
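For readers who wish to work with the EEG release in Python, the following minimal sketch reads a single BIDS-formatted recording with the MNE-BIDS package. This is an illustration only: MNE-BIDS is not part of the released code, and the subject and task labels shown are hypothetical placeholders rather than values taken from the dataset.

```python
# Minimal sketch for reading one recording from the BIDS-formatted EEG
# release. MNE-BIDS is an assumption here (it is not named in the paper),
# and the subject/task labels below are hypothetical placeholders.
from mne_bids import BIDSPath, read_raw_bids

bids_root = "ds005416"            # local path to the downloaded dataset
bids_path = BIDSPath(
    subject="01",                 # hypothetical subject label
    task="stereovision",          # hypothetical task label
    datatype="eeg",
    root=bids_root,
)

raw = read_raw_bids(bids_path=bids_path)  # returns an mne.io.Raw object
raw.load_data()
print(raw.info)                           # channels, sampling rate, etc.
```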
Code availability
The code for the experimental design and the classification models is openly available in a GitHub repository at https://github.com/taochunguang2022/mixed_reality_stereo_vision_model. The code for preprocessing the physiological signals and constructing the brain networks is available at https://github.com/taochunguang2022/mixed_reality_stereo_vision. All experiment protocol scripts were developed in Visual Studio 2022. The experimental scenarios were built using Unity Editor 2021.3.4f1c1 and Mixed Reality Toolkit version 2.8.3. All classification model scripts were developed in Python 3.9, using PyTorch 1.12.0 with CUDA 11.3 and torch-geometric 2.3.1, as the models primarily involve graph neural networks. In the raw EEG pre-processing procedure, EEGLAB 2021.1 was used to read the original Neuracle EEG data and to organize it into BIDS format. For all packages and libraries on which the Python project depends, please refer to the requirements.txt file in the GitHub repository.
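As context for the model repository, the sketch below shows the general shape of a graph-level classifier built with torch-geometric, the library named above. It is not the authors' released architecture; the node feature size, hidden width, number of classes, and layer types are placeholder assumptions intended only to illustrate how such a model is assembled.

```python
# Sketch of a generic graph-level classifier in torch-geometric 2.x.
# This is NOT the released model; feature sizes, class count, and layer
# choices are placeholder assumptions.
import torch
import torch.nn.functional as F
from torch_geometric.data import Batch, Data
from torch_geometric.nn import GCNConv, global_mean_pool


class GraphClassifier(torch.nn.Module):
    """Two GCN layers, mean pooling over nodes, then a linear head."""

    def __init__(self, in_channels=5, hidden=64, num_classes=2):
        super().__init__()
        self.conv1 = GCNConv(in_channels, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.lin = torch.nn.Linear(hidden, num_classes)

    def forward(self, x, edge_index, batch):
        x = F.relu(self.conv1(x, edge_index))
        x = F.relu(self.conv2(x, edge_index))
        x = global_mean_pool(x, batch)  # one embedding per graph
        return self.lin(x)              # class logits


# Toy usage: a single graph with 24 nodes (e.g. one node per EEG channel)
# and random features/edges, shown only to illustrate tensor shapes.
x = torch.randn(24, 5)                       # 24 nodes, 5 features each
edge_index = torch.randint(0, 24, (2, 100))  # random connectivity (COO format)
batch = Batch.from_data_list([Data(x=x, edge_index=edge_index)])

model = GraphClassifier()
logits = model(batch.x, batch.edge_index, batch.batch)  # shape: [1, 2]
```

In the released pipeline, the input graphs would presumably correspond to the brain networks constructed from the physiological signals, but the exact graph definition and model configuration should be taken from the repositories themselves.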
References
Hu, H. et al. Application and prospect of mixed reality technology in medical field. Current Medical Science 39, 1–6 (2019).
Han, J., Bae, S. H. & Suk, H. J. Visual discomfort and visual fatigue: comparing head-mounted display and smartphones. Journal of the Ergonomics Society of Korea 36(4), 293–303 (2017).
Hirota, M. et al. Comparison of visual fatigue caused by head-mounted display for virtual reality and two-dimensional display using objective and subjective evaluation. Ergonomics 62(6), 759–766 (2019).
Hua, H. Enabling focus cues in head-mounted displays. Proceedings of the IEEE 105(5), 805–824 (2017).
Lambooij, M. et al. Visual discomfort and visual fatigue of stereoscopic displays: a review. Journal of Imaging Science and Technology 53(3), 30201–1 (2009).
Liu, K. et al. A feature fusion method for driving fatigue of shield machine drivers based on multiple physiological signals and auto-encoder. Sustainability 15(12), 9405 (2023).
Fan, L. et al. Eye movement characteristics and visual fatigue assessment of virtual reality games with different interaction modes. Frontiers in Neuroscience 17, 1173127 (2023).
Pan, T., Wang, H., Si, H., Li, Y. & Shang, L. Identification of pilots’ fatigue status based on electrocardiogram signals. Sensors 21(9), 3003 (2021).
Wang, Z., Zhou, X., Wang, W. & Liang, C. Emotion recognition using multimodal deep learning in multiple psychophysiological signals and video. International Journal of Machine Learning and Cybernetics 11(4), 923–934 (2020).
Di Gregorio, F. et al. Advances in EEG-based functional connectivity approaches to the study of the central nervous system in health and disease. Advances in Clinical and Experimental Medicine 32(6), 607–612 (2023).
Zhang, T., Guo, M., Wang, L. & Li, M. Brain fatigue analysis from virtual reality visual stimulation based on granger causality. Displays 73, 102219 (2022).
Yu, M., Li, Y. & Tian, F. Responses of functional brain networks while watching 2D and 3D videos: An EEG study. Biomedical Signal Processing and Control 68, 102613 (2021).
Boronina, A., Maksimenko, V., Badarin, A. & Grubov, V. Decreased brain functional connectivity is associated with faster responses to repeated visual stimuli. The European Physical Journal Special Topics, 1–11 (2024).
Kim, C. J., Park, S., Won, M. J., Whang, M. & Lee, E. C. Autonomic nervous system responses can reveal visual fatigue induced by 3D displays. Sensors 13(10), 13054–13062 (2013).
Park, S., Won, M. J., Mun, S., Lee, E. C. & Whang, M. Does visual fatigue from 3D displays affect autonomic regulation and heart rhythm? International Journal of Psychophysiology 92(1), 42–48 (2014).
Wang, K. et al. Vigilance estimating in SSVEP-based BCI using multimodal signals, in 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), pp. 5974–5978, IEEE (2021).
Malik, A. S. et al. EEG based evaluation of stereoscopic 3D displays for viewer discomfort. Biomedical Engineering Online 14, 1–21 (2015).
Manshouri, N., Maleki, M. & Kayikcioglu, T. An EEG-based stereoscopic research of the PSD differences in pre and post 2D&3D movies watching. Biomedical Signal Processing and Control 55, 101642 (2020).
Kang, M.-K., Cho, H., Park, H.-M., Jun, S. C. & Yoon, K.-J. A wellness platform for stereoscopic 3D video systems using EEG-based visual discomfort evaluation technology. Applied Ergonomics 62, 158–167 (2017).
Wu, Y., Tao, C. & Li, Q. Fatigue Characterization of EEG under Mixed Reality Stereo Vision. OpenNeuro https://doi.org/10.18112/openneuro.ds005416.v1.0.0 (2024).
Wu, Y., Tao, C. & Li, Q. mixed_reality_stereo_vision_dataset. figshare https://doi.org/10.6084/m9.figshare.30359713 (2025).
Shen, L., Du, W., Wang, C. & Yue, G. Event-related potentials measurement of perception to 3D motion in depth. China Communications 12(5), 86–93 (2015).
Chen, J., Wang, S., He, E., Wang, H. & Wang, L. The architecture of functional brain network modulated by driving during adverse weather conditions. Cognitive Neurodynamics 17(2), 547–553 (2023).
Delorme, A. & Makeig, S. EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. Journal of Neuroscience Methods 134(1), 9–21 (2004).
Pion-Tonachini, L., Kreutz-Delgado, K. & Makeig, S. ICLabel: An automated electroencephalographic independent component classifier, dataset, and website. NeuroImage 198, 181–197 (2019).
Niu, X. et al. Effects of sleep deprivation on functional connectivity of brain regions after high-intensity exercise in adolescents. Sustainability 14(23), 16175 (2022).
Lin, Z. et al. Fatigue driving recognition based on deep learning and graph neural network. Biomedical Signal Processing and Control 68, 102598 (2021).
Achard, S. & Bullmore, E. Efficiency and cost of economical brain functional networks. PLoS Computational Biology 3(2), e17 (2007).
Luo, H., Qiu, T., Liu, C. & Huang, P. Research on fatigue driving detection using forehead EEG based on adaptive multi-scale entropy. Biomedical Signal Processing and Control 51, 50–58 (2019).
Yu, M. et al. EEG-based emotion recognition in an immersive virtual reality environment: From local activity to brain network features. Biomedical Signal Processing and Control 72, 103349 (2022).
Zheng, R., Wang, Z., He, Y. & Zhang, J. EEG-based brain functional connectivity representation using amplitude locking value for fatigue-driving recognition. Cognitive Neurodynamics 16(2), 325–336 (2022).
Ma, X. et al. Enhanced network efficiency of functional brain networks in primary insomnia patients. Frontiers in Psychiatry 9, 46 (2018).
Han, C., Sun, X., Yang, Y., Che, Y. & Qin, Y. Brain complex network characteristic analysis of fatigue during simulated driving based on electroencephalogram signals. Entropy 21(4), 353 (2019).
Gorgolewski, K. J. et al. The brain imaging data structure, a format for organizing and describing outputs of neuroimaging experiments. Scientific Data 3(1), 1–9 (2016).
Pernet, C. R. et al. EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data 6(1), 103 (2019).
Silva, I., Moody, G., Scott, D. J., Celi, L. A. & Mark, R. G. Predicting in-hospital mortality of ICU patients: The PhysioNet/Computing in Cardiology Challenge 2012, in 2012 Computing in Cardiology, pp. 245–248, IEEE (2012).
Zhang, X., Zeman, M., Tsiligkaridis, T. & Zitnik, M. Graph-guided network for irregularly sampled multivariate time series. arXiv preprint arXiv:2110.05357 (2021).
Dosovitskiy, A. et al. An Image is Worth 16 × 16 Words: Transformers for Image Recognition at Scale, in International Conference on Learning Representations (ICLR) (2021).
Che, Z., Purushotham, S., Cho, K., Sontag, D. & Liu, Y. Recurrent neural networks for multivariate time series with missing values. Scientific Reports 8(1), 6085 (2018).
Wu, Z. et al. Connecting the dots: Multivariate time series forecasting with graph neural networks, in Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 753–763 (2020).
Horn, M., Moor, M., Bock, C., Rieck, B. & Borgwardt, K. Set functions for time series, in International Conference on Machine Learning, pp. 4353–4363, PMLR (2020).
Wu, Y. et al. Dynamic gaussian mixture based deep generative model for robust forecasting on sparse multivariate time series. Proceedings of the AAAI Conference on Artificial Intelligence 35(1), 651–659 (2021).
Hamilton, W., Ying, Z. & Leskovec, J. Inductive representation learning on large graphs, in Advances in Neural Information Processing Systems, 30 (2017).
Morris, C. et al. Weisfeiler and Leman go neural: Higher-order graph neural networks, in Proceedings of the AAAI Conference on Artificial Intelligence, pp. 4602–4609 (2019).
Veličković, P. et al. Graph attention networks. arXiv preprint arXiv:1710.10903 (2017).
Xu, K., Hu, W., Leskovec, J. & Jegelka, S. How powerful are graph neural networks? arXiv preprint arXiv:1810.00826 (2018).
Rumelhart, D. E., Hinton, G. E. & Williams, R. J. Learning representations by back-propagating errors. Nature 323(6088), 533–536 (1986).
Zhao, C. et al. The reorganization of human brain networks modulated by driving mental fatigue. IEEE Journal of Biomedical and Health Informatics 21(3), 743–755 (2016).
Breiman, L. Random forests. Machine Learning 45, 5–32 (2001).
Cortes, C. & Vapnik, V. Support-vector networks. Machine Learning 20, 273–297 (1995).
Chen, T. & Guestrin, C. XGBoost: A scalable tree boosting system, in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 785–794 (2016).
Ke, G. et al. Lightgbm: A highly efficient gradient boosting decision tree, in Advances in Neural Information Processing Systems, 30 (2017).
Acknowledgements
This study was supported by the Jilin Scientific and Technology Development Program (grant no. 20240101358JC). Thanks to Mengru Du, Qingyu Na, Yuanyuan Wang, Dianfei Zhao, Tianqi Mu, Jiapeng Wang, Xiangting Jiao, Yu Feng and ChangBin Zhu for their assistance with the data collection.
Author information
Author notes
These authors contributed equally: Yan Wu, Chunguang Tao.
Authors and Affiliations
School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, 130022, China
Yan Wu, Chunguang Tao & Qi Li
Jilin Provincial International Joint Research Center of Brain Informatics and Intelligence Science, Changchun, 130022, China
Yan Wu & Qi Li
Laboratory of Brain Information and Neural Rehabilitation Engineering, Zhongshan Research Institute, Changchun University of Science and Technology, Zhongshan, 528437, China
Yan Wu & Qi Li
Authors
- Yan Wu
- Chunguang Tao
- Qi Li
Contributions
Conceptualization, Wu.Y., Tao.C. and Li.Q.; methodology, Wu.Y. and Tao.C.; software, Wu.Y. and Li.Q.; validation, Wu.Y. and Tao.C.; formal analysis, Wu.Y. and Tao.C.; data curation, Wu.Y. and Tao.C.; writing—original draft preparation, Wu.Y.; writing—review and editing, Tao.C. and Li.Q.; supervision, Li.Q. All authors have read and agreed to the published version of the manuscript.
Corresponding author
Correspondence to Qi Li.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Wu, Y., Tao, C. & Li, Q. Fatigue Characterization Multimodal Datasets for Stereo Vision in Mixed Reality. Sci Data (2025). https://doi.org/10.1038/s41597-025-06474-8
Received: 02 January 2025
Accepted: 15 December 2025
Published: 27 December 2025
DOI: https://doi.org/10.1038/s41597-025-06474-8