- Data Descriptor
- Open access
- Published: 29 January 2026
Scientific Data (2026)
We are providing an unedited version of this manuscript to give early access to its findings. Before final publication, the manuscript will undergo further editing. Please note there may be errors present which affect the content, and all legal disclaimers apply.
Abstract
In orthognathic surgery, accurate segmentation of the pterygopalatine and mandibular canals in maxillofacial cone beam computed tomography (CBCT) scans is crucial: it provides the information needed to prevent nerve damage during surgery and significantly reduces the risk of surgical complications. However, the high cost of data collection, strict patient privacy protection, and ethical constraints have hampered the performance of existing deep learning methods for pterygopalatine and mandibular canal segmentation, limiting their practical applicability in clinical settings. To address this challenge and advance pterygopalatine and mandibular canal segmentation in maxillofacial CBCT scans, we carefully constructed and publicly released a large dataset for this task. The dataset includes 191 patient cases and comprehensively covers the key anatomical structures of the maxillary pterygopalatine canal and the mandibular canal, both of which are crucial in orthognathic surgery. Notably, this dataset is the first to include data on the maxillary pterygopalatine canal, filling a significant gap in this field. Its release will greatly accelerate the development of deep learning-based segmentation methods, provide clinicians with more accurate reconstruction tools, and ultimately improve the safety and efficiency of surgical procedures.
Data availability
The PMCanalSeg dataset is available at https://doi.org/10.7910/DVN/RTIGTP.
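As a minimal illustrative sketch only, a downloaded CBCT volume and its canal label map could be loaded with SimpleITK as shown below; the file names, folder layout, and image format (NIfTI) are assumptions and may not match the actual archive structure on Dataverse.

```python
# Hypothetical sketch: load one CBCT scan and its segmentation label map with SimpleITK.
# Paths and the NIfTI format are placeholders, not the confirmed PMCanalSeg layout.
import SimpleITK as sitk
import numpy as np

image = sitk.ReadImage("PMCanalSeg/images/case_001.nii.gz")   # CBCT volume (assumed NIfTI)
label = sitk.ReadImage("PMCanalSeg/labels/case_001.nii.gz")   # canal annotations (assumed NIfTI)

volume = sitk.GetArrayFromImage(image)   # numpy array, shape (slices, height, width)
mask = sitk.GetArrayFromImage(label)

print("Volume shape:", volume.shape)
print("Voxel spacing (x, y, z):", image.GetSpacing())
print("Label values present:", np.unique(mask))
```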
Code availability
All code used for dataset loading, data analysis, and the experiments in this article can be found in the GitHub repository at https://github.com/lgh010319/PMCSeg. When analyzing the data, we recommend following the evaluation metrics used in our experiments (see the sketch below), and we encourage sharing of code and models to promote reproducibility.
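For reference, the following is a minimal sketch of a per-class Dice score, a standard overlap metric for canal segmentation. It is an illustrative implementation under assumed label conventions (1 for the mandibular canal, 2 for the pterygopalatine canal), not necessarily the exact metric code used in the PMCSeg repository.

```python
# Minimal sketch of a per-class Dice score for 3D segmentation masks.
# The label-to-structure mapping below is an assumption for illustration only.
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, label: int, eps: float = 1e-8) -> float:
    """Dice coefficient for one label in integer-valued prediction/target masks."""
    pred_mask = (pred == label)
    target_mask = (target == label)
    intersection = np.logical_and(pred_mask, target_mask).sum()
    return (2.0 * intersection + eps) / (pred_mask.sum() + target_mask.sum() + eps)

# Demo with random masks (0 = background, 1 = mandibular canal, 2 = pterygopalatine canal)
rng = np.random.default_rng(0)
pred = rng.integers(0, 3, size=(64, 64, 64))
target = rng.integers(0, 3, size=(64, 64, 64))
for lbl, name in [(1, "mandibular canal"), (2, "pterygopalatine canal")]:
    print(f"Dice for {name}: {dice_score(pred, target, lbl):.3f}")
```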
Acknowledgements
This work was supported by the Fund of Baiqiu En Program of Jilin University (No. 2025B15), PR China; the Fund of Scientific and Technological Capabilities Promotion Project of Health Commission of Jilin Province (No. 2025WS-KA015), PR China; the Fund of Medical Specialized Concept Validation Project of Jilin University (No. 24GNYZ41), PR China; the National Natural Science Foundation of China (No. 62202199), PR China; and the Science and Technology Development Plan of Jilin Province (No. 20230101071JC), PR China.
Author information
Authors and Affiliations
College of Computer Science and Technology, Jilin University, Changchun, 130012, China
Guohui Li & Yinan Lu
Department of Oral, Plastic and Aesthetic Surgery, Hospital of Stomatology, Jilin University, Changchun, 130021, Jilin Province, China
Guomin Wu & Lin Wang
School of Artificial Intelligence, Jilin University, Changchun, 130012, Jilin Province, China
Rui Ma
Engineering Research Center of Knowledge-Driven Human-Machine Intelligence, MOE, Changchun, 130012, Jilin Province, China
Rui Ma
Authors
- Guohui Li
- Yinan Lu
- Guomin Wu
- Lin Wang
- Rui Ma
Contributions
G.L. and L.W. contributed to data curation, methodology development, software implementation, original draft preparation, and manuscript revision. G.W. provided data oversight, project supervision, and critical revisions. Y.L. and R.M. led manuscript refinement, funding acquisition, project oversight, and administrative coordination. All authors participated in manuscript review and approved the final version.
Corresponding authors
Correspondence to Lin Wang or Rui Ma.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Li, G., Lu, Y., Wu, G. et al. PMCanalSeg: A dataset for automatic segmentation of the pterygopalatine and mandibular canals from 3D CBCT images. Sci Data (2026). https://doi.org/10.1038/s41597-026-06620-w
Received: 17 April 2025
Accepted: 13 January 2026
Published: 29 January 2026
DOI: https://doi.org/10.1038/s41597-026-06620-w