The Detection of Pulp Stones with Automatic Deep Learning in Panoramic Radiographies: An AI Pilot Study
Related papers
Development of a Deep Learning Algorithm for Periapical Disease Detection in Dental Radiographs
Diagnostics, 2020
Periapical radiolucencies, which can be detected on panoramic radiographs, are among the most common radiographic findings in dentistry and have a differential diagnosis including infections, granulomas, cysts, and tumors. In this study, we investigate how accurately 24 oral and maxillofacial (OMF) surgeons assess the presence of periapical lucencies on panoramic radiographs, and we compare these findings to the performance of a predictive deep-learning algorithm that we developed using a curated data set of 2902 de-identified panoramic radiographs. The mean diagnostic positive predictive value (PPV) of OMF surgeons based on their assessment of panoramic radiographic images was 0.69 (±0.13), indicating that dentists on average falsely diagnose 31% of cases as radiolucencies. However, the mean diagnostic true positive rate (TPR) was 0.51 (±0.14), indicating that on average 49% of all radiolucencies were missed. We demonstrate that the deep learning algorithm achie...
Research Square, 2021
The objective of this study is to assess the diagnostic accuracy of dental caries detection on panoramic radiographs using deep-learning algorithms. A convolutional neural network (CNN), based on MobileNet V2, was trained on a reference data set consisting of 400 cropped panoramic images for the detection of carious lesions in mandibular and maxillary third molars. For this pilot study, the trained MobileNet V2 was applied to a test set consisting of 100 cropped OPG(s). The detection accuracy and the area under the curve (AUC) were calculated. The proposed method achieved an accuracy of 0.87, a sensitivity of 0.87, a specificity of 0.86, and an AUC of 0.90 for the detection of carious lesions of third molars on OPG(s). A high diagnostic accuracy was achieved in caries detection in third molars based on the MobileNet V2 algorithm as presented. This is beneficial for the further development of a deep-learning-based automated third molar removal assessment in the future.
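Several abstracts in this list report accuracy, sensitivity, and specificity for a binary detector. As a reminder of how those three figures relate, here is a minimal sketch computing them from confusion-matrix counts; the counts below are hypothetical illustrations, not the study's data:

```python
def binary_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity and specificity from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    return accuracy, sensitivity, specificity

# Hypothetical counts for a 100-image test set (not taken from the study):
acc, sens, spec = binary_metrics(tp=43, fp=7, tn=44, fn=6)
print(round(acc, 2), round(sens, 2), round(spec, 2))  # → 0.87 0.88 0.86
```

Note that accuracy alone can mislead when the classes are imbalanced, which is why these papers report sensitivity and specificity (and AUC) alongside it.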
Detection of Periapical Lesions on Panoramic Radiographs Using Deep Learning
Applied Sciences
Dentists could fail to notice periapical lesions (PLs) while examining panoramic radiographs. Accordingly, this study aimed to develop an artificial intelligence (AI) model designed to address this problem. Materials and methods: a total of 18618 periapical root areas (PRAs) on 713 panoramic radiographs were annotated and classified as having or not having PLs. An AI model consisting of two convolutional neural networks (CNNs), a detector and a classifier, was trained on the images. The detector localized PRAs using a bounding-box-based object detection model, while the classifier classified the extracted PRAs as PL or not-PL using a fine-tuned CNN. The classifier was trained and validated on a balanced subset of the original dataset that included 3249 PRAs, and tested on 707 PRAs. Results: the detector achieved an average precision of 74.95%, while the classifier accuracy, sensitivity and specificity were 84%, 81% and 86%, respectively. When integrating both detection and classification m...
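The two-stage pipeline described above (detect candidate regions, then classify each crop) can be sketched as follows. The detector and classifier here are toy stand-ins for illustration only; the study's actual models are trained CNNs:

```python
# Sketch of a detect-then-classify pipeline: a detector proposes bounding
# boxes for periapical root areas (PRAs), each crop is passed to a binary
# classifier. Both stages below are toy stand-ins, not the study's models.

def detect_pras(image):
    """Stand-in detector: returns (x, y, w, h) boxes. A real system would
    use a bounding-box object detection CNN here."""
    return [(0, 0, 2, 2), (2, 2, 2, 2)]

def classify_pra(crop):
    """Stand-in classifier: flags a crop as 'PL' if its mean intensity is
    low (dark = radiolucent). A real system would use a fine-tuned CNN."""
    pixels = [p for row in crop for p in row]
    return "PL" if sum(pixels) / len(pixels) < 128 else "not-PL"

def crop(image, box):
    x, y, w, h = box
    return [row[x:x + w] for row in image[y:y + h]]

# Toy 4x4 "radiograph": a dark region in the top-left, bright elsewhere.
image = [[10, 10, 200, 200],
         [10, 10, 200, 200],
         [200, 200, 200, 200],
         [200, 200, 200, 200]]

results = [classify_pra(crop(image, box)) for box in detect_pras(image)]
print(results)  # → ['PL', 'not-PL']
```

Splitting detection from classification, as the study does, lets each network be trained and evaluated on its own task, at the cost of compounding errors when the two stages are integrated.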
Classification of caries in third molars on panoramic radiographs using deep learning
Scientific Reports
The objective of this study is to assess the classification accuracy of dental caries on panoramic radiographs using deep-learning algorithms. A convolutional neural network (CNN), based on MobileNet V2, was trained on a reference data set consisting of 400 cropped panoramic images for the classification of carious lesions in mandibular and maxillary third molars. For this pilot study, the trained MobileNet V2 was applied to a test set consisting of 100 cropped PR(s). The classification accuracy and the area under the curve (AUC) were calculated. The proposed method achieved an accuracy of 0.87, a sensitivity of 0.86, a specificity of 0.88, and an AUC of 0.90 for the classification of carious lesions of third molars on PR(s). A high accuracy was achieved in caries classification in third molars based on the MobileNet V2 algorithm as presented. This is beneficial for the further development of a deep-learning-based automated third molar removal assessment in the future.
Dentomaxillofacial Radiology, 2021
Objective: This study evaluated the use of a deep-learning approach for the automated detection and numbering of deciduous teeth in children as depicted on panoramic radiographs. Methods and materials: An artificial intelligence (AI) algorithm (CranioCatch, Eskisehir, Turkey) using Faster R-CNN Inception v2 (COCO) models was developed to automatically detect and number deciduous teeth as seen on pediatric panoramic radiographs. The algorithm was trained and tested on a total of 421 panoramic images. System performance was assessed using a confusion matrix. Results: The AI system was successful in detecting and numbering the deciduous teeth of children as depicted on panoramic radiographs. The sensitivity and precision rates were high. The estimated sensitivity, precision, and F1 score were 0.9804, 0.9571, and 0.9686, respectively. Conclusion: Deep-learning-based AI models are a promising tool for the automated charting of panoramic dental radiographs from children. In addition to servin...
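The F1 score reported above is the harmonic mean of precision and sensitivity (recall); a quick check confirms that the three published figures are mutually consistent:

```python
# F1 = harmonic mean of precision and sensitivity; plugging in the
# abstract's reported values recovers its reported F1 score.
precision, sensitivity = 0.9571, 0.9804
f1 = 2 * precision * sensitivity / (precision + sensitivity)
print(round(f1, 4))  # → 0.9686
```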
This study aims to investigate the effect of using an artificial intelligence (AI) system (Diagnocat, Inc., San Francisco, USA) for caries detection by comparing cone-beam computed tomography (CBCT) evaluation results with and without the software. 500 CBCT volumes were scored by three dentomaxillofacial radiologists for the presence of caries, separately, on a five-point confidence scale, without and with the aid of the AI system. After the visual evaluation, the deep convolutional neural network model generated a radiological report and the observers scored again using the AI interface. The ground truth was determined by a hybrid approach. Intra- and inter-observer agreements were evaluated with sensitivity, specificity, accuracy, and kappa statistics. 6008 surfaces were determined as ‘presence of caries’ and 13928 surfaces as ‘absence of caries’ for the ground truth. The areas under the ROC curve of Observers 1, 2, and 3 were found to be 0.855/0.920, 0.863/0.917, and 0.747/0.903, respecti...
Diagnostic Charting on Panoramic Radiography Using Deep-Learning Artificial Intelligence System
2021
Aims of the Study: Radiographic examination is a significant part of the clinical routine for the diagnosis, management, and follow-up of disease. Work on artificial intelligence in dentistry shows that deep-learning techniques are of sufficient quality and effectiveness to diagnose and interpret images in dental practice. This study therefore aimed to evaluate diagnostic charting on panoramic radiography using a deep-learning AI system. Methods: 1084 anonymized dental panoramic radiographs were labeled for 10 different dental situations, including crown, pontic, root-canal-treated tooth, implant, implant-supported crown, impacted tooth, residual root, filling, caries, and dental calculus. An AI model (CranioCatch, Eskisehir, Turkey) based on a deep CNN method was proposed. A Faster R-CNN Inception v2 (COCO) model implemented with the TensorFlow library was used for model development. The training and validation data sets were used to predict and generate optimal CNN algorit...
Automated feature detection in dental periapical radiographs by using deep learning
Oral Surgery, Oral Medicine, Oral Pathology and Oral Radiology, 2021
The aim of this study was to investigate automated feature detection, segmentation, and quantification of common findings in periapical radiographs (PRs) by using deep learning (DL)-based computer vision techniques. Study Design. Caries, alveolar bone recession, and interradicular radiolucencies were labeled on 206 digital PRs by 3 specialists (2 oral pathologists and 1 endodontist). The PRs were divided into "Training and Validation" and "Test" data sets consisting of 176 and 30 PRs, respectively. Multiple transformations of the image data were used as input to deep neural networks during training. Outcomes of existing and purpose-built DL architectures were compared to identify the most suitable architecture for automated analysis. Results. The U-Net architecture and its variant significantly outperformed Xnet and SegNet in all metrics. The overall best-performing architecture on the validation data set was "U-Net+Densenet121" (mean intersection over union [mIoU] = 0.501; Dice coefficient = 0.569). Performance of all architectures degraded on the "Test" data set; "U-Net" delivered the best performance (mIoU = 0.402; Dice coefficient = 0.453). Interradicular radiolucencies were the most difficult to segment. Conclusions. DL has potential for automated analysis of PRs but warrants further research. Among existing off-the-shelf architectures, U-Net and its variants delivered the best performance. Further performance gains can be obtained via purpose-built architectures and a larger multicentric cohort. (Oral Surg Oral Med Oral Pathol Oral Radiol 2020;000:1-10)
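The two overlap metrics quoted above, intersection over union (IoU) and the Dice coefficient, can be sketched on flat binary masks. The masks below are illustrative, not the study's data:

```python
# IoU and Dice on flat binary segmentation masks (1 = lesion pixel).

def iou(pred, truth):
    """Intersection over union of two binary masks."""
    inter = sum(p and t for p, t in zip(pred, truth))
    union = sum(p or t for p, t in zip(pred, truth))
    return inter / union

def dice(pred, truth):
    """Dice coefficient: 2|A∩B| / (|A| + |B|)."""
    inter = sum(p and t for p, t in zip(pred, truth))
    return 2 * inter / (sum(pred) + sum(truth))

pred  = [1, 1, 1, 0, 0, 0]  # illustrative predicted mask
truth = [0, 1, 1, 1, 0, 0]  # illustrative ground-truth mask
print(iou(pred, truth), round(dice(pred, truth), 2))  # → 0.5 0.67
```

Dice is always at least as large as IoU for the same masks, which is why the paper's Dice figures sit above its mIoU figures.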
Detection and Numbering of Teeth in Panoramic Radiographic Images Using Deep Neural Networks
Russian Electronic Journal of Radiology, 2024
Today, with the advancement of artificial intelligence methods, it is possible to evaluate these images automatically in order to save the clinician's time. Purpose. To employ convolutional neural networks (CNNs) for tooth segmentation and numbering in panoramic radiographic images. The study utilized a data set with ample volume and diversity and employed cutting-edge deep-learning algorithms for the task of tooth segmentation and numbering. Implementing and utilizing this method can enhance the efficiency of clinical diagnosis and treatment procedures. Materials and Methods. The data set includes 527 panoramic images that were selected from the archives of the Radiology Department of the Faculty of Dentistry of Tabriz. The images were then labeled by an oral and maxillofacial radiologist according to the FDI numbering system. The segmentation was done using the U-Net architecture, and its output entered the VGG-16 network for numbering. Eighty percent of the data was used for network training, 10% for validation, and another 10% for network testing. Results. The results obtained from the U-Net network for tooth segmentation on the original data were a sensitivity of 98.9%, a specificity of 98.4%, and a Dice coefficient of 95.4%. For tooth numbering using the VGG-16 network architecture, we obtained a sensitivity, specificity, and accuracy of 98.58%, 99.93%, and 96.8%, respectively. In the examination and diagnosis of implants, retained roots, and extracted teeth, accuracies of 98.45%, 97.1%, and 98.2% were obtained, respectively. Conclusion. The obtained results are favorable compared to similar studies, and in the future, with the development of these methods, this approach can be a useful aid in the automatic analysis of panoramic images and other dental images.
Deep Learning for the Radiographic Detection of Apical Lesions
Journal of Endodontics, 2019
Introduction: We applied deep convolutional neural networks (CNNs) to detect apical lesions (ALs) on panoramic dental radiographs. Methods: Based on a synthesized data set of 2001 tooth segments from panoramic radiographs, a custom-made 7-layer deep neural network, parameterized by a total number of 4,299,651 weights, was trained and validated via 10 times repeated group shuffling. Hyperparameters were tuned using a grid search. Our reference test was the majority vote of 6 independent examiners who detected ALs on an ordinal scale (0, no AL; 1, widened periodontal ligament, uncertain AL; 2, clearly detectable lesion, certain AL). Metrics were the area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and positive/negative predictive values. Subgroup analysis for tooth types was performed, and different margins of agreement of the reference test were applied (base case: 2; sensitivity analysis: 6). Results: The mean (standard deviation) tooth level prevalence of both uncertain and certain ALs was 0.16 (0.03) in the base case. The AUC of the CNN was 0.85 (0.04). Sensitivity and specificity were 0.65 (0.12) and 0.87 (0.04), respectively. The resulting positive predictive value was 0.49 (0.10), and the negative predictive value was 0.93 (0.03). In molars, sensitivity was significantly higher than in other tooth types, whereas specificity was lower. When only certain ALs were assessed, the AUC was 0.89 (0.04). Increasing the margin of agreement to 6 significantly increased the AUC to 0.95 (0.02), mainly because the sensitivity increased to 0.74 (0.19). Conclusions: A moderately deep CNN trained on a limited amount of image data showed satisfying discriminatory ability to detect ALs on panoramic radiographs.
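The PPV and NPV reported above follow from sensitivity, specificity, and prevalence via Bayes' rule; plugging in the abstract's base-case values recovers its published figures:

```python
# PPV and NPV from sensitivity, specificity and prevalence (Bayes' rule),
# using the base-case values reported in the abstract above.
prev, sens, spec = 0.16, 0.65, 0.87
ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
print(round(ppv, 2), round(npv, 2))  # → 0.49 0.93
```

This also makes the low PPV intuitive: at 16% prevalence, even a fairly specific detector accumulates enough false positives from the large negative class to pull the PPV below 0.5.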