In this section, we provide a curated list of weekly video content from our YouTube channel, focusing on recent publications, key concepts, and trends in Artificial Intelligence applied to Dentistry. Each entry includes essential metadata to help users quickly navigate our library of educational content. All videos are free and publicly available.
Authors: Dr. Nielsen Santos Pereira
Year: 2025
Description: Manual landmarking in 3D cephalometry is accurate but time-consuming, which limits its use in daily practice and research. This study, published in Scientific Reports (2021), proposed an automatic system for 3D cephalometric landmark detection using multi-stage deep reinforcement learning (DRL). The model simulates how experts sequentially identify landmarks and applies that logic to CT images. Tested on a dataset of 28 patients, the system reached a mean error of 1.96 ± 0.78 mm and high detection rates at the 2.5 mm to 4 mm accuracy thresholds. Unlike other methods, it does not require prior segmentation or 3D mesh reconstruction, which makes the process faster and more direct. DOI: https://doi.org/10.1016/j.dental.2025.01.005
Keywords:
Authors: Dr. Nielsen Santos Pereira
Year: 2025
Description: This study analyzed 3D cone-beam CT images from 60 patients with unilateral cleft lip and palate (UCP) to measure asymmetry of the maxilla. Using a deep learning-based segmentation protocol with manual refinement, the researchers compared the cleft and non-cleft sides. The results showed that the cleft side had significant reductions in maxillary and alveolar volume, length, and height, while the anterior width was increased. Statistical analysis confirmed that defect dimensions were closely related to the variability of the cleft-side maxilla. In conclusion, the study identified hypoplasia in the pyriform aperture and alveolar crest areas, highlighting that defect structures contribute to maxillary asymmetry in UCP patients. DOI: https://doi.org/10.1111/ocr.12482
Keywords:
Authors: Dr. Nielsen Santos Pereira
Year: 2025
Description: This video explores one of the earliest deep learning studies focused on fully automated 3D tooth segmentation and labeling. The authors developed a two-level hierarchical CNN capable of separating teeth and gingiva from dental meshes with remarkable precision — 99.06% accuracy for upper models and 98.79% for lower ones. By combining boundary-aware mesh simplification, graph-based label optimization, and fuzzy refinement, the study presents a clinically useful pipeline for orthodontic CAD systems. It highlights how artificial intelligence can bring automation, precision, and efficiency to digital dentistry. DOI: 10.1109/TVCG.2018.2839685
Keywords:
Authors: Dr. Nielsen Santos Pereira
Year: 2025
Description: This study investigates how panoramic radiographs can be used to predict dysphagia through quantitative radiographic features. Analyzing 77 patients who underwent both panoramic X-rays and videofluorographic swallowing studies, the researchers found that the vertical hyoid bone position was the key indicator associated with dysphagia risk. The study identified a clear cutoff level (AUC = 0.72) — when the hyoid bone lies below the mandibular border line, the likelihood of dysphagia increases significantly. These findings establish a foundation for future AI models designed to automatically assess swallowing risk and enhance patient safety in dental treatment, especially for elderly or frail populations. DOI: https://doi.org/10.3390/ijerph19084529
Keywords:
Authors: Dr. Nielsen Santos Pereira
Year: 2025
Description: This study explores how AI-based feature extraction from dental panoramic radiographs can assist in the early detection of osteoporosis — a condition characterized by decreased bone density and fracture risk. Using 575 radiographs (267 osteoporotic and 308 normal), the researchers compared 13 types of image features, such as Gabor filters, Haar wavelets, and steerable filters, combined with SOM/LVQ and SVM models. The best-performing approach (SOM/LVQ with Gabor features) achieved 92.6% accuracy, 97.1% sensitivity, and 86.4% specificity, showing that texture and edge orientation in mandibular bone regions are reliable indicators of low bone density. The authors conclude that dental panoramic radiographs, routinely acquired in clinics, could become a low-cost AI-assisted screening tool for identifying early signs of osteoporosis — especially useful in populations where DEXA scans are not readily available. DOI: https://doi.org/10.1016/j.cmpb.2019.105301
Keywords:
Authors: Dr. Nielsen Santos Pereira
Year: 2025
Description: This IEEE 2018 study introduces a CNN-based auto-positioning system that detects and corrects patient dental-arch misalignment during rotational panoramic imaging. Using simulated datasets of 5,166 panoramic reconstructions, the algorithm estimates forward–backward deviations within ±20 mm and reconstructs sharper DPRs with minimized anterior blur. Four CNN models (13–15 layers) achieved mean error < 0.5 mm, showing that AI can automatically reposition the dental arch for clearer diagnostic images and reduce the need for retakes. DOI: https://doi.org/10.1109/embc.2018.8512732
Keywords:
Authors: Dr. Nielsen Santos Pereira
Year: 2025
Description: This study, published in BMC Oral Health (2022), presents a comprehensive artificial intelligence framework for automatic tooth detection, numbering, and diagnostic charting using panoramic, periapical, and bitewing radiographs. The proposed deep learning system uses segmentation models (U-Net and U-Net + ResNet-34) to extract tooth, bone, and CEJ masks, assign FDI tooth numbers via multi-scale matching, and arrange full-mouth series (FMS) templates. It achieved a precision and recall of 0.96 (panoramic match) and 0.87 (repository match), outperforming other state-of-the-art models. Importantly, the framework integrates additional diagnostic modules — periodontal bone loss and caries detection — enabling the generation of clinical reports with numbered teeth, which facilitates communication, documentation, and treatment planning. DOI: https://doi.org/10.1186/s12903-022-02514-6
Keywords:
Authors: Dr. Nielsen Santos Pereira
Year: 2025
Description: This 2022 study from Tsinghua University and Peking University School and Hospital of Stomatology proposed a relation-based framework for automated teeth recognition in periapical radiographs. Using 1,250 periapical X-rays with 4,336 labeled teeth, the researchers developed a multi-task CNN integrating a Label Reconstruction technique, a Proposal Correlation Module, and a Teeth Sequence Refinement Module to improve both classification and localization.
Keywords:
Authors: Dr. Nielsen Santos Pereira
Year: 2025
Description: This 2023 retrospective study from Marmara University used clinical intraoral photographs (137 images: 65 healthy, 72 OLP) to train a Google Inception V3 deep-learning model on the CranioCatch platform. Trained on histopathologically confirmed data, the model achieved 100% accuracy in classifying normal vs OLP mucosa, demonstrating that AI can analyze standard photographs without radiation to assist clinicians in diagnosing mucosal diseases.
Keywords:
Authors: Dr. Nielsen Santos Pereira
Year: 2025
Description: This study proposes a CNN-based system using transfer learning to detect apical lesions on periapical radiographs. Using adaptive thresholding, advanced image enhancement, and multiple CNN architectures, the model achieved up to 96.21% accuracy with AlexNet. The approach improves clinical decision support and reduces diagnostic workload.
Keywords:
Authors: Dr. Nielsen Santos Pereira
Year: 2025
Description: This study evaluates three deep learning semantic segmentation algorithms (3DiscNet, U-Net, and SegNet-Basic) for automatic segmentation of the TMJ articular disc on MRI. Using 217 MR images, 3DiscNet and SegNet-Basic achieved superior Dice coefficient, sensitivity, and PPV, demonstrating the potential of deep learning to support TMD diagnosis.
Keywords:
Authors: Dr. Nielsen Santos Pereira
Year: 2025
Description: This 2022 study from Kastamonu University, Karabuk University, and Ordu University (Turkey) proposed an enhanced Mask R-CNN–based system for automatic tooth segmentation and numbering on bitewing radiographs, using the FDI notation. A total of 1,200 bitewing X-rays were annotated by oral radiologists and divided into 1,000 for training and 200 for testing. The network used ResNet-101 + FPN as backbone and achieved outstanding performance after 400 epochs:
- Segmentation: 100% precision, 97.49% mAP, and 97.36% F1-score.
- Tooth numbering: 94.35% precision, 91.51% mAP, and 93.33% F1-score.
The framework also compared 12 deep learning architectures (ResNet, DenseNet, MobileNet, GoogleNet, HarDNet, etc.), confirming Mask R-CNN as the top performer for both segmentation and numbering. This research demonstrates how AI can support radiographic analysis, enabling accurate identification and classification of individual teeth using standardized FDI numbering.
Keywords:
Authors: Dr. Nielsen Santos Pereira
Year: 2025
Description: This 2021 study from Yonsei University College of Dentistry (Seoul, Korea) proposed a fully automated deep learning model for detection of cephalometric landmarks on lateral cephalometric radiographs. A dataset of 950 clinical images was used, annotated by two orthodontists (15 and 5 years of experience) after calibration. The framework integrates two CNN modules:
1. ROI Machine – locates 13 regions of interest, one for each landmark.
2. Detection Machine – performs coordinate prediction via a CNN with 8 convolution, 5 pooling, and 2 fully connected layers (ELU activation).
Using 800 training, 100 validation, and 50 test images, the model achieved:
- Mean Radial Error (MRE): 1.84 mm
- Successful Detection Rate (SDR): 36.1% within expert variability
- Inter-examiner agreement: ICC ≈ 0.99
The proposed system reached accuracy comparable to orthodontic experts and even surpassed human performance for certain landmarks (A-point, UIB, LIB, ANS). This marks a significant step toward fully automated, expert-level cephalometric analysis. DOI: https://doi.org/10.5624/isd.20210077
Keywords:
Authors: Dr. Nielsen Santos Pereira
Year: 2025
Description: This 2023 study from the Harvard-MIT Division of Health Sciences and Technology and Massachusetts General Hospital developed a multimodal deep learning approach to quantify tongue muscle strain during speech. By integrating cine-MRI and diffusion tensor imaging (DTI), researchers captured both dynamic motion and muscle fiber orientation to reconstruct 3D strain maps over time. A deep learning pipeline estimated strain tensors and visualized directional deformation during phoneme articulation (/i/, /s/, /l/).
Results:
- Correlation between predicted and measured strain: 0.93
- Temporal resolution: 25 fps
- Processing time: < 5 min per subject (GPU accelerated)
- Strain hotspots: genioglossus and superior longitudinal muscles
This multimodal MRI + AI framework provides new insights into tongue biomechanics, helping clinicians understand how muscle activation patterns shape speech. It opens possibilities for improved diagnosis and rehabilitation in dysarthria and post-surgical speech recovery. DOI: https://doi.org/10.1044/2023_JSLHR-23-00123
Keywords:
Authors: Dr. Nielsen Santos Pereira
Year: 2025
Description: This 2022 study from Yonsei University College of Dentistry and The Catholic University of Korea evaluated how well deep learning models could classify ultrasound images of facial regions, an essential step for AI-guided aesthetic and diagnostic procedures. A total of 1,440 ultrasound images from 86 healthy volunteers were acquired using an E-CUBE 15 Platinum (ALPINION) device. Fifteen pre-trained CNN architectures (including VGG-16/19, ResNet-50/101, DenseNet-201, Inception-v3, and MobileNet-v2) were tested to classify nine facial regions such as the forehead, nose, cheeks, and periorbital areas, using both augmented and non-augmented datasets.
Results:
- Best performing model: VGG-16 with 96.9% accuracy (F1 = 96.7%)
- Mean performance across all models: 94% ± 1.5% accuracy
- Data augmentation had minimal impact (+0.2% improvement)
- Key features learned: contours of skin and bone, not muscles or vessels
- BRISQUE analysis confirmed ultrasound quality strongly influenced model performance
- LIME visualization revealed explainable areas of focus in each model
Conclusion: The study demonstrates that classical CNNs (like the VGGs) outperform modern, deeper networks in ultrasound classification tasks, due to their ability to extract stable low-level features. This work establishes a valuable benchmark for future AI-based facial ultrasonography applications in both cosmetic and clinical settings. DOI: https://doi.org/10.1038/s41598-022-20969-z
Keywords:
Authors: Dr. Nielsen Santos Pereira
Year: 2025
Description: This 2022 study from Nagasaki University (Japan) compared two artificial intelligence approaches, the Sliding Window Method (SWM) and Mask R-CNN, for detecting cell nuclei in oral cytological images. Early detection of oral cancer depends on recognizing nuclear atypia and changes in the nucleus-to-cytoplasm (N/C) ratio. Using Papanicolaou-stained liquid-based cytology slides, the researchers analyzed:
- SWM: 1,576 cropped tiles (96×96 px) for nucleus vs non-nucleus classification.
- Mask R-CNN: 130 annotated images (Class II–III lesions) for instance segmentation of nuclei.
Images were captured at 40× magnification (1280×1024 px) using a Nikon Eclipse Ti-S microscope.
Results:
- SWM: 93.1% accuracy after 20 epochs (No. 2 CNN model).
- Mask R-CNN: detected 37 nuclei with only 1 false positive (error rate = 0.027).
- Loss decreased from 0.89 to 0.45 over 40 epochs.
- Mask R-CNN outperformed SWM by reducing false detections and accurately delineating nuclear contours.
Conclusion: The study demonstrated that Mask R-CNN is superior to SWM for recognizing blue-stained nuclei in oral cytology. This approach represents a crucial step toward AI-assisted screening for early oral cancer detection, enabling faster and more reliable cytological evaluation. DOI: https://doi.org/10.1186/s13000-022-01245-0
Keywords:
Authors: Dr. Nielsen Santos Pereira
Year: 2025
Description: This study, published in Imaging Science in Dentistry (2020), evaluates the clinical efficacy of a fully automated bone age assessment (BAA) system based on the Tanner–Whitehouse 3 (TW3) method. Developed at Seoul National University, the system uses deep neural networks (Faster R-CNN and VGGNet) to automatically detect and classify the 13 skeletal maturity regions in hand–wrist radiographs. In a dataset of 80 children and adolescents (ages 7–15), the AI achieved excellent correlation (R² = 0.95) with radiologist evaluations, showing no significant differences (p > 0.05). This TW3-based approach demonstrates that AI can reliably and efficiently estimate bone age, supporting orthodontic planning and pediatric growth assessment. Key Takeaway: AI models can perform bone age estimation with accuracy comparable to experienced radiologists, improving consistency and workflow efficiency in clinical settings. DOI: https://doi.org/10.5624/isd.2020.50.3.237
Keywords:
Authors: Dr. Nielsen Santos Pereira
Year: 2025
Description: This study, published in Annals of Plastic Surgery (2021), introduces a machine learning model that automatically evaluates 3D facial symmetry before and after orthognathic surgery. Using transfer learning and 3D contour line features from facial scans, the AI system achieved a 21% improvement in symmetry after surgery and provided objective, reproducible results. This web-based tool enhances clinical decision-making and doctor–patient communication by visualizing postoperative symmetry outcomes.
Keywords:
Authors: Dr. Nielsen Santos Pereira
Year: 2025
Description: This study from KU Leuven and Karolinska Institutet presents a two-route deep learning framework for detecting and differentiating radicular cysts and periapical granulomas on panoramic radiographs. Differentiating these two common apical lesions is clinically critical: cysts require surgical enucleation, while granulomas are usually treated by root canal therapy. A total of 249 panoramic images were used, including 80 cysts, 72 granulomas, 197 normal controls, and 58 other radiolucent lesions to ensure robustness. The framework combines:
- MobileNet V2 (53 layers): dual-input classification (global + local lesion images)
- YOLO v3 (53-layer Darknet): localization network for lesion bounding boxes
Training used 90% of the data with augmentation and 10-fold cross-validation; the remaining 10% was held out for testing.
Keywords:
Authors: Dr. Nielsen Santos Pereira
Year: 2025
Description: Published in EClinicalMedicine (2020), this large multicenter Chinese study developed a cascaded CNN model (SSD + DenseNet121) to detect oral squamous cell carcinoma (OSCC) using 4,244 clinical images from several hospitals. The algorithm achieved AUC = 0.983, accuracy > 91%, and AUC = 0.995 for early-stage lesions, performing at expert specialist level. A smartphone-based app prototype was built for real-time screening, a milestone toward accessible, AI-driven oral cancer diagnosis. DOI: https://doi.org/10.1016/j.eclinm.2020.100558
Keywords:
Authors: Dr. Nielsen Santos Pereira
Year: 2025
Description: This nationwide multi-centre study developed and validated a cascade CNN model for automatic identification of 20 cephalometric landmarks on lateral cephalograms. Using RetinaNet for region detection and U-Net for precise landmark localization, the system achieved a mean error of 1.36 ± 0.98 mm, comparable to experienced orthodontists. Performance was evaluated across sensors, hospitals and machine vendors, demonstrating high generalizability.
Keywords:
Authors: Dr. Nielsen Santos Pereira
Year: 2025
Description: Artificial intelligence is increasingly being integrated into clinical dentistry, and one of its most promising applications is implant planning based on three-dimensional cone-beam computed tomography (CBCT). This study evaluated the performance of an AI system (Diagnocat, Inc., San Francisco, USA) compared with manual analysis by human experts using InvivoDental 6.0 (Anatomage, San Jose, USA). A total of 75 CBCT scans and 508 implant regions were examined to measure bone height and thickness and to detect canals, sinuses/fossae, and missing tooth regions. The jaws were divided into anterior, premolar, and molar regions for both maxilla and mandible, and comparisons between AI and manual assessments were performed using Bland–Altman analysis and the Wilcoxon signed-rank test. Results showed that AI measurements of bone height closely matched manual assessments in several regions, while statistically significant differences were found in bone thickness across both jaws. Detection accuracy reached 72.2% for canals, 66.4% for sinuses/fossae, and 95.3% for missing teeth. These findings demonstrate that AI-based systems can achieve strong performance in implant planning and support clinicians by improving efficiency, reproducibility, and diagnostic confidence in dental implantology.
Keywords:
Authors: Dr. Nielsen Santos Pereira
Year: 2025
Description: This study introduced a deep learning method for automatic detection and numbering of teeth on dental periapical radiographs using a Faster R-CNN model with TensorFlow. The dataset included 1,250 periapical images annotated following the FDI numbering system (ISO 3950). The network achieved precision and recall above 90% and a mean IoU of 0.91 when compared to expert annotations. Post-processing steps (overlap filtering, rule-based correction, and prediction of missing teeth) further improved results, achieving performance comparable to that of a junior dentist. The model incorporated prior dental domain knowledge, significantly enhancing tooth numbering accuracy.
Keywords:
Authors: Dr. Nielsen Santos Pereira
Year: 2025
Description: This study evaluated the feasibility of YOLOv3 for automatic caries detection and ICCMS™-based radiographic classification on bitewing radiographs under two IoU thresholds (0.50 and 0.75). A total of 994 annotated images were used for training, 256 for validation, and 175 for testing. Performance was reported across binary (non-carious vs carious), 4-class (0/RA/RB/RC), and 7-class (0, RA1–RA3, RB4, RC5–RC6) settings. YOLOv3 achieved acceptable results at both IoU50 and IoU75; metrics decreased for the 7-class task. The model localized and classified advanced caries (RC6) well, but struggled with initial enamel lesions (RA1).
Keywords:
Authors: Dr. Nielsen Santos Pereira
Year: 2025
Description: This study investigates weakly supervised deep learning methods for predicting malignant transformation in oral epithelial dysplasia (OED) using whole-slide histology images (WSIs). A cohort of 163 WSIs (137 OED cases, 50 with malignant transformation) was analysed. A weakly supervised pipeline using IDaRS (Iterative Draw-and-Rank Sampling) with ResNet-34 achieved the highest performance (AUROC = 0.78; F1 = 0.69). Hotspot analysis identified peri-epithelial lymphocyte (PEL) count, epithelium layer nuclei count, and basal layer nuclei count as significant predictors. Survival analysis showed that PELs and epithelial/basal layer nuclear features improve prognostic stratification. The study demonstrates that deep learning can predict malignant transformation and progression-free survival, offering potential support for clinical risk assessment.
Keywords:
Authors: Dr. Nielsen Santos Pereira
Year: 2025
Description: This study compares automatic deep learning‑based tooth segmentation (DGCNN) with two commercial CAD/CAM segmentation tools (OrthoAnalyzer and Autolign). Using 516 training models and 30 evaluation models, the DGCNN approach achieved high segmentation accuracy and the fastest processing time, demonstrating strong potential for orthodontic diagnostics and appliance manufacturing.
Keywords:
Authors: Dr. Nielsen Santos Pereira
Year: 2025
Description: This study presents a convolutional neural network (CNN) developed for automatic tooth numbering in panoramic radiographs using the FDI system. A total of 8,000 anonymized images were collected from Asisa Dental S.A.U. centers in Madrid (Spain), curated by two experienced dentists. The model combines Matterport Mask R-CNN for object detection and ResNet-101 for classification, leveraging transfer learning from a previous model that achieved 99.24% accuracy in tooth detection. The network was trained on 1,217 curated images (after filtering and quality control). Training involved 53 runs with 60–300 epochs each, varying learning rates between 0.0014 and 0.012. The final model achieved: Accuracy = 93.83% (total loss 6.17%), Tooth detection = 99.24% accuracy, and Tooth numbering = 93.83% accuracy. It correctly identified missing, filled, and metallic teeth in most clinical cases, though occasional numbering errors occurred for pontics and third molars. The authors conclude that the model is reliable enough for use in clinical environments and demonstrates strong potential for automated diagnostic support.
Keywords:
Authors: Dr. Nielsen Santos Pereira
Year: 2025
Description: This study addresses a key challenge in periodontal diagnostics: accurately estimating the clinical attachment level (CAL) from intraoral radiographs. Although deep learning models can predict CAL, bitewing radiographs have a limited field of view, preventing CNNs from analyzing anatomical structures that lie outside the captured region. To overcome this limitation, the authors developed a generative adversarial inpainting network using partial convolutions to reconstruct the missing anatomy and provide additional contextual information for CAL prediction. A large retrospective dataset was used, including 80,326 images for training, 12,901 for validation, and 10,687 for direct comparison between inpainted and non-inpainted methods. Statistical analyses (MBE, MAE, Dunn's pairwise test) demonstrated that the inpainting approach significantly improved prediction performance: the MAE decreased from 1.50 mm to 1.04 mm, and all pairwise comparisons confirmed superior accuracy for the inpainted models. The study concludes that GAN-based inpainting enhances CAL prediction from bitewing and periapical radiographs and achieves accuracy within the clinically acceptable 1 mm threshold. Clinically, this work highlights how AI can compensate for inherent radiographic limitations, offering more reliable assessments even when anatomy falls outside the imaging field.
Keywords:
Authors: Dr. Nielsen Santos Pereira
Year: 2025
Description: In this study, CT images from 66 patients who underwent oral and maxillofacial surgery (OMS) were landmarked manually in MIMICS, and the CT slices were exported as images for recreating the 3D volume. The coordinate data of the landmarks were further processed in Matlab using a principal component analysis (PCA) method, and a patch-based deep neural network model with a three-layer convolutional neural network (CNN) was trained to obtain landmarks from CT images. The evaluation experiment showed that this CNN model could automatically complete landmarking in an average processing time of 37.871 seconds with an average accuracy of 5.785 mm, showing promising potential to relieve the surgeon's workload and reduce the dependence on human experience for OMS landmarking.
Keywords:
Authors: Dr. Nielsen Santos Pereira
Year: 2025
Description: This study introduces a novel AI-based pipeline designed not primarily to perform periodontal ultrasound segmentation, but rather to improve dataset quality by automatically identifying inadequate or low-quality annotations. Periodontal ultrasonography is an emerging imaging modality capable of visualizing gingival and periodontal soft tissues without ionizing radiation. However, manual segmentation of structures such as the gingival margin, alveolar crest, sulcus, and cemento-enamel junction is highly operator-dependent and prone to inconsistency. These inconsistencies significantly reduce the reliability of datasets used for machine learning model development. The authors developed a deep learning segmentation system using a U-Net–based architecture trained to detect four periodontal structures. A dataset of 704 ultrasound images was annotated by trained clinicians and divided into 80% training and 20% testing. Importantly, two training strategies were evaluated: (1) a model trained on all annotations, including low-quality ones, and (2) a model trained only on high-quality, curated annotations. The objective was to investigate whether AI could help identify erroneous annotations by exhibiting poor segmentation performance on those cases. The model trained on curated annotations performed substantially better, with Dice scores improving across all four structures. Visual inspection of segmentation failures revealed that the model consistently struggled on images with noisy or incorrect human labels, allowing the system to act as an automated "annotation auditor." This demonstrates a new conceptual approach: using AI not only for segmentation, but also as a quality control mechanism to enhance dataset reliability before downstream model training. By establishing a quantifiable and automated method for detecting weak annotations, the study provides an important contribution to the emerging field of periodontal ultrasound AI. It highlights the need for rigorous dataset curation and suggests that machine learning can play a central role in improving annotation consistency and accelerating clinical translation of periodontal ultrasound imaging.
Keywords:
Authors: Dr. Nielsen Santos Pereira
Year: 2025
Description: This study introduces AggregateNet, an innovative deep learning model designed for the automated classification of cervical vertebrae maturation (CVM) stages using lateral cephalometric radiographs. A total of 1,018 lateral cephalometric radiographs were labeled by experts according to CVM stages. The cervical vertebrae were automatically cropped using an object detector, and the model was trained with two inputs: the cropped vertebral images and the patient's age, which was concatenated with the extracted image feature vector. AggregateNet employs a parallel-structured CNN architecture combined with a pre-processing layer of tunable directional edge-enhancement filters, designed to emphasize morphological contours relevant to CVM assessment. Data augmentation was applied to reduce overfitting, especially because the dataset was separated by gender for improved model fitting. The model's performance was compared with several well-known architectures (ResNet20, Xception, MobileNetV2, and a custom CNN with directional filters). AggregateNet achieved the highest validation accuracy, reaching 82.35% for female subjects and 75.0% for male subjects. Removing the directional filters led to a clear drop in performance, highlighting their importance. Overall, the study demonstrates that AggregateNet, combined with directional edge filters, provides superior accuracy for fully automated CVM stage classification, representing a meaningful advancement for AI applications in orthodontics and growth assessment. DOI: https://doi.org/10.1111/ocr.12644
Keywords: