In this section, we provide a curated list of weekly video content from our YouTube channel, focusing on recent publications, key concepts, and trends in Artificial Intelligence applied to Dentistry. Each entry includes essential metadata to help users quickly navigate our library of educational content. All videos are free and publicly available.
Authors: Dr. Nielsen Santos Pereira
Year: 2025
Description: This 2021 study from Yonsei University College of Dentistry (Seoul, Korea) proposed a fully automated deep learning model for detecting cephalometric landmarks on lateral cephalometric radiographs. A dataset of 950 clinical images was used, annotated by two orthodontists (15 and 5 years of experience) after calibration. The framework integrates two CNN modules:
1. ROI Machine – locates 13 regions of interest for each landmark.
2. Detection Machine – predicts coordinates via a CNN with 8 convolution, 5 pooling, and 2 fully connected layers (ELU activation).
Using 800 training, 100 validation, and 50 test images, the model achieved:
- Mean Radial Error (MRE): 1.84 mm
- Successful Detection Rate (SDR): 36.1% within expert variability
- Inter-examiner agreement: ICC ≈ 0.99
The proposed system reached accuracy comparable to orthodontic experts and even surpassed human performance for certain landmarks (A-point, UIB, LIB, ANS). This marks a significant step toward fully automated, expert-level cephalometric analysis. 🔗 DOI: https://doi.org/10.5624/isd.20210077
Keywords:
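The Mean Radial Error and Successful Detection Rate discussed in this entry are simple geometric metrics over predicted versus reference landmark coordinates. A minimal Python sketch, using hypothetical coordinates rather than data from the study:

```python
import math

def mean_radial_error(pred, truth):
    """Mean Euclidean distance (mm) between predicted and reference landmarks."""
    dists = [math.dist(p, t) for p, t in zip(pred, truth)]
    return sum(dists) / len(dists)

def sdr(pred, truth, threshold_mm=2.0):
    """Successful Detection Rate: fraction of landmarks within the threshold."""
    dists = [math.dist(p, t) for p, t in zip(pred, truth)]
    return sum(d <= threshold_mm for d in dists) / len(dists)

# Illustrative 2-D coordinates (mm), not taken from the paper.
pred  = [(10.0, 12.0), (34.5, 20.1), (50.2, 48.9)]
truth = [(10.5, 12.4), (36.0, 21.0), (50.0, 49.0)]
print(round(mean_radial_error(pred, truth), 3))  # mean error over 3 landmarks
print(sdr(pred, truth))                          # share of landmarks within 2 mm
```

The 2 mm threshold is a common convention in cephalometric studies; the study's own "within expert variability" criterion may differ.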
Authors: Dr. Nielsen Santos Pereira
Year: 2025
Description: This 2023 study from the Harvard-MIT Division of Health Sciences and Technology and Massachusetts General Hospital developed a multimodal deep learning approach to quantify tongue muscle strain during speech. By integrating cine-MRI and diffusion tensor imaging (DTI), researchers captured both dynamic motion and muscle fiber orientation to reconstruct 3D strain maps over time. A deep learning pipeline estimated strain tensors and visualized directional deformation during phoneme articulation (/i/, /s/, /l/).
Results:
- Correlation between predicted and measured strain: 0.93
- Temporal resolution: 25 fps
- Processing time: < 5 min per subject (GPU accelerated)
- Strain hotspots: genioglossus and superior longitudinal muscles
This multimodal MRI + AI framework provides new insights into tongue biomechanics, helping clinicians understand how muscle activation patterns shape speech. It opens possibilities for improved diagnosis and rehabilitation in dysarthria and post-surgical speech recovery. 🔗 DOI: https://doi.org/10.1044/2023_JSLHR-23-00123
Keywords:
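The reported 0.93 agreement between predicted and measured strain is a Pearson correlation. A self-contained sketch with hypothetical per-frame strain values (not data from the study):

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Hypothetical strain magnitudes across six video frames; illustrative only.
measured  = [0.01, 0.03, 0.07, 0.12, 0.09, 0.04]
predicted = [0.02, 0.03, 0.06, 0.11, 0.10, 0.04]
print(round(pearson_r(measured, predicted), 3))
```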
Authors: Dr. Nielsen Santos Pereira
Year: 2025
Description: This 2022 study from Yonsei University College of Dentistry and The Catholic University of Korea evaluated how well deep learning models could classify ultrasound images of facial regions, an essential step for AI-guided aesthetic and diagnostic procedures. A total of 1,440 ultrasound images from 86 healthy volunteers were acquired using an E-CUBE 15 Platinum (ALPINION) device. Fifteen pre-trained CNN architectures (including VGG-16/19, ResNet-50/101, DenseNet-201, Inception-v3, and MobileNet-v2) were tested to classify nine facial regions such as the forehead, nose, cheeks, and periorbital areas, using both augmented and non-augmented datasets.
Results:
- Best performing model: VGG-16 with 96.9% accuracy (F1 = 96.7%)
- Mean performance across all models: 94% ± 1.5% accuracy
- Data augmentation had minimal impact (+0.2% improvement)
- Key features learned: contours of skin and bone, not muscles or vessels
- BRISQUE analysis confirmed ultrasound quality strongly influenced model performance
- LIME visualization revealed explainable areas of focus in each model
Conclusion: The study demonstrates that classical CNNs (like the VGGs) outperform modern, deeper networks in ultrasound classification tasks, due to their ability to extract stable low-level features. This work establishes a valuable benchmark for future AI-based facial ultrasonography applications in both cosmetic and clinical settings. 🔗 DOI: https://doi.org/10.1038/s41598-022-20969-z
Keywords:
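The accuracy and F1 figures quoted for this multi-class task can be computed from label lists alone. A minimal sketch with toy labels for two of the nine facial regions (illustrative, not study data):

```python
def accuracy(y_true, y_pred):
    """Fraction of exactly matching predictions."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def macro_f1(y_true, y_pred, labels):
    """Unweighted mean of per-class F1 scores."""
    f1s = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

# Toy example with two region labels; illustrative only.
y_true = ["forehead", "forehead", "nose", "nose"]
y_pred = ["forehead", "nose", "nose", "nose"]
print(accuracy(y_true, y_pred))                                 # 0.75
print(round(macro_f1(y_true, y_pred, ["forehead", "nose"]), 3))
```

Whether the study reports macro- or micro-averaged F1 is not stated in the description; macro-F1 is shown here as one common choice.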
Authors: Dr. Nielsen Santos Pereira
Year: 2025
Description: This 2022 study from Nagasaki University (Japan) compared two artificial intelligence approaches — the Sliding Window Method (SWM) and Mask R-CNN — for detecting cell nuclei in oral cytological images. Early detection of oral cancer depends on recognizing nuclear atypia and changes in the nucleus-to-cytoplasm (N/C) ratio. Using Papanicolaou-stained liquid-based cytology slides, the researchers analyzed:
- SWM: 1,576 cropped tiles (96×96 px) for nucleus vs non-nucleus classification.
- Mask R-CNN: 130 annotated images (Class II–III lesions) for instance segmentation of nuclei.
Images were captured at 40× magnification (1280×1024 px) using a Nikon Eclipse Ti-S microscope.
Results:
- SWM: 93.1% accuracy after 20 epochs (No. 2 CNN model).
- Mask R-CNN: detected 37 nuclei with only 1 false positive (error rate = 0.027).
- Loss decreased from 0.89 to 0.45 over 40 epochs.
- Mask R-CNN outperformed SWM by reducing false detections and accurately delineating nuclear contours.
Conclusion: The study demonstrated that Mask R-CNN is superior to SWM for recognizing blue-stained nuclei in oral cytology. This approach represents a crucial step toward AI-assisted screening for early oral cancer detection, enabling faster and more reliable cytological evaluation. 🔗 DOI: https://doi.org/10.1186/s13000-022-01245-0
Keywords:
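Once nuclei and cytoplasm are segmented, the N/C ratio mentioned above reduces to a pixel count. A minimal sketch over a toy labeled mask (the 0/1/2 encoding is an assumption for illustration, not the study's format):

```python
def nc_ratio(mask):
    """Nucleus-to-cytoplasm area ratio from a labeled segmentation mask.
    Assumed encoding: 0 = background, 1 = cytoplasm, 2 = nucleus."""
    flat = [v for row in mask for v in row]
    nucleus, cytoplasm = flat.count(2), flat.count(1)
    if cytoplasm == 0:
        raise ValueError("no cytoplasm pixels in mask")
    return nucleus / cytoplasm

# Tiny toy mask, not a real cytology image.
mask = [
    [0, 1, 1, 0],
    [1, 2, 2, 1],
    [0, 1, 1, 0],
]
print(round(nc_ratio(mask), 3))  # 2 nucleus px / 6 cytoplasm px
```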
Authors: Dr. Nielsen Santos Pereira
Year: 2025
Description: This study, published in Imaging Science in Dentistry (2020), evaluates the clinical efficacy of a fully automated bone age assessment (BAA) system based on the Tanner–Whitehouse 3 (TW3) method. Developed by Seoul National University, the system uses deep neural networks (Faster R-CNN and VGGNet) to automatically detect and classify the 13 skeletal maturity regions in hand–wrist radiographs. In a dataset of 80 children and adolescents (ages 7–15), the AI achieved excellent correlation (R² = 0.95) with radiologist evaluations, showing no significant differences (p > 0.05). This TW3-based approach demonstrates that AI can reliably and efficiently estimate bone age, supporting orthodontic planning and pediatric growth assessment. 🔬 Key Takeaway: AI models can perform bone age estimation with accuracy comparable to experienced radiologists, improving consistency and workflow efficiency in clinical settings. 📚 DOI: https://doi.org/10.5624/isd.2020.50.3.237
Keywords:
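The R² = 0.95 agreement reported here is the coefficient of determination between AI and radiologist bone ages. A minimal sketch with hypothetical ages (not data from the study):

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination between reference and predicted values."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

# Hypothetical bone ages (years): radiologist reference vs AI estimate.
radiologist = [7.0, 9.0, 11.0, 13.0, 15.0]
ai          = [7.2, 8.9, 11.1, 13.0, 14.8]
print(round(r_squared(radiologist, ai), 4))
```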
Authors: Dr. Nielsen Santos Pereira
Year: 2025
Description: This study, published in *Annals of Plastic Surgery (2021)*, introduces a machine learning model that automatically evaluates 3D facial symmetry before and after orthognathic surgery. Using transfer learning and 3D contour line features from facial scans, the AI system achieved a 21% improvement in symmetry after surgery and provided objective, reproducible results. This web-based tool enhances clinical decision-making and doctor–patient communication by visualizing postoperative symmetry outcomes.
Keywords:
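A common way to score 3D facial symmetry (one possible formulation, not necessarily the study's exact feature set) is to mirror one side's landmarks across the mid-sagittal plane and measure the mean distance to their contralateral counterparts; the reported 21% gain is then a relative reduction of that score. A sketch with hypothetical landmarks:

```python
import math

def mirror_x(pts):
    """Reflect 3-D points across the mid-sagittal plane, assumed at x = 0."""
    return [(-x, y, z) for x, y, z in pts]

def asymmetry_score(left_pts, mirrored_right_pts):
    """Mean distance between paired landmarks after mirroring one side."""
    d = [math.dist(l, r) for l, r in zip(left_pts, mirrored_right_pts)]
    return sum(d) / len(d)

def improvement_pct(pre, post):
    """Relative reduction of the asymmetry score after surgery."""
    return (pre - post) / pre * 100

# Hypothetical 3-D landmark pairs (mm); illustrative only.
left  = [(30.0, 10.0, 5.0), (25.0, -8.0, 12.0)]
right = [(-29.0, 10.5, 5.0), (-24.0, -8.0, 11.0)]
pre = asymmetry_score(left, mirror_x(right))
print(round(pre, 3))
print(round(improvement_pct(2.0, 1.58), 1))  # a 21% gain, as reported
```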
Authors: Dr. Nielsen Santos Pereira
Year: 2025
Description: This study from KU Leuven and Karolinska Institutet presents a two-route deep learning framework for detecting and differentiating radicular cysts and periapical granulomas on panoramic radiographs. Differentiating these two common apical lesions is clinically critical: cysts require surgical enucleation, while granulomas are usually treated by root canal therapy. A total of 249 panoramic images were used, including 80 cysts, 72 granulomas, 197 normal controls, and 58 other radiolucent lesions to ensure robustness. The framework combines:
- MobileNet V2 (53 layers) — dual-input classification (global + local lesion images)
- YOLO v3 (53-layer Darknet) — localization network for lesion bounding boxes
Training used 90% of the data, with augmentation and 10-fold cross-validation; the remaining 10% was held out for testing.
Keywords:
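The 10-fold cross-validation used on the training split can be sketched as a simple index partition. A minimal version in Python (the fold assignment below is generic, not the study's exact split):

```python
import random

def kfold_splits(n, k=10, seed=0):
    """Return k (train, test) index splits; each sample is tested exactly once."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)          # fixed seed for reproducibility
    folds = [idx[i::k] for i in range(k)]     # round-robin assignment to folds
    return [(sorted(set(idx) - set(f)), sorted(f)) for f in folds]

splits = kfold_splits(249, k=10)  # using the study's reported total of 249 images
train, test = splits[0]
print(len(splits), len(train), len(test))
```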
Authors: Dr. Nielsen Santos Pereira
Year: 2025
Description: Published in EClinicalMedicine (2020), this large multicenter Chinese study developed a cascaded CNN model (SSD + DenseNet121) to detect oral cavity squamous cell carcinoma (OCSCC) using 4,244 images from several hospitals. The algorithm achieved AUC = 0.983, accuracy > 91%, and AUC = 0.995 for early-stage lesions, performing at expert specialist level. A smartphone-based app prototype was built for real-time screening — a milestone toward accessible, AI-driven oral cancer diagnosis. DOI: https://doi.org/10.1016/j.eclinm.2020.100558
Keywords:
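The AUC values quoted above have a simple rank-based interpretation: the probability that a randomly chosen lesion image outscores a randomly chosen normal image. A minimal sketch with hypothetical model scores:

```python
def auc(pos_scores, neg_scores):
    """ROC AUC as the probability a random positive outscores a random
    negative, counting ties as one half (the rank-sum formulation)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            wins += 1.0 if p > n else 0.5 if p == n else 0.0
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical classifier scores for lesion vs normal images.
lesion = [0.9, 0.8, 0.4]
normal = [0.5, 0.3, 0.2]
print(round(auc(lesion, normal), 3))
```

The O(n·m) pairwise loop is fine for illustration; production code would sort once and use ranks.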
Authors: Dr. Nielsen Santos Pereira
Year: 2025
Description: This nationwide multi-centre study developed and validated a cascade CNN model for automatic identification of 20 cephalometric landmarks on lateral cephalograms. Using RetinaNet for region detection and U-Net for precise landmark localization, the system achieved a mean error of 1.36 ± 0.98 mm, comparable to experienced orthodontists. Performance was evaluated across sensors, hospitals and machine vendors, demonstrating high generalizability.
Keywords:
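In a cascade like RetinaNet → U-Net, the second stage predicts landmark positions inside a cropped region, which must be mapped back to full-image coordinates before errors like the reported 1.36 ± 0.98 mm can be computed. A minimal sketch with hypothetical coordinates and errors:

```python
def to_global(crop_origin, local_pt):
    """Map a landmark predicted inside a cropped ROI back to image coordinates."""
    return (crop_origin[0] + local_pt[0], crop_origin[1] + local_pt[1])

def mean_sd(errors):
    """Mean and population standard deviation of per-landmark radial errors."""
    m = sum(errors) / len(errors)
    var = sum((e - m) ** 2 for e in errors) / len(errors)
    return m, var ** 0.5

print(to_global((120, 200), (14.5, 9.0)))  # -> (134.5, 209.0)
m, s = mean_sd([1.1, 0.9, 1.8, 1.6])       # hypothetical errors in mm
print(round(m, 2), round(s, 2))
```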
Authors: Dr. Nielsen Santos Pereira
Year: 2025
Description: Artificial intelligence is increasingly being integrated into clinical dentistry, and one of its most promising applications is implant planning using three-dimensional cone-beam computed tomography (CBCT). This study evaluated the performance of a deep learning-based AI system (Diagnocat, Inc., San Francisco, USA) compared with manual analysis by human experts using InvivoDental 6.0 (Anatomage, San Jose, USA). A total of 75 CBCT scans and 508 implant regions were examined to measure bone height and thickness and to detect canals, sinuses/fossae, and missing tooth regions. The jaws were divided into anterior, premolar, and molar regions for both maxilla and mandible, and comparisons between AI and manual assessments were performed using Bland–Altman analysis and the Wilcoxon signed rank test. Results showed that AI measurements of bone height closely matched manual assessments in several regions, while statistically significant differences were found in bone thickness across both jaws. Detection accuracy reached 72.2% for canals, 66.4% for sinuses/fossae, and 95.3% for missing teeth. These findings demonstrate that AI-based systems can achieve strong performance in implant planning and support clinicians by improving efficiency, reproducibility, and diagnostic confidence in dental implantology.
Keywords:
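The Bland–Altman analysis used to compare AI and manual measurements reduces to the mean difference (bias) and its 95% limits of agreement. A minimal sketch with hypothetical bone-height measurements (not data from the study):

```python
def bland_altman(a, b):
    """Bias (mean difference) and 95% limits of agreement between two raters."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = sum(diffs) / len(diffs)
    # Sample standard deviation of the differences (n - 1 denominator).
    sd = (sum((d - bias) ** 2 for d in diffs) / (len(diffs) - 1)) ** 0.5
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical bone-height measurements (mm): AI vs manual; illustrative only.
ai     = [10.0, 12.0, 11.0, 13.0]
manual = [10.2, 11.8, 11.1, 12.9]
bias, low, high = bland_altman(ai, manual)
print(round(bias, 3), round(low, 3), round(high, 3))
```

Agreement holds when most differences fall inside [low, high] and the bias is clinically negligible.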
Authors: Dr. Nielsen Santos Pereira
Year: 2025
Description: This study introduced a deep learning method for automatic detection and numbering of teeth on dental periapical radiographs using a Faster R-CNN model with TensorFlow. The dataset included 1,250 periapical images annotated following the FDI numbering system (ISO 3950). The network achieved precision and recall above 90% and a mean IoU of 0.91 when compared to expert annotations. Post-processing steps—including overlap filtering, rule-based correction, and prediction of missing teeth—further improved results, achieving performance comparable to that of a junior dentist. The model incorporated prior dental domain knowledge, significantly enhancing tooth numbering accuracy.
Keywords:
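The IoU metric and the overlap-filtering post-processing step mentioned above can both be sketched in a few lines. A greedy suppression pass (one common formulation; the study's exact rules may differ) keeps the highest-scoring box among mutually overlapping tooth detections:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def filter_overlaps(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression over detected tooth boxes."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < thresh for j in keep):
            keep.append(i)
    return keep

# Two overlapping detections of the same tooth plus one distinct tooth.
boxes  = [(0, 0, 10, 10), (1, 1, 10, 10), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
print(filter_overlaps(boxes, scores))  # -> [0, 2]
```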
Authors: Dr. Nielsen Santos Pereira
Year: 2025
Description: This study evaluated the feasibility of YOLOv3 for automatic caries detection and ICCMS™-based radiographic classification on bitewing radiographs under two IoU thresholds (0.50 and 0.75). A total of 994 annotated images were used for training, 256 for validation, and 175 for testing. Performance was reported across binary (non-carious vs carious), 4-class (0/RA/RB/RC), and 7-class (0, RA1–RA3, RB4, RC5–RC6) settings. YOLOv3 achieved acceptable results at both IoU50 and IoU75; metrics decreased for the 7-class task. The model localized and classified advanced caries (RC6) well, but struggled with initial enamel lesions (RA1).
Keywords:
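The binary, 4-class, and 7-class settings described above are nested: the finer labels collapse onto the coarser ones. A minimal sketch of that mapping (the dictionary below is inferred from the class lists in the description):

```python
# 7-class labels (0, RA1-RA3, RB4, RC5-RC6) collapsed to the 4-class scheme.
COLLAPSE_4 = {"0": "0", "RA1": "RA", "RA2": "RA", "RA3": "RA",
              "RB4": "RB", "RC5": "RC", "RC6": "RC"}

def to_4class(label7):
    """Map a 7-class ICCMS-style label to its 4-class group (0/RA/RB/RC)."""
    return COLLAPSE_4[label7]

def to_binary(label7):
    """Binary setting: any radiographic caries score vs a sound surface."""
    return "carious" if label7 != "0" else "non-carious"

print(to_4class("RA2"), to_binary("RA2"))
print(to_4class("0"), to_binary("0"))
```

Evaluating the same predictions under all three schemes explains why metrics drop for the 7-class task: errors between adjacent fine labels (e.g. RA1 vs RA2) are forgiven after collapsing.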