(1) Background: The present study aims to evaluate and compare the performance of different convolutional neural networks (CNNs) for classifying sagittal skeletal patterns. (2) Methods: A total of 2432 lateral cephalometric radiographs were collected and labeled as Class I, Class II, or Class III patterns according to their ANB angles and Wits appraisal values. The radiographs were randomly divided into training, validation, and test sets in a 70%:15%:15% ratio. Four CNNs, namely VGG16, GoogLeNet, ResNet152, and DenseNet161, were trained, and their performance was compared. (3) Results: The accuracy of the four CNNs was ranked as follows: DenseNet161 > ResNet152 > VGG16 > GoogLeNet. DenseNet161 had the highest accuracy, while GoogLeNet had the smallest model size and the fastest inference speed. The CNNs identified Class III patterns best, followed by Class II and Class I. Most of the samples misclassified by the CNNs were boundary cases. The activation maps confirmed that the CNNs were not overfitting and indicated that the networks recognized the compensatory dental features in the anterior region of the jaws and lips. (4) Conclusions: CNNs can quickly and effectively assist orthodontists in the diagnosis of sagittal skeletal classification patterns.
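As a rough illustration of the experimental setup, the four-way comparison can be sketched in PyTorch: each ImageNet-pretrained backbone has its classifier head replaced with a three-class layer before fine-tuning. This is a minimal sketch, not the authors' released code; the weights="DEFAULT" argument assumes torchvision >= 0.13, and all training hyperparameters are omitted.

```python
# Illustrative sketch: fine-tuning four ImageNet-pretrained backbones for
# 3-class sagittal skeletal classification. Not the authors' code.
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3  # Class I, II, III sagittal skeletal patterns

def build(name: str) -> nn.Module:
    """Load an ImageNet-pretrained backbone and replace its classifier head."""
    if name == "vgg16":
        m = models.vgg16(weights="DEFAULT")
        m.classifier[6] = nn.Linear(m.classifier[6].in_features, NUM_CLASSES)
    elif name == "googlenet":
        m = models.googlenet(weights="DEFAULT")  # aux classifiers dropped by default
        m.fc = nn.Linear(m.fc.in_features, NUM_CLASSES)
    elif name == "resnet152":
        m = models.resnet152(weights="DEFAULT")
        m.fc = nn.Linear(m.fc.in_features, NUM_CLASSES)
    elif name == "densenet161":
        m = models.densenet161(weights="DEFAULT")
        m.classifier = nn.Linear(m.classifier.in_features, NUM_CLASSES)
    else:
        raise ValueError(name)
    return m

# Example: compare model sizes (GoogLeNet is by far the smallest).
for name in ("vgg16", "googlenet", "resnet152", "densenet161"):
    n_params = sum(p.numel() for p in build(name).parameters())
    print(f"{name}: {n_params / 1e6:.1f} M parameters")
```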

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9221941
DOI: http://dx.doi.org/10.3390/diagnostics12061359

Similar Publications

In Vivo Confocal Microscopy for Automated Detection of Meibomian Gland Dysfunction: A Study Based on Deep Convolutional Neural Networks.

J Imaging Inform Med

January 2025

Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, National Clinical Research Center for Eye Disease, Shanghai, 200080, China.

The objectives of this study are to construct a deep convolutional neural network (DCNN) model to diagnose and classify meibomian gland dysfunction (MGD) based on in vivo confocal microscopy (IVCM) images, and to evaluate the performance of the DCNN model and its auxiliary value for clinical diagnosis and treatment. We extracted 6643 IVCM images from three hospitals' IVCM databases as the training set for the DCNN model and 1661 IVCM images from two other hospitals' IVCM databases as the test set to examine the model's performance. The DCNN model was constructed using DenseNet-169.
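A minimal sketch of the backbone described above, assuming an ImageNet-pretrained DenseNet-169 from torchvision; the paper's exact head, class count, and preprocessing are not given in this abstract, so the values below are illustrative.

```python
# Illustrative sketch only: a DenseNet-169 backbone adapted for MGD
# classification. The output class count is a hypothetical value.
import torch
import torch.nn as nn
from torchvision import models

model = models.densenet169(weights="DEFAULT")  # ImageNet-pretrained
model.classifier = nn.Linear(model.classifier.in_features, 4)  # hypothetical class count

dummy = torch.randn(1, 3, 224, 224)  # one IVCM image, resized to 224x224
with torch.no_grad():
    logits = model(dummy)
print(logits.shape)  # torch.Size([1, 4])
```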

Systematic Review of Hybrid Vision Transformer Architectures for Radiological Image Analysis.

J Imaging Inform Med

January 2025

School of Computing, Informatics, and Decision Systems Engineering, Arizona State University, Tempe, AZ, USA.

Vision transformers (ViTs) and convolutional neural networks (CNNs) each possess distinct strengths in medical imaging: ViTs excel at capturing long-range dependencies through self-attention, while CNNs are adept at extracting local features via spatial convolution filters. However, ViTs may struggle to capture the detailed local spatial information critical for tasks like anomaly detection in medical imaging, while shallow CNNs often fail to effectively abstract global context. This study aims to explore and evaluate hybrid architectures that integrate ViTs and CNNs to leverage their complementary strengths for enhanced performance in medical vision tasks such as segmentation, classification, reconstruction, and prediction.
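A generic sketch of the hybrid pattern such reviews survey: a shallow CNN stem extracts local features, and a transformer encoder models long-range dependencies over the resulting tokens. This is not any specific architecture from the review; layer sizes are illustrative, and positional embeddings are omitted for brevity.

```python
# Generic hybrid CNN + transformer sketch, not a specific published model.
import torch
import torch.nn as nn

class HybridCNNViT(nn.Module):
    def __init__(self, num_classes: int = 2, dim: int = 256):
        super().__init__()
        # CNN stem: local spatial features, downsampled 8x
        self.stem = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Transformer encoder: global context over CNN feature tokens
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):
        f = self.stem(x)                       # (B, dim, H/8, W/8)
        tokens = f.flatten(2).transpose(1, 2)  # (B, N, dim) token sequence
        z = self.encoder(tokens).mean(dim=1)   # average over tokens
        return self.head(z)

logits = HybridCNNViT()(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 2])
```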

Multi-class Classification of Retinal Eye Diseases from Ophthalmoscopy Images Using Transfer Learning-Based Vision Transformers.

J Imaging Inform Med

January 2025

College of Engineering, Department of Computer Engineering, Koç University, Rumelifeneri Yolu, 34450, Sarıyer, Istanbul, Turkey.

This study explores a transfer learning approach with vision transformers (ViTs) and convolutional neural networks (CNNs) for classifying retinal diseases, specifically diabetic retinopathy, glaucoma, and cataracts, from ophthalmoscopy images. Using a balanced subset of 4217 images and ophthalmology-specific pretrained ViT backbones, this method demonstrates significant improvements in classification accuracy, offering potential for broader applications in medical imaging. Glaucoma, diabetic retinopathy, and cataracts are common eye diseases that can cause vision loss if not treated.
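The transfer-learning recipe can be sketched as below, with a generic ImageNet-pretrained ViT from torchvision standing in for the ophthalmology-specific backbone the paper uses (which this abstract does not name); the class count is hypothetical.

```python
# Sketch of ViT transfer learning; the ophthalmology-specific pretrained
# weights are replaced here by generic ImageNet weights for illustration.
import torch.nn as nn
from torchvision import models

model = models.vit_b_16(weights="DEFAULT")  # assumes torchvision >= 0.13
model.heads.head = nn.Linear(model.heads.head.in_features, 4)  # hypothetical: 3 diseases + normal

# Freeze the backbone and train only the new head (a common first stage).
for p in model.parameters():
    p.requires_grad = False
for p in model.heads.head.parameters():
    p.requires_grad = True
```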

Multi scale multi attention network for blood vessel segmentation in fundus images.

Sci Rep

January 2025

Department of Data Science and Artificial Intelligence, Sunway University, 47500, Petaling Jaya, Selangor Darul Ehsan, Malaysia.

Precise segmentation of the retinal vasculature is crucial for the early detection, diagnosis, and treatment of vision-threatening ailments. However, this task is challenging due to limited contextual information, variations in vessel thickness, the complexity of vessel structures, and the potential for confusion with lesions. In this paper, we introduce a novel approach, the MSMA Net model, which overcomes these challenges by replacing traditional convolution blocks with an improved multi-scale squeeze-and-excitation block (MSSE block) and replacing skip connections with bottleneck residual paths (B-Res paths) equipped with spatial attention blocks (SABs).
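The MSSE and SAB blocks are not specified in this abstract; the sketch below shows a standard squeeze-and-excitation (channel attention) block and a CBAM-style spatial attention block as generic stand-ins for the kinds of components the model name refers to.

```python
# Generic channel and spatial attention blocks, not the paper's exact design.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Channel attention: reweight channels by their global importance."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))  # squeeze: global average pool
        return x * w[:, :, None, None]   # excite: per-channel scaling

class SpatialAttention(nn.Module):
    """Spatial attention: reweight locations using pooled channel statistics."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        s = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.conv(s))

x = torch.randn(1, 64, 32, 32)
print(SpatialAttention()(SEBlock(64)(x)).shape)  # torch.Size([1, 64, 32, 32])
```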

Bone is a common site for the metastasis of malignant tumors, and Single Photon Emission Computed Tomography (SPECT) is widely used to detect these metastases. Accurate delineation of metastatic bone lesions in SPECT images is essential for developing treatment plans. However, current clinical practices rely on manual delineation by physicians, which is prone to variability and subjective interpretation.
