Background: The lateral cephalometric radiograph (LCR) is crucial to the diagnosis and treatment planning of maxillofacial diseases, but inappropriate head position, which reduces the accuracy of cephalometric measurements, can be challenging for clinicians to detect. This non-interventional retrospective study aimed to develop two deep learning (DL) systems to detect head position on LCRs efficiently, accurately, and instantly.
Methods: LCRs from 13 centers were reviewed; a total of 3000 radiographs were collected and divided into a training set of 2400 cases (80.0 %) and a validation set of 600 cases (20.0 %). Another 300 cases were selected independently as the test set. All images were evaluated and landmarked by two board-certified orthodontists as references. The head position on each LCR was classified by the angle between the Frankfort Horizontal (FH) plane and the true horizontal (HOR) plane; a value within −3° to 3° was considered normal. A YOLOv3 model based on the traditional fixed-point method and a modified ResNet50 model featuring a non-linear mapping residual network were constructed and evaluated. Heatmaps were generated to visualize their performance.
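The labeling rule above (FH plane within ±3° of true horizontal is "normal") can be sketched from two landmark coordinates. This is a minimal illustration, not the authors' pipeline: it assumes the FH plane is approximated in 2D by the porion and orbitale landmarks in image coordinates (y growing downward), and that the image x-axis is the true horizontal.

```python
import math

def fh_angle(porion, orbitale):
    """Angle in degrees between the line through the porion and orbitale
    landmarks (a 2D proxy for the Frankfort Horizontal plane) and the
    image x-axis (assumed true horizontal). Points are (x, y) pixels."""
    dx = orbitale[0] - porion[0]
    dy = orbitale[1] - porion[1]
    return math.degrees(math.atan2(dy, dx))

def head_position(angle_deg, tolerance=3.0):
    """Classify head position: normal if the FH angle is within ±3°."""
    return "normal" if abs(angle_deg) <= tolerance else "abnormal"

# Hypothetical landmark coordinates for illustration only.
print(head_position(fh_angle((100, 200), (300, 210))))  # ~2.9° tilt
print(head_position(fh_angle((100, 200), (300, 230))))  # ~8.5° tilt
```

The ±3° tolerance is the study's own threshold; the landmark coordinates and helper names are hypothetical.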
Results: The modified ResNet50 model achieved a classification accuracy of 96.0 %, higher than the 93.5 % of the YOLOv3 model. The sensitivity (recall) and specificity of the modified ResNet50 model were 0.959 and 0.969, versus 0.846 and 0.916 for the YOLOv3 model. The area under the curve (AUC) values of the modified ResNet50 and YOLOv3 models were 0.985 ± 0.04 and 0.942 ± 0.042, respectively. Saliency maps showed that the modified ResNet50 model attended to the alignment of the cervical vertebrae, not just the periorbital and perinasal areas as the YOLOv3 model did.
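The reported sensitivity (recall) and specificity follow the standard confusion-matrix definitions for a binary classifier; a minimal sketch (the counts below are illustrative, not the study's data):

```python
def binary_metrics(tp, fp, tn, fn):
    """Sensitivity (= recall), specificity, and accuracy from the
    four confusion-matrix counts of a binary classifier."""
    sensitivity = tp / (tp + fn)          # true-positive rate
    specificity = tn / (tn + fp)          # true-negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Hypothetical counts on a 200-case test set, for illustration only.
sens, spec, acc = binary_metrics(tp=90, fp=5, tn=95, fn=10)
print(sens, spec, acc)  # 0.9 0.95 0.925
```

AUC, by contrast, is threshold-independent: it summarizes the ROC curve traced out as the decision threshold on the model's score varies.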
Conclusions: The modified ResNet50 model outperformed the YOLOv3 model in classifying head position on LCRs and showed promising potential in facilitating making accurate diagnoses and optimal treatment plans.
DOI: http://dx.doi.org/10.1016/j.aanat.2023.152114
Front Plant Sci
December 2024
Institute of Technology, Anhui Agricultural University, Hefei, China.
Introduction: The rapid urbanization of rural regions, along with an aging population, has resulted in a substantial labor shortage for agricultural production, necessitating the urgent development of highly intelligent and accurate agricultural equipment technologies.
Methods: This research introduces YOLOv8-PSS, an enhanced lightweight obstacle detection model, to increase the effectiveness and safety of unmanned agricultural robots in complex field environments. The YOLOv8-based model incorporates a depth camera to precisely identify and locate obstacles in the path of autonomous agricultural equipment.
Curr Med Imaging
November 2024
Digital Healthcare Research Center, Institute of Information Technology and Convergence, Pukyong National University, Busan 48513, Republic of Korea.
Introduction: This research assesses HRNet and ResNet architectures for their precision in localizing hand acupoints on 2D images, which is integral to automated acupuncture therapy.
Objectives: The primary objective was to advance the accuracy of acupoint detection in traditional Korean medicine through the application of these advanced deep-learning models, aiming to improve treatment efficacy.
Background: Acupoint localization in traditional Korean medicine is crucial for effective treatment, and the study aims to enhance this process using advanced deep-learning models.
Sci Rep
November 2024
Department of Computer Science and Engineering, Bangabandhu Sheikh Mujibur Rahman Science and Technology University, Pirojpur-8500, Bangladesh.
Animals (Basel)
October 2024
College of Artificial Intelligence, Tianjin University of Science and Technology, Tianjin 300453, China.
Plants (Basel)
September 2024
China Agricultural University, Beijing 100083, China.