Deep learning-assisted classification of calcaneofibular ligament injuries in the ankle joint.

Background: The classification of calcaneofibular ligament (CFL) injuries on magnetic resonance imaging (MRI) is time-consuming and subject to substantial interreader variability. This study explores the feasibility of classifying CFL injuries with deep learning methods by comparing them against the classifications of musculoskeletal (MSK) radiologists, and further examines image cropping, screening, and calibration methods.

Methods: The imaging data of 1,074 patients who underwent ankle arthroscopy and MRI examinations in our hospital were retrospectively analyzed. According to the arthroscopic findings, patients were divided into normal (class 0, n=475); degeneration, strain, and partial tear (class 1, n=217); and complete tear (class 2, n=382) groups. All patients were divided into training, validation, and test sets at a ratio of 8:1:1. After preprocessing, the images were cropped using a Mask region-based convolutional neural network (Mask R-CNN), an attention algorithm was then applied for image screening and calibration, and LeNet-5 was used to classify the CFL injury. The diagnostic performance of the axial, coronal, and combined models was compared, and the best method was selected for outgroup validation. The diagnostic results of the models on the intragroup and outgroup test sets were compared with those of 4 MSK radiologists of different seniority.
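
The abstract describes a two-stage pipeline: detection-based cropping, an attention-based screening and calibration step, and LeNet-5 classification. The sketch below is a minimal PyTorch illustration of that flow, not the authors' code: the pretrained torchvision Mask R-CNN stands in for their detector (which was trained on their own ankle MRI data), and screen_slices is a hypothetical placeholder for the unpublished attention step.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    import torchvision

    # Stage 1: cropping. A generic pretrained Mask R-CNN stands in for the
    # paper's detector, which was trained on the authors' own ankle MRI data.
    detector = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    detector.eval()

    def crop_highest_box(image):
        """Crop the top-scoring detected box from a 3xHxW float tensor in [0, 1]."""
        with torch.no_grad():
            out = detector([image])[0]          # detections are sorted by score
        if len(out["boxes"]) == 0:
            return image                        # fall back to the uncropped slice
        x0, y0, x1, y1 = out["boxes"][0].round().int().tolist()
        return image[:, y0:y1, x0:x1]

    def to_lenet_input(crop):
        """Resize an RGB crop to the 1x32x32 grayscale input LeNet-5 expects."""
        gray = crop.mean(dim=0, keepdim=True).unsqueeze(0)   # 1x1xHxW
        return F.interpolate(gray, size=(32, 32), mode="bilinear",
                             align_corners=False).squeeze(0)

    # Stage 2: 3-class injury classification (0 = normal; 1 = degeneration,
    # strain, or partial tear; 2 = complete tear) with a classic LeNet-5.
    class LeNet5(nn.Module):
        def __init__(self, num_classes=3):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 6, kernel_size=5), nn.Tanh(), nn.AvgPool2d(2),
                nn.Conv2d(6, 16, kernel_size=5), nn.Tanh(), nn.AvgPool2d(2),
            )
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Linear(16 * 5 * 5, 120), nn.Tanh(),
                nn.Linear(120, 84), nn.Tanh(),
                nn.Linear(84, num_classes),
            )

        def forward(self, x):                   # x: Nx1x32x32
            return self.classifier(self.features(x))

    def screen_slices(crops, model, keep=3):
        """Hypothetical stand-in for the attention-based screening step:
        keep the `keep` crops with the highest mean feature activation."""
        scores = [model.features(c.unsqueeze(0)).abs().mean().item() for c in crops]
        order = sorted(range(len(crops)), key=lambda i: scores[i], reverse=True)
        return [crops[i] for i in order[:keep]]

In the study itself, the attention step both screened slices and calibrated the crops; it is reduced here to a score-and-keep heuristic purely to make the data flow concrete.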

Results: The mean average precision (mAP) of the Mask R-CNN with the attention algorithm for the left and right image cropping of the axial and coronal sequences was 0.90-0.96. The accuracies of LeNet-5 in classifying classes 0-2 were 0.92, 0.93, and 0.92, respectively, for the axial sequences and 0.89, 0.92, and 0.90, respectively, for the coronal sequences. After the sequences were combined, the classification accuracies for classes 0-2 were 0.95, 0.97, and 0.96, respectively. The mean accuracies of the 4 MSK radiologists in classifying classes 0-2 in the intragroup test set were 0.94, 0.91, 0.86, and 0.85, all of which differed significantly from those of the model. Their mean accuracies on the outgroup test set were 0.92, 0.91, 0.87, and 0.85, with the 2 senior MSK radiologists showing diagnostic performance similar to that of the model and the junior MSK radiologists showing worse accuracy.
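
The per-class figures above can be read as one-vs-rest accuracies; the abstract does not define them precisely, so that reading is an assumption. A minimal NumPy sketch of the computation, with illustrative labels rather than the study's data:

    import numpy as np

    def per_class_accuracy(y_true, y_pred, num_classes=3):
        """One-vs-rest accuracy for each class k: the fraction of cases where
        the 'class k vs. not class k' decision is correct."""
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        return [float(np.mean((y_pred == k) == (y_true == k)))
                for k in range(num_classes)]

    # Toy labels, not the study's data:
    print(per_class_accuracy([0, 1, 2, 2, 0], [0, 1, 2, 1, 0]))  # [1.0, 0.8, 0.8]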

Conclusions: Deep learning can classify CFL injuries at a level similar to that of MSK radiologists. Adding an attention algorithm after Mask R-CNN cropping helps ensure that CFL images are cropped accurately.

Source
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9816759 (PMC)
http://dx.doi.org/10.21037/qims-22-470 (DOI Listing)

Publication Analysis

Top Keywords

msk radiologists (28)
classes 0-2 (16)
cfl injuries (12)
attention algorithm (12)
classification calcaneofibular (8)
calcaneofibular ligament (8)
deep learning (8)
image cropping (8)
screening calibration (8)
patients divided (8)
