Fundus ultrasound image classification is a critical issue in the medical field. Vitreous opacity (VO) and posterior vitreous detachment (PVD) are two common eye diseases. At present, the diagnosis of these two diseases relies mainly on manual identification by doctors, which is time-consuming and labor-intensive, so it is very meaningful to use computer technology to assist doctors in diagnosis. This paper is the first to apply a deep learning model to VO and PVD classification tasks. Convolutional neural networks (CNNs) are widely used in image classification, but a traditional CNN requires a large amount of training data to prevent overfitting, and it struggles to learn the subtle differences between two kinds of images. In this paper, we propose an end-to-end siamese convolutional neural network with multi-attention (SVK_MA) for the automatic classification of VO and PVD fundus ultrasound images. SVK_MA is a siamese-structure network in which each branch is mainly composed of a pretrained VGG16 embedded with multiple attention models. Each image is first normalized, then passed to SVK_MA for feature extraction, and the classification result is finally obtained. Our approach has been validated on a dataset provided by the cooperative hospital. The experimental results show that our approach achieves an accuracy of 0.940, precision of 0.941, recall of 0.940, and F1 of 0.939, which are increases of 2.5%, 1.9%, 3.4%, and 2.5%, respectively, over the second-highest model.
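The abstract gives only a high-level description of SVK_MA. As a non-authoritative sketch of the general idea, the following PyTorch code builds a siamese pair of weight-shared branches from pretrained VGG16 features, with SE-style channel attention inserted after each convolutional stage. The specific attention design, the pair-concatenation head, the input size, and all names (`ChannelAttention`, `SiameseVGGAttention`) are assumptions made for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
from torchvision import models


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention.

    Assumption: the paper only says "multiple attention models";
    the exact attention mechanism is not specified in the abstract.
    """

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                      # squeeze: global average pool
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                                 # per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.gate(x)                           # reweight channels


class SiameseVGGAttention(nn.Module):
    """Siamese network whose two branches share one VGG16-based backbone."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        # Insert channel attention after each max-pool stage (an assumption
        # about where the paper embeds its attention modules).
        layers, block = [], []
        for layer in vgg.features:
            block.append(layer)
            if isinstance(layer, nn.MaxPool2d):
                channels = next(l.out_channels for l in reversed(block)
                                if isinstance(l, nn.Conv2d))
                block.append(ChannelAttention(channels))
                layers.extend(block)
                block = []
        self.backbone = nn.Sequential(*layers)
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Hypothetical head: classify from the concatenated pair embedding.
        self.head = nn.Linear(512 * 2, num_classes)

    def embed(self, x: torch.Tensor) -> torch.Tensor:
        return self.pool(self.backbone(x)).flatten(1)     # (B, 512)

    def forward(self, x1: torch.Tensor, x2: torch.Tensor) -> torch.Tensor:
        # Both inputs pass through the same weights (siamese structure).
        z1, z2 = self.embed(x1), self.embed(x2)
        return self.head(torch.cat([z1, z2], dim=1))


if __name__ == "__main__":
    net = SiameseVGGAttention(num_classes=2)
    # Stand-ins for two normalized fundus ultrasound images
    # (224x224 RGB is an assumption, matching VGG16's usual input).
    a = torch.randn(1, 3, 224, 224)
    b = torch.randn(1, 3, 224, 224)
    print(net(a, b).shape)  # torch.Size([1, 2])
```

Sharing one backbone between the two branches is what makes the structure siamese: both images are embedded by identical weights, which is the standard way such networks learn inter-image differences from limited training data.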
| Download full-text PDF | Source |
|---|---|
| http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10324257 | PMC |
| http://dx.doi.org/10.1186/s12880-023-01047-w | DOI Listing |