Objective: As an extension of optical coherence tomography (OCT), optical coherence tomographic angiography (OCTA) provides information on blood flow status at the microvascular level and is sensitive to changes in the fundus vessels. However, because of the distinct imaging mechanism of OCTA, existing models, which are primarily designed for analyzing fundus images, do not work well on OCTA images. Effectively extracting and analyzing the information in OCTA images remains challenging. To this end, a deep learning framework that fuses multilevel information in OCTA images is proposed in this study. The effectiveness of the proposed model was demonstrated on the task of diabetic retinopathy (DR) classification.
Method: First, a U-Net-based segmentation model was proposed to label the boundaries of large retinal vessels and the foveal avascular zone (FAZ) in OCTA images. Then, we designed an isolated concatenated block (ICB) structure to extract and fuse information from the original OCTA images and segmentation results at different fusion levels.
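The abstract does not give implementation details for the fusion step. As a rough illustration of the idea of combining the original OCTA image with the segmentation results before classification, the following NumPy sketch stacks a grayscale OCTA image with hypothetical large-vessel and FAZ segmentation masks into one multi-channel input; all function names, shapes, and mask values here are assumptions, not taken from the paper:

```python
import numpy as np

def merge_octa_with_segmentation(octa, vessel_mask, faz_mask):
    """Stack a grayscale OCTA image (H, W) with binary segmentation
    masks (H, W) into a single (H, W, 3) array that a classifier
    could take as a merged input. Masks are assumed to be 0/1."""
    octa = octa.astype(np.float32) / 255.0  # normalize intensities to [0, 1]
    channels = [octa,
                vessel_mask.astype(np.float32),
                faz_mask.astype(np.float32)]
    return np.stack(channels, axis=-1)      # shape (H, W, 3)

# Toy example: a 4x4 image with trivial, hand-made masks.
img = np.full((4, 4), 128, dtype=np.uint8)
vessels = np.zeros((4, 4)); vessels[1, :] = 1   # a horizontal "vessel"
faz = np.zeros((4, 4)); faz[2:, 2:] = 1         # a corner "FAZ" region
merged = merge_octa_with_segmentation(img, vessels, faz)
print(merged.shape)  # (4, 4, 3)
```

Feeding the masks as extra channels is one common way to let a convolutional classifier see both raw intensities and anatomical annotations at once; the actual ICB structure in the paper fuses the two sources at several levels rather than only at the input.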
Results: The experiments were conducted on 301 OCTA images. Of these, 244 were labeled by ophthalmologists as normal and 57 as DR images. An accuracy of 93.1% and a mean intersection over union (mIOU) of 77.1% were achieved with the proposed large-vessel and FAZ segmentation model. In the ablation experiment with 6-fold validation, the proposed framework combining the isolated and concatenated convolution processes significantly improved DR diagnosis accuracy. Moreover, inputting the merged images of the original OCTA images and segmentation results further improved model performance. Finally, a DR diagnosis accuracy of 88.1% (95% CI ± 3.6%) and an area under the curve (AUC) of 0.92 were achieved with our proposed classification model, which significantly outperforms state-of-the-art classification models. As a comparison, an accuracy of 83.7% (95% CI ± 1.5%) and an AUC of 0.76 were obtained with EfficientNet. The visualization results show that the FAZ and the vascular region close to the FAZ provide more information to the model than the farther surrounding area. Furthermore, this study demonstrates that a deep learning model designed with clinical insight can not only effectively assist in diagnosis but also help locate new indicators for certain illnesses.
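The reported accuracies are given with 95% confidence intervals over the 6-fold validation. The abstract does not state how the intervals were computed; a standard choice is a t-interval over the per-fold accuracies, sketched below with hypothetical fold scores (the accuracy values in the list are illustrative, not the paper's):

```python
import statistics

def fold_ci(accuracies, t_crit=2.571):
    """Mean accuracy and 95% CI half-width across k folds, using the
    two-sided t critical value for k-1 degrees of freedom
    (2.571 for k = 6 folds). This is one common convention; the
    paper may have used a different one."""
    mean = statistics.mean(accuracies)
    half = t_crit * statistics.stdev(accuracies) / len(accuracies) ** 0.5
    return mean, half

accs = [0.86, 0.90, 0.88, 0.87, 0.91, 0.87]  # hypothetical fold accuracies
m, h = fold_ci(accs)
print(f"{m:.3f} ± {h:.3f}")
```

Reporting the half-width alongside the mean (e.g. "88.1% ± 3.6%") makes the fold-to-fold variability of a small 6-fold experiment explicit.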
Download full-text PDF:
- PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9371870
- DOI: http://dx.doi.org/10.1155/2022/4316507