Autonomous Ultrasound Image Quality Assessment (US-IQA) is a promising tool to aid interpretation by practicing sonographers and to enable the future robotization of ultrasound procedures. However, autonomous US-IQA faces several challenges. Ultrasound images contain many spurious artifacts, such as noise due to handheld probe positioning, errors in the selection of probe parameters, and patient respiration during the procedure. Further, these images vary greatly in appearance with respect to individual patient physiology. We propose a deep Convolutional Neural Network (CNN), USQNet, which utilizes a Multi-scale and Local-to-Global Second-order Pooling (MS-L2GSoP) classifier to conduct a sonographer-like assessment of image quality. The classifier first extracts features at multiple scales to encode inter-patient anatomical variations, similar to a sonographer's understanding of anatomy. It then applies second-order pooling in the intermediate layers (local) and at the end of the network (global) to exploit the second-order statistical dependency of multi-scale structural and multi-region textural features. The L2GSoP captures the higher-order relationships between different spatial locations and provides the seed for correlating local patches, much as a sonographer prioritizes regions across the image. We experimentally validated USQNet on a new dataset of human urinary bladder ultrasound images. The validation compared the network first against subjective assessments annotated by experienced radiologists, and then against state-of-the-art CNN networks for US-IQA and against its ablated counterparts. The results demonstrate that USQNet achieves a remarkable accuracy of 92.4% and outperforms the SOTA models by 3-14% while requiring comparable computation time.
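The abstract's central mechanism, second-order pooling, summarizes a feature map by the covariance of its channel descriptors across spatial locations rather than by their mean or maximum. As a rough intuition (this sketch is not the authors' USQNet implementation, and the array shapes and function name are illustrative assumptions), a minimal covariance-pooling step over a flattened feature map might look like:

```python
import numpy as np

def second_order_pooling(features):
    """Covariance (second-order) pooling over spatial locations.

    features: (N, C) array of C-dimensional descriptors, one per
    spatial location (N = H * W for a flattened feature map).
    Returns a (C, C) covariance matrix encoding pairwise channel
    dependencies, in contrast to first-order (average/max) pooling.
    """
    mu = features.mean(axis=0, keepdims=True)   # per-channel mean
    centered = features - mu                    # remove first-order statistics
    n = features.shape[0]
    return centered.T @ centered / (n - 1)      # (C, C) second-order statistics

# Toy example: an 8x8 spatial grid with 16 channels, flattened to (64, 16).
rng = np.random.default_rng(0)
feat = rng.standard_normal((64, 16))
cov = second_order_pooling(feat)
print(cov.shape)  # (16, 16)
```

The resulting matrix is symmetric, so each entry relates a pair of channels; in the paper's local-to-global scheme such pooling is applied both at intermediate layers and at the network output.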
DOI: http://dx.doi.org/10.1109/TUFFC.2024.3386919