Task-based assessment for neural networks: evaluating undersampled MRI reconstructions based on human observer signal detection.

AI Article Synopsis

  • Recent research investigates using neural networks for reconstructing undersampled MRI images, emphasizing the need for task-based image quality approaches due to complex artifacts.
  • The study compares traditional global metrics, such as NRMSE and SSIM, with human observer performance in detecting subtle signals in undersampled images, using a U-Net model across several acceleration rates (see the undersampling sketch after this list).
  • Findings indicate that while human observers favored 2× acceleration, conventional metrics suggested 3× undersampling, highlighting a discrepancy between automated metrics and human perception in assessing image quality.
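
As a rough illustration of the acceleration rates discussed above, the sketch below simulates one-dimensional Cartesian undersampling of k-space at a rate R (keeping every R-th phase-encode line) followed by a zero-filled inverse FFT, i.e., the kind of aliased input a U-Net-style reconstruction network would typically receive. The regular sampling pattern and the absence of a fully sampled central region are assumptions for illustration only; the study's exact sampling mask is not described here.

```python
# Hypothetical sketch of 1D (phase-encode direction) Cartesian undersampling
# at an acceleration rate R. The every-R-th-line mask is an assumption; the
# paper's actual sampling pattern may differ.
import numpy as np

def undersample_1d(image: np.ndarray, R: int) -> np.ndarray:
    """Return the zero-filled reconstruction of `image` undersampled by R."""
    kspace = np.fft.fftshift(np.fft.fft2(image))   # fully sampled k-space
    mask = np.zeros_like(kspace, dtype=bool)
    mask[::R, :] = True                            # keep every R-th phase-encode line
    zero_filled = np.fft.ifft2(np.fft.ifftshift(kspace * mask))
    return np.abs(zero_filled)

# Example: apply the acceleration rates studied (2x-5x) to a toy image.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    phantom = rng.random((256, 256))
    for R in (2, 3, 4, 5):
        aliased = undersample_1d(phantom, R)
        print(f"R={R}: zero-filled image shape {aliased.shape}")
```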

Article Abstract

Purpose: Recent research explores using neural networks to reconstruct undersampled magnetic resonance images. Because of the complexity of the artifacts in the reconstructed images, there is a need to develop task-based approaches to image quality. We compared conventional global quantitative metrics for evaluating image quality in undersampled images generated by a neural network with human observer performance in a detection task. The purpose is to determine which acceleration (2×, 3×, 4×, or 5×) would be chosen using the conventional metrics and to compare it with the acceleration chosen based on human observer performance.

Approach: We used common global metrics for evaluating image quality: the normalized root mean squared error (NRMSE) and structural similarity (SSIM). These metrics are compared with a measure of image quality that incorporates a subtle signal for a specific task, allowing an image quality assessment that locally evaluates the effect of undersampling on the signal. We used a U-Net to reconstruct undersampled images with 2×, 3×, 4×, and 5× one-dimensional undersampling rates. Cross-validation was performed for a 500- and a 4000-image training set with both SSIM and MSE losses. A two-alternative forced choice (2-AFC) observer study was carried out for detecting a subtle signal (a small blurred disk) in images reconstructed with the 4000-image training set.
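
The two global metrics named above can be computed as in the minimal sketch below, using NumPy and scikit-image. Normalizing the RMSE by the reference image's dynamic range, and relying on scikit-image's default SSIM parameters, are assumptions; the paper may use different conventions.

```python
# Hypothetical sketch: computing the global metrics named in the abstract
# (NRMSE and SSIM) for a reconstruction against its fully sampled reference.
# The NRMSE normalization (reference dynamic range) is an assumption.
import numpy as np
from skimage.metrics import structural_similarity

def nrmse(reference: np.ndarray, reconstruction: np.ndarray) -> float:
    """Root mean squared error normalized by the reference dynamic range."""
    rmse = np.sqrt(np.mean((reference - reconstruction) ** 2))
    return rmse / (reference.max() - reference.min())

def global_metrics(reference: np.ndarray, reconstruction: np.ndarray) -> dict:
    """Return both global image-quality metrics for one image pair."""
    ssim = structural_similarity(
        reference, reconstruction, data_range=reference.max() - reference.min()
    )
    return {"NRMSE": nrmse(reference, reconstruction), "SSIM": ssim}

# Example usage with synthetic data standing in for a reconstructed MR slice.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.random((256, 256))
    recon = ref + 0.05 * rng.standard_normal((256, 256))
    print(global_metrics(ref, recon))
```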

Results: We found that for both loss functions, the human observer performance on the 2-AFC studies led to a choice of a 2× undersampling, but the SSIM and NRMSE led to a choice of a 3× undersampling.

Conclusions: For this detection task, which uses a subtle small signal at the edge of detectability, SSIM and NRMSE overestimated the undersampling achievable with a U-Net before a steep loss of image quality across the 2×, 3×, 4×, and 5× undersampling rates, when compared to the performance of human observers in the detection task.

Download full-text PDF

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11321363
DOI: http://dx.doi.org/10.1117/1.JMI.11.4.045503

Publication Analysis

Top Keywords

image quality (24)
human observer (16)
detection task (12)
2× 3× (12)
3× 4× (12)
4× 5× (12)
neural networks (8)
observer performance (8)
subtle signal (8)
undersampling rates (8)
