
Predicting Perceived Reporting Complexity of Abdominopelvic Computed Tomography With Deep Learning

Objective: The purpose of this pilot study was to examine human and automated estimates of reporting complexity for computed tomography (CT) studies of the abdomen and pelvis.

Methods: A total of 1019 CT studies were reviewed and categorized into 3 complexity categories by 3 abdominal radiologists, and the majority classification was used as ground truth. Studies were randomized into a training set of 498 studies and a test set of 521 studies. A 2-stage neural network model was trained on the training set; the first-stage image-level classifier produces image embeddings that are used in the second-stage sequential model to provide a study-level prediction.
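The paper does not provide implementation details for the two-stage model, but the pipeline described above (an image-level classifier that produces embeddings, followed by a study-level aggregator over those embeddings) can be sketched in minimal form. Everything below is a stand-in: `image_encoder` is a deterministic placeholder for the trained first-stage classifier, and mean-pooling plus a linear head is a simplification of the second-stage sequential model; the embedding size, head weights, and toy study data are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def image_encoder(image, dim=64):
    # Stage 1 (stand-in): map one CT image to a fixed-length embedding.
    # In the study this is a trained image-level classifier; here we
    # derive a deterministic pseudo-embedding from the image contents.
    flat = np.asarray(image, dtype=np.float64).ravel()
    seed = int(abs(flat.sum() * 1e6)) % (2**32)
    return np.random.default_rng(seed).standard_normal(dim)

def study_level_prediction(images, weights, bias):
    # Stage 2 (stand-in): aggregate the per-image embeddings and apply
    # a linear head over the 3 complexity categories. The paper uses a
    # sequential model over the embedding sequence; mean-pooling is a
    # simplification that keeps the sketch self-contained.
    emb = np.stack([image_encoder(img) for img in images])
    pooled = emb.mean(axis=0)
    logits = pooled @ weights + bias
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()  # softmax over the 3 classes

# Toy "study" of 5 images, each 8x8, with untrained random head weights.
study = [rng.standard_normal((8, 8)) for _ in range(5)]
W = rng.standard_normal((64, 3))
b = np.zeros(3)
probs = study_level_prediction(study, W, b)
print(probs)  # 3 class probabilities summing to 1
```

The key design point the abstract implies is the separation of concerns: per-image representation learning happens once in stage one, so the study-level model only has to reason over a short sequence of embeddings rather than raw CT volumes.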

Results: All 3 human reviewers agreed on ratings for 470 of the 1019 studies (46%); at least 2 of the 3 reviewers agreed on ratings for 1010 studies (99%). After training, the neural network model predicted complexity labels that agreed with the radiologist consensus rating on 55% of the studies; 90% of the incorrect predicted categories were errors where the predicted category differed from the consensus rating by one level of complexity.
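The ground-truth construction and the agreement figures above follow directly from majority voting over three reviewers. A small self-contained sketch, on hypothetical study IDs and ratings (not the study's data), shows how the majority label and the two agreement counts are computed:

```python
from collections import Counter

ratings = {
    # hypothetical study_id -> ratings from 3 reviewers (1=low .. 3=high)
    "s1": (1, 1, 1),
    "s2": (2, 2, 3),
    "s3": (3, 3, 3),
    "s4": (1, 2, 2),
}

def majority_label(votes):
    # Majority classification used as ground truth: the label chosen by
    # at least 2 of the 3 reviewers, or None if all three disagree.
    label, count = Counter(votes).most_common(1)[0]
    return label if count >= 2 else None

ground_truth = {sid: majority_label(v) for sid, v in ratings.items()}

# Unanimous agreement (all 3 reviewers) vs. at-least-2-of-3 agreement.
full_agreement = sum(len(set(v)) == 1 for v in ratings.values())
two_of_three = sum(majority_label(v) is not None for v in ratings.values())

print(ground_truth)                  # {'s1': 1, 's2': 2, 's3': 3, 's4': 2}
print(full_agreement, two_of_three)  # 2 4
```

In the study's data these two counts correspond to the 470/1019 (46%) unanimous and 1010/1019 (99%) majority figures; with three reviewers and three ordered categories, a majority exists unless all three ratings differ.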

Conclusions: There is moderate interrater agreement in radiologist-perceived reporting complexity for CT studies of the abdomen and pelvis. Automated prediction of reporting complexity in radiology studies may be a useful adjunct to radiology practice analytics.

Source: DOI 10.1097/RCT.0000000000001324 (http://dx.doi.org/10.1097/RCT.0000000000001324)

Publication Analysis

Top Keywords (frequency):
- reporting complexity (16)
- studies (10)
- computed tomography (8)
- studies abdomen (8)
- 1019 studies (8)
- training set (8)
- neural network (8)
- network model (8)
- reviewers agreed (8)
- agreed ratings (8)
