Content-based image retrieval in radiology: analysis of variability in human perception of similarity.

J Med Imaging (Bellingham)

Stanford University, Department of Radiology, James H. Clark Center, 318 Campus Drive, W3.1, Stanford, California 94305-5441, United States.

Published: April 2015

AI Article Synopsis

  • The study explores how radiologists and non-radiologists perceive similarity in focal CT liver images to help create reference sets for image retrieval systems.
  • Observers rated the similarity of 136 pairs of lesions based on overall similarity and five specific features, revealing mostly bimodal distributions in their ratings.
  • Intra-reader agreement scores were moderate to high (0.57 to 0.86), while inter-reader agreement was lower (0.24 to 0.58), indicating significant variability in how different observers rate similarity.

Article Abstract

We aim to develop a better understanding of perception of similarity in focal computed tomography (CT) liver images to determine the feasibility of techniques for developing reference sets for training and validating content-based image retrieval systems. In an observer study, four radiologists and six nonradiologists assessed overall similarity and similarity in five image features in 136 pairs of focal CT liver lesions. We computed intra- and inter-reader agreements in these similarity ratings and viewed the distributions of the ratings. The readers' ratings of overall similarity and similarity in each feature primarily appeared to be bimodally distributed. Median Kappa scores for intra-reader agreement ranged from 0.57 to 0.86 in the five features and from 0.72 to 0.82 for overall similarity. Median Kappa scores for inter-reader agreement ranged from 0.24 to 0.58 in the five features and were 0.39 for overall similarity. There was no significant difference in agreement for radiologists and nonradiologists. Our results show that developing perceptual similarity reference standards is a complex task. Moderate to high inter-reader variability precludes ease of dividing up the workload of rating perceptual similarity among many readers, while low intra-reader variability may make it possible to acquire large volumes of data by asking readers to view image pairs over many sessions.
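The agreement statistic the abstract reports as medians is Cohen's kappa. As a rough, hypothetical illustration only (the study's rating scale, weighting scheme, and data are not reproduced here), the sketch below computes unweighted Cohen's kappa for two invented readers labeling lesion pairs as similar (1) or dissimilar (0):

```python
# Hypothetical sketch: unweighted Cohen's kappa, the agreement statistic
# the abstract reports as medians. The ratings below are invented binary
# similar/dissimilar labels, not the study's data.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labeled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal label frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_e = sum((counts_a[label] / n) * (counts_b[label] / n)
              for label in set(rater_a) | set(rater_b))
    if p_e == 1.0:
        return 1.0  # degenerate case: both raters always give one identical label
    return (p_o - p_e) / (1 - p_e)

# Two hypothetical readers rating ten lesion pairs: 1 = similar, 0 = not.
reader1 = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
reader2 = [1, 0, 0, 1, 0, 1, 1, 1, 0, 1]
print(f"kappa = {cohens_kappa(reader1, reader2):.2f}")  # kappa = 0.58
```

With these made-up ratings, observed agreement is 0.80 against a chance expectation of 0.52, giving kappa of about 0.58, which happens to fall within the inter-reader range the abstract reports.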

Download full-text PDF

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4478987
DOI: http://dx.doi.org/10.1117/1.JMI.2.2.025501

Publication Analysis

Top Keywords

similarity (11)
content-based image (8)
image retrieval (8)
perception similarity (8)
radiologists nonradiologists (8)
similarity similarity (8)
median kappa (8)
kappa scores (8)
agreement ranged (8)
perceptual similarity (8)
