Frequency and determinants of disagreement and error in Gleason scores: a population-based study of prostate cancer

Background: To examine factors that affect the accuracy and reliability of prostate cancer grading, we compared Gleason scores documented in pathology reports with those assigned by urologic pathologists in a population-based study.

Methods: A stratified random sample of 318 prostate cancer cases was selected to ensure representation of whites and African-Americans and to include facilities of various types. The slides borrowed from the reporting facilities were scanned, and the resulting digital images were re-reviewed by two urologic pathologists. If the two urologic pathologists disagreed, a third urologic pathologist was asked to help arrive at a final "gold standard" result. Agreement between reviewers, and between the pathology reports and the "gold standard," was examined by calculating kappa statistics. Determinants of discordance in Gleason scores were evaluated using multivariate models, with results expressed as odds ratios (OR) and 95% confidence intervals (CI).
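A minimal sketch, in Python with scikit-learn and statsmodels, of the two analyses named above: Cohen's kappa for chance-corrected agreement, and a logistic regression whose exponentiated coefficients give the odds ratios for discordance. The scores, the facility indicator, and all variable names below are hypothetical toy values, not study data.

import numpy as np
import statsmodels.api as sm
from sklearn.metrics import cohen_kappa_score

# Toy Gleason scores for eight cases: pathology-report value vs. expert review.
report = np.array([6, 7, 7, 8, 6, 9, 7, 6])
expert = np.array([6, 8, 8, 8, 6, 8, 7, 7])

# Unweighted Cohen's kappa: observed agreement corrected for chance agreement.
print(f"kappa = {cohen_kappa_score(report, expert):.2f}")

# Determinants of discordance: code each case 1 if the report and the expert
# disagree, then regress on facility type (a single toy indicator for small
# community hospital vs. the freestanding-laboratory reference).
discordant = (report != expert).astype(int)
small_hospital = np.array([0, 0, 1, 1, 0, 1, 0, 1])
fit = sm.Logit(discordant, sm.add_constant(small_hospital)).fit(disp=0)
print("OR:", np.exp(fit.params))          # exponentiated coefficients
print("95% CI:", np.exp(fit.conf_int()))  # CI on the odds-ratio scale

In a real analysis, the kappa for Gleason scores would often be weighted to credit near-misses, and the multivariate model would include the other case and facility covariates the study adjusted for; the two calls above only show the mechanics.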

Results: The kappa values reflecting agreement between the pathology reports and the "gold standard" were 0.61 (95% CI: 0.54, 0.68) for biopsies and 0.37 (95% CI: 0.23, 0.51) for prostatectomies. Sixty-three percent of discordant biopsies and 72% of discordant prostatectomies showed only minimal differences. Using freestanding laboratories as the reference, the likelihood of discordance between pathology reports and expert-assigned biopsy Gleason scores was particularly elevated for small community hospitals (OR = 2.98; 95% CI: 1.73, 5.14).
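The reported OR and CI are tied together through the log-odds coefficient: OR = exp(beta), and the 95% CI is exp(beta ± 1.96 · SE). The short Python check below reverse-engineers an illustrative SE from the published bounds (the abstract does not report the coefficient or SE) and confirms the figures are internally consistent.

import math

or_point = 2.98                        # reported odds ratio
beta = math.log(or_point)              # log-odds coefficient, ~1.09
# A Wald CI is symmetric on the log scale, so the SE is recoverable
# from the width of the published interval:
se = (math.log(5.14) - math.log(1.73)) / (2 * 1.96)   # ~0.28
lower = math.exp(beta - 1.96 * se)     # -> 1.73
upper = math.exp(beta + 1.96 * se)     # -> 5.14
print(f"OR = {or_point:.2f}, 95% CI: ({lower:.2f}, {upper:.2f})")
# Consistency check: the OR is the geometric mean of the bounds.
print(f"geometric mean of bounds = {math.sqrt(1.73 * 5.14):.2f}")  # -> 2.98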

Conclusions: The level of agreement between pathology reports and expert review depends on the type of diagnosing facility, but may also depend on the level of expertise and specialization of individual pathologists.

Download full-text PDF

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3339279
DOI: http://dx.doi.org/10.1002/pros.22484

Publication Analysis

Top Keywords

pathology reports: 20
gleason scores: 16
prostate cancer: 12
urologic pathologists: 12
"gold standard": 12
reports "gold: 8
agreement pathology: 8
pathology: 5
reports: 5
frequency determinants: 4
