Objective: To investigate the repeatability and reproducibility of the presence of a circumferential femoral head osteophyte (CFHO), a curvilinear caudolateral osteophyte (CCO), osteosclerosis of the cranial acetabular edge (Scler CrAE), degenerative joint disease (DJD), and the diagnosis of suspected canine hip dysplasia (CHD) in different groups of experienced observers.
Study Design: Cross-sectional study.
Sample Population: Standard hip extended radiographs (n = 50).
Methods: Nine experienced observers were divided into 3 groups: surgeons (DECVS), radiologists (DECVDI), and non-board-certified observers (NBC). The same observers were also divided into 2 subgroups (academics and non-academics). Cohen's kappa (κ) was calculated for CFHO, CCO, Scler CrAE, DJD, and suspected CHD, and weighted κ was calculated for the DJD score, to determine inter- and intraobserver agreement.
Results: Intraobserver agreement on CFHO, CCO, Scler CrAE, DJD, and suspected CHD ranged from slight to almost perfect, but was not significantly different between NBC, DECVS, and DECVDI. Radiologists and non-board-certified observers scored the overall DJD score more uniformly than surgeons, as did academics versus non-academics. Interobserver agreement for NBC was more uniform than that of radiologists and surgeons on CCO and DJD. NBC and radiologists scored more uniformly than surgeons on CFHO, and radiologists scored more uniformly than NBC and surgeons on Scler CrAE. Academics scored more uniformly than non-academics, but the difference was significant only for Scler CrAE.
Conclusions: Recognition of specific radiographic markers is only fairly reliable within and between experienced observers. Therefore, care must be taken when applying these traits in official screening, surgical decision-making, and scientific research.
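The agreement statistic used in the Methods above, Cohen's kappa, compares observed agreement between two raters against the agreement expected by chance from each rater's label frequencies. As a minimal illustration (the function name and the toy ratings below are ours, not data from the study), it can be sketched in Python as:

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Unweighted Cohen's kappa for two raters scoring the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is chance agreement implied by each rater's marginals.
    """
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    # Observed agreement: fraction of items both raters scored identically.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement: product of the raters' marginal label frequencies.
    counts_a = Counter(ratings_a)
    counts_b = Counter(ratings_b)
    p_e = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Toy example: two raters scoring presence (1) / absence (0) of a marker
# on four radiographs; they disagree on one of the four items.
print(cohens_kappa([1, 1, 0, 0], [1, 0, 0, 0]))  # → 0.5
```

The weighted κ the study applies to the ordinal DJD score generalizes this by penalizing larger score disagreements more heavily; libraries such as scikit-learn expose both variants via `sklearn.metrics.cohen_kappa_score`.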
DOI: http://dx.doi.org/10.1111/j.1532-950X.2014.12309.x