Rationale And Objectives: Evidence is inconsistent about whether radiologists' interpretive performance on a screening mammography test set reflects their performance in clinical practice. This study aimed to estimate the correlation between test set and clinical performance and to determine whether that correlation is influenced by the cancer prevalence or lesion difficulty of the test set.
Materials And Methods: This institutional review board-approved study randomized 83 radiologists from six Breast Cancer Surveillance Consortium registries to assess one of four test sets of 109 screening mammograms each; 48 radiologists completed a fifth test set of 110 mammograms 2 years later.
Purpose: The aim of this study was to assess agreement of mammographic interpretations by community radiologists with consensus interpretations of an expert radiology panel to inform approaches that improve mammographic performance.
Methods: From 6 mammographic registries, 119 community-based radiologists were recruited to assess 1 of 4 randomly assigned test sets of 109 screening mammograms with comparison studies, rating each examination as recall or no recall and, for recalls, reporting the most significant finding type (mass, calcifications, asymmetric density, or architectural distortion) and its location. The mean proportion of agreement with an expert radiology panel was calculated by cancer status, finding type, and difficulty level of identifying the finding, at the patient, breast, and lesion levels.
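As context for how such agreement proportions are typically computed, a minimal Python sketch follows; the function name, the simple case-by-case matching, and the example calls are illustrative assumptions, not the study's actual analysis code.

def proportion_agreement(reader_calls, expert_calls):
    # reader_calls, expert_calls: recall decisions (True/False), one entry per case
    # at the chosen level of analysis (patient, breast, or lesion).
    matches = sum(r == e for r, e in zip(reader_calls, expert_calls))
    return matches / len(expert_calls)

# Hypothetical example: one radiologist's calls versus the expert panel on five cases.
reader = [True, False, True, True, False]
expert = [True, False, False, True, False]
print(proportion_agreement(reader, expert))  # 0.8

The mean proportion of agreement would then be the average of these per-reader proportions within each stratum (cancer status, finding type, difficulty level).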
Purpose: Mammography technologists' level of training, years of experience, and feedback on technique may play an important role in the breast-cancer screening process. However, information on the mammography technologist workforce is scant.
Methods: In 2013, we mailed a survey to 912 mammography technologists working in 224 facilities certified under the Mammography Quality Standards Act in North Carolina.
Objective: The purpose of this study was to determine whether the technologist has an effect on the radiologists' interpretative performance of diagnostic mammography.
Materials And Methods: Using data from a community-based mammography registry from 1994 to 2009, we identified 162,755 diagnostic mammograms interpreted by 286 radiologists and performed by 303 mammographic technologists. We calculated sensitivity, false-positive rate, and positive predictive value (PPV) of the recommendation for biopsy from mammography for examinations performed (i.
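These measures are standard functions of the counts of true-positive, false-positive, true-negative, and false-negative examinations; a minimal sketch, with illustrative counts rather than study data:

def diagnostic_measures(tp, fp, tn, fn):
    # tp/fn: cancers with and without a biopsy recommendation
    # fp/tn: non-cancers with and without a biopsy recommendation
    sensitivity = tp / (tp + fn)          # share of cancers recommended for biopsy
    false_positive_rate = fp / (fp + tn)  # share of non-cancers recommended for biopsy
    ppv = tp / (tp + fp)                  # share of biopsy recommendations proving malignant
    return sensitivity, false_positive_rate, ppv

# Hypothetical counts, not study results.
print(diagnostic_measures(tp=80, fp=400, tn=9500, fn=20))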
Rationale And Objectives: To determine whether the mammographic technologist has an effect on the radiologists' interpretative performance of screening mammography in community practice.
Materials And Methods: In this institutional review board-approved retrospective cohort study, we included Carolina Mammography Registry data from 372 radiologists and 356 mammographic technologists who performed 1,003,276 screening mammograms from 1994 to 2009. Measures of interpretative performance (recall rate, sensitivity, specificity, positive predictive value [PPV1], and cancer detection rate [CDR]) were ascertained prospectively, with cancer outcomes collected from the state cancer registry and pathology reports.
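The screening performance measures named above can be written as simple functions of true-positive, false-positive, true-negative, and false-negative screens; a hedged sketch with made-up counts (the definitions are standard, but the numbers and function name are assumptions, not study data):

def screening_measures(tp, fp, tn, fn):
    # tp/fn: screen-detected and missed cancers; fp/tn: recalled and not-recalled non-cancers
    n = tp + fp + tn + fn
    recall_rate = (tp + fp) / n      # proportion of screens given an abnormal interpretation
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv1 = tp / (tp + fp)            # PPV of recall
    cdr = 1000 * tp / n              # cancer detection rate per 1,000 screens
    return recall_rate, sensitivity, specificity, ppv1, cdr

# Hypothetical counts, not study results.
print(screening_measures(tp=40, fp=900, tn=19000, fn=10))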