The number of artificial intelligence (AI) algorithms evaluating [supine] chest radiographs ([S]CXRs) has increased remarkably in recent years. Since training and validation are often performed on subsets of the same overall dataset, external validation is mandatory to reproduce results and reveal potential training errors. We applied multicohort benchmarking to the publicly accessible (S)CXR-analyzing AI algorithm CheXNet, comprising three clinically relevant study cohorts that differ in patient positioning ([S]CXRs), the applied reference standard (CT- vs. [S]CXR-based), and whether algorithm classification can additionally be compared with the reading performance of different medical experts.
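In practice, this kind of external validation amounts to scoring the same pretrained classifier on each cohort separately and comparing discrimination. The following minimal sketch assumes hypothetical per-cohort CSV exports with a binary ground-truth column and a precomputed CheXNet probability column; file names, column names, and cohort labels are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal external-validation sketch: per-cohort AUC for a pretrained
# pneumothorax classifier such as CheXNet. File and column names are
# assumptions for illustration only.
import pandas as pd
from sklearn.metrics import roc_auc_score

COHORTS = {
    "supine_ct_reference": "cohort_supine_ct.csv",     # hypothetical exports with
    "supine_cxr_reference": "cohort_supine_cxr.csv",   # columns: label, model_score
    "erect_expert_read": "cohort_erect_experts.csv",
}

for name, path in COHORTS.items():
    df = pd.read_csv(path)
    auc = roc_auc_score(df["label"], df["model_score"])
    print(f"{name}: n={len(df)}, AUC={auc:.3f}")
```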
Eur Radiol, October 2021
Objectives: Diagnostic accuracy of artificial intelligence (AI) pneumothorax (PTX) detection in chest radiographs (CXRs) is limited by the noisy annotation quality of public training data and by confounding thoracic tubes (TTs). We hypothesize that in-image annotations of the dehiscent visceral pleura for algorithm training boost the algorithm's performance and suppress confounders.
Methods: Our single-center evaluation cohort of 3062 supine CXRs includes 760 PTX-positive cases with radiological annotations of PTX size and inserted TTs.
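One way to make the suspected TT confounding measurable on such an evaluation cohort is to compare the false-positive rate on PTX-negative radiographs with and without an inserted TT. The sketch below assumes a hypothetical CSV export with `ptx_label`, `thoracic_tube`, and `model_score` columns and an arbitrary 0.5 operating point; none of these names are taken from the study itself.

```python
# Sketch of a confounder check: does the false-positive rate on
# PTX-negative radiographs differ between cases with and without a
# thoracic tube? Column names and the 0.5 threshold are assumptions.
import pandas as pd

df = pd.read_csv("evaluation_cohort.csv")       # hypothetical cohort export
negatives = df[df["ptx_label"] == 0]

for has_tt, group in negatives.groupby("thoracic_tube"):
    fpr = (group["model_score"] >= 0.5).mean()  # fraction of negatives flagged positive
    print(f"thoracic_tube={has_tt}: n={len(group)}, false-positive rate={fpr:.3f}")
```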
Objectives: We hypothesized that the published performance of algorithms for artificial intelligence (AI) pneumothorax (PTX) detection in chest radiographs (CXRs) does not sufficiently consider the influence of PTX size or the confounding effects caused by thoracic tubes (TTs). We therefore established a radiologically annotated benchmarking cohort (n = 6446) allowing for a detailed subgroup analysis.
Materials and Methods: We retrospectively identified 6434 supine CXRs, among them 1652 PTX-positive and 4782 PTX-negative cases.
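A subgroup analysis on such a cohort typically stratifies sensitivity by annotated PTX size and TT status. The sketch below is illustrative only: the column names, size bins, and 0.5 operating point are assumptions rather than the study's actual definitions.

```python
# Sketch of a subgroup analysis: sensitivity at a fixed operating point,
# stratified by annotated PTX size and thoracic-tube status.
# Column names, size bins, and the threshold are illustrative assumptions.
import pandas as pd

df = pd.read_csv("benchmark_cohort.csv")                  # hypothetical cohort export
positives = df[df["ptx_label"] == 1].copy()
positives["size_bin"] = pd.cut(
    positives["ptx_size_cm"],
    bins=[0, 1, 2, 3, float("inf")],
    labels=["<1 cm", "1-2 cm", "2-3 cm", ">3 cm"],
)

table = (
    positives.assign(detected=positives["model_score"] >= 0.5)
    .groupby(["size_bin", "thoracic_tube"], observed=True)["detected"]
    .agg(["mean", "size"])
    .rename(columns={"mean": "sensitivity", "size": "n"})
)
print(table)
```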