Performance of Chest CT-Based Artificial Intelligence Models in Distinguishing Pulmonary Mucormycosis, Invasive Pulmonary Aspergillosis, and Pulmonary Tuberculosis.

Med Mycol

National Center for Respiratory Medicine, National Clinical Research Center for Respiratory Disease, State Key Laboratory of Respiratory Disease, Guangzhou Institute of Respiratory Health, the First Affiliated Hospital of Guangzhou Medical University, Guangzhou, China.

Published: January 2025

In clinical practice, differentiating among pulmonary mucormycosis (PM), invasive pulmonary aspergillosis (IPA), and pulmonary tuberculosis (PTB) can be challenging. This study aimed to evaluate the performance of chest CT-based artificial intelligence (AI) models in distinguishing among these three diseases. Patients with confirmed PM, IPA, or PTB were retrospectively recruited from three tertiary hospitals. Two models were developed: an unannotated supervised training (UST) model trained with original CT images and an annotated supervised training (AST) model trained with manually annotated lesion images. A web-based questionnaire comprising 20 cases was designed to assess the performance of clinicians. Sensitivity, specificity, and accuracy were calculated for both models and the clinicians. A total of 61 PM cases, 136 IPA cases, and 155 PTB cases were included in the study. In the internal validation set, both models had an accuracy of 66.1%. The UST model had sensitivities of 27.3%, 73.9%, and 76.0% for PM, IPA, and PTB, respectively, while the AST model had sensitivities of 9.1%, 69.6%, and 88.0% for the same conditions. In the external validation set, both models had an accuracy of 57.6%. The UST model had sensitivities of 0%, 85.7%, and 53.3% for PM, IPA, and PTB, respectively, while the AST model had sensitivities of 0%, 42.9%, and 83.3%. The 112 clinicians had an accuracy of 42.9%, with sensitivities of 31.5%, 43.4%, and 48.0% for PM, IPA, and PTB, respectively. We demonstrated that the two AI models showed comparable performance in diagnosing the three diseases. Both models achieved acceptable sensitivity in detecting IPA and PTB but had low sensitivity in identifying PM.
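The study compares models and clinicians using per-class sensitivity and overall accuracy. As a rough illustration only (this is not code from the study, and all labels and values in it are hypothetical), the sketch below shows one way these metrics could be computed for the three-class PM/IPA/PTB setting.

# Minimal sketch, not taken from the study: per-class sensitivity and
# overall accuracy for the three-class PM/IPA/PTB task.
# All labels and values below are hypothetical and for illustration only.

CLASSES = ["PM", "IPA", "PTB"]

def per_class_sensitivity(y_true, y_pred):
    """Sensitivity (recall) per class: TP / (TP + FN)."""
    sensitivity = {}
    for cls in CLASSES:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p == cls)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p != cls)
        sensitivity[cls] = tp / (tp + fn) if (tp + fn) else float("nan")
    return sensitivity

def overall_accuracy(y_true, y_pred):
    """Fraction of cases assigned the correct diagnosis."""
    return sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)

# Toy example (hypothetical labels, not study data):
y_true = ["PM", "IPA", "PTB", "IPA", "PTB", "PM"]
y_pred = ["IPA", "IPA", "PTB", "IPA", "PM", "PM"]
print(per_class_sensitivity(y_true, y_pred))  # {'PM': 0.5, 'IPA': 1.0, 'PTB': 0.5}
print(overall_accuracy(y_true, y_pred))       # ~0.667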

Source
http://dx.doi.org/10.1093/mmy/myae123

Publication Analysis

Top Keywords

ipa ptb: 20
model sensitivities: 16
ust model: 12
ast model: 12
performance chest: 8
chest ct-based: 8
ct-based artificial: 8
artificial intelligence: 8
models: 8
intelligence models: 8
