Objective: The performance of 18F-FDG PET-based radiomics and deep learning in detecting pathological regional nodal metastasis (pN+) in resectable lung adenocarcinoma varies, and their use across different generations of PET scanners has not been thoroughly investigated. We compared handcrafted radiomics and deep learning applied to different PET scanners for predicting pN+ in resectable lung adenocarcinoma.

Methods: We retrospectively analyzed pretreatment 18F-FDG PET scans from 148 lung adenocarcinoma patients who underwent curative surgery. Patients were separated into analog (n = 131) and digital (n = 17) PET cohorts. Handcrafted radiomics features and a ResNet-50 deep-learning model of the primary tumor were used to predict pN+ status. Models were trained in the analog PET cohort, and the digital PET cohort was used for cross-scanner validation.
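As a rough illustration of the deep-learning arm, the sketch below fine-tunes an ImageNet-pretrained ResNet-50 for binary pN+ versus pN0 classification of tumor-centered PET crops. This is a minimal sketch in PyTorch, not the authors' published pipeline: the crop size, channel replication, placeholder tensors, optimizer, and epoch count are assumptions made for illustration only.

```python
# Minimal sketch (not the study's code): fine-tune an ImageNet-pretrained
# ResNet-50 to classify primary-tumor PET crops as pN+ vs. pN0.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"

# Placeholder tensors stand in for tumor-centered PET crops (replicated to
# 3 channels, 224x224) and binary pN labels from the analog PET cohort.
crops = torch.randn(32, 3, 224, 224)
labels = torch.randint(0, 2, (32,))
train_loader = DataLoader(TensorDataset(crops, labels), batch_size=8, shuffle=True)

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)   # two classes: pN0, pN+
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(5):                          # epoch count is illustrative
    for x, y in train_loader:
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
```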

Results: In the analog PET cohort, entropy, a handcrafted radiomics feature, independently predicted pN+. However, the area under the receiver-operating-characteristic curve (AUC) and accuracy for entropy were only 0.676 and 62.6%, respectively. The ResNet-50 model demonstrated a better AUC and accuracy of 0.929 and 94.7%, respectively. In the digital PET validation cohort, the ResNet-50 model also demonstrated a better AUC (0.871 versus 0.697) and accuracy (88.2% versus 64.7%) than entropy. The ResNet-50 model achieved specificity comparable to that of visual interpretation but superior sensitivity (83.3% versus 66.7%) in the digital PET cohort.
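For context on how such figures are typically derived, the snippet below computes AUC, accuracy, sensitivity, and specificity for two competing predictors on a held-out cohort using scikit-learn. The labels, scores, and the 0.5 decision threshold are illustrative placeholders, not the study's data.

```python
# Illustrative evaluation of two predictors (e.g., an entropy feature vs. a
# ResNet-50 output score) on a validation cohort; all values are placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score, accuracy_score, confusion_matrix

y_true = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])          # 1 = pN+
entropy_scores = np.array([0.2, 0.4, 0.6, 0.3, 0.5, 0.4, 0.7, 0.6, 0.8, 0.5])
resnet_scores = np.array([0.1, 0.2, 0.3, 0.4, 0.2, 0.8, 0.9, 0.7, 0.6, 0.85])

for name, scores in [("entropy", entropy_scores), ("ResNet-50", resnet_scores)]:
    auc = roc_auc_score(y_true, scores)
    y_pred = (scores >= 0.5).astype(int)                    # threshold is illustrative
    acc = accuracy_score(y_true, y_pred)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    print(f"{name}: AUC={auc:.3f} acc={acc:.1%} sens={sensitivity:.1%} spec={specificity:.1%}")
```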

Conclusion: Applying deep learning across different generations of PET scanners appears feasible and may predict pN+ better than handcrafted radiomics. Deep learning may complement visual interpretation and facilitate tailored therapeutic strategies for resectable lung adenocarcinoma.


Source
http://dx.doi.org/10.1097/MNM.0000000000001776

Publication Analysis

Top Keywords

deep learning (20); radiomics deep (16); resectable lung (16); lung adenocarcinoma (16); handcrafted radiomics (16); predict pn+ (12); pet cohort (12); digital pet (12); resnet-50 model (12); pet (10)

Similar Publications

CryoSamba: Self-supervised deep volumetric denoising for cryo-electron tomography data.

J Struct Biol

December 2024

Program in Cellular and Molecular Medicine, Boston Children's Hospital, 200 Longwood Ave, Boston, MA 02115, USA; Department of Cell Biology, Harvard Medical School, 200 Longwood Ave, Boston, MA 02115, USA; Department of Pediatrics, Harvard Medical School, 200 Longwood Ave, Boston, MA 02115, USA.

Cryogenic electron tomography (cryo-ET) has rapidly advanced as a high-resolution imaging tool for visualizing subcellular structures in 3D with molecular detail. Direct image inspection remains challenging due to inherent low signal-to-noise ratios (SNR). We introduce CryoSamba, a self-supervised deep learning-based model designed for denoising cryo-ET images.


Robust multi-modal fusion architecture for medical data with knowledge distillation.

Comput Methods Programs Biomed

December 2024

School of Biomedical Engineering, Capital Medical University, No.10, Xitoutiao, You An Men, Fengtai District, Beijing 100069, China; Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, No.10, Xitoutiao, You An Men, Fengtai District, Beijing 100069, China.

Background: The fusion of multi-modal data has been shown to significantly enhance the performance of deep learning models, particularly on medical data. However, missing modalities are common in medical data due to patient specificity, which poses a substantial challenge to the application of these models.

Objective: This study aimed to develop a novel and efficient multi-modal fusion framework for medical datasets that maintains consistent performance, even in the absence of one or more modalities.


AI model using CT-based imaging biomarkers to predict hepatocellular carcinoma in patients with chronic hepatitis B.

J Hepatol

December 2024

Department of Internal Medicine and Liver Research Institute, Seoul National University College of Medicine, Seoul, Korea; Inocras Inc., San Diego, CA, USA.

Background & Aims: Various hepatocellular carcinoma (HCC) prediction models have been proposed for patients with chronic hepatitis B (CHB) using clinical variables. We aimed to develop an artificial intelligence (AI)-based HCC prediction model by incorporating imaging biomarkers derived from abdominal computed tomography (CT) images along with clinical variables.

Methods: An AI prediction model employing a gradient-boosting machine algorithm was developed utilizing imaging biomarkers extracted by DeepFore, a deep learning-based CT auto-segmentation software.


Combination of deep learning reconstruction and quantification for dynamic contrast-enhanced (DCE) MRI.

Magn Reson Imaging

December 2024

Weill Cornell Graduate School of Medical Sciences, New York, NY, USA; Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, USA; Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, USA.

Dynamic contrast-enhanced (DCE) MRI is an important imaging tool for evaluating tumor vascularity that can lead to improved characterization of tumor extent and heterogeneity, and for early assessment of treatment response. However, clinical adoption of quantitative DCE-MRI remains limited due to challenges in acquisition and quantification performance, and lack of automated tools. This study presents an end-to-end deep learning pipeline that exploits a novel deep reconstruction network called DCE-Movienet with a previously developed deep quantification network called DCE-Qnet for fast and quantitative DCE-MRI.


Purpose: Automated treatment plan generation is essential for magnetic resonance imaging (MRI)-guided adaptive radiotherapy (MRIgART) to ensure standardized treatment-plan quality. We proposed a novel cross-technique transfer learning (CTTL)-based strategy for online MRIgART autoplanning.

Method: We retrospectively analyzed the data from 210 rectal cancer patients.

