Background: The National Lung Screening Trial (NLST) demonstrated a reduction in lung cancer mortality with low-dose CT (LDCT) screening compared with chest x-ray (CXR). Overdiagnosis was high (79%) among bronchioloalveolar carcinomas (BAC), a category since replaced by adenocarcinoma in situ (AIS), minimally invasive adenocarcinoma (MIA), and adenocarcinoma of low malignant potential (LMP), all of which exhibit 100% disease-specific survival (DSS).
Objective: To compare the outcomes and proportions of BAC, AIS, MIA, and LMP among NLST screen-detected stage IA NSCLC, and to estimate the corresponding overdiagnosis rate.
Microscopic vascular invasion (VI) is predictive of recurrence and of benefit from lobectomy in stage I lung adenocarcinoma (LUAD), but it is difficult to assess in resection specimens and cannot be accurately predicted before surgery. Thus, new biomarkers are needed to identify this aggressive subset of stage I LUAD tumors. To assess molecular and microenvironment features associated with angioinvasive LUAD, we profiled 162 resected stage I tumors with and without VI by RNA-seq and explored spatial patterns of gene expression in a subset of 15 samples by high-resolution spatial transcriptomics (stRNA-seq).
Bronchial premalignant lesions (PMLs) precede the development of invasive lung squamous cell carcinoma (LUSC), posing a significant challenge in distinguishing those likely to advance to LUSC from those that might regress without intervention. This study applied a novel computational approach, the Graph Perceiver Network, which leverages hematoxylin and eosin-stained whole slide images to stratify endobronchial biopsies of PMLs along a spectrum from normal to tumor lung tissue. The Graph Perceiver Network outperformed existing frameworks in classification accuracy when predicting LUSC, lung adenocarcinoma, and nontumor lung tissue on The Cancer Genome Atlas and Clinical Proteomic Tumor Analysis Consortium lung resection datasets, while efficiently generating pathologist-aligned, class-specific heatmaps.
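As a rough illustration of this kind of pipeline (not the published Graph Perceiver Network), the sketch below shows a generic graph convolution over whole-slide-image patch embeddings, followed by mean pooling and a tissue-class head. All dimensions, the random adjacency construction, and the three-class setup are hypothetical placeholders.

import torch
import torch.nn as nn

class PatchGraphClassifier(nn.Module):
    """Toy graph network over WSI patch embeddings (illustrative only)."""
    def __init__(self, feat_dim=512, hidden=128, n_classes=3):
        super().__init__()
        self.lin1 = nn.Linear(feat_dim, hidden)
        self.lin2 = nn.Linear(hidden, hidden)
        self.head = nn.Linear(hidden, n_classes)  # e.g., LUSC / LUAD / nontumor

    def forward(self, x, adj):
        # x: (N, feat_dim) patch embeddings; adj: (N, N) row-normalized adjacency
        h = torch.relu(adj @ self.lin1(x))   # propagate features between neighboring patches
        h = torch.relu(adj @ self.lin2(h))
        return self.head(h.mean(dim=0))      # mean-pool nodes into slide-level logits

# Toy usage: 100 patches with random features and a random normalized adjacency.
x = torch.randn(100, 512)
adj = torch.softmax(torch.randn(100, 100), dim=-1)
logits = PatchGraphClassifier()(x, adj)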
IEEE Trans Med Imaging
September 2024
Multimodal machine learning models are being developed to analyze pathology images and other modalities, such as gene expression, to gain clinical and biological insights. However, most frameworks for multimodal data fusion do not fully account for the interactions between different modalities. Here, we present an attention-based fusion architecture that integrates a graph representation of pathology images with gene expression data and concomitantly learns from the fused information to predict patient-specific survival.
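To make the fusion idea concrete, here is a minimal, hypothetical PyTorch sketch of cross-attention between a graph-derived pathology embedding and a gene expression vector, feeding a survival risk head. The layer sizes, token counts, and single-query design are assumptions for illustration, not the authors' architecture.

import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Illustrative cross-modal attention fusion (assumed shapes and sizes)."""
    def __init__(self, img_dim=256, expr_dim=1000, hidden=128):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, hidden)    # project pathology-graph node embeddings
        self.expr_proj = nn.Linear(expr_dim, hidden)  # project bulk gene expression
        # Cross-attention: the expression token queries the image tokens,
        # so the model learns interactions between the two modalities.
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.risk_head = nn.Sequential(nn.LayerNorm(hidden), nn.Linear(hidden, 1))

    def forward(self, img_tokens, expr):
        # img_tokens: (B, N, img_dim) node embeddings from the image graph
        # expr: (B, expr_dim) gene expression vector per patient
        q = self.expr_proj(expr).unsqueeze(1)   # (B, 1, hidden) query token
        kv = self.img_proj(img_tokens)          # (B, N, hidden) keys/values
        fused, _ = self.attn(q, kv, kv)         # fused cross-modal representation
        return self.risk_head(fused.squeeze(1)) # (B, 1) patient-specific risk score

model = AttentionFusion()
risk = model(torch.randn(2, 50, 256), torch.randn(2, 1000))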