Accurate preoperative recurrence prediction for non-small cell lung cancer (NSCLC) remains a challenging problem in the medical field. Existing studies either conduct image and molecular analyses independently or directly fuse multimodal information through radiomics and genomics; they therefore neither fully exploit the highly heterogeneous cross-modal information at different levels nor model the complex relationships between modalities, which degrades fusion performance and becomes the bottleneck of precise recurrence prediction. To address these limitations, we propose a novel unified framework, the Self-and-Mutual Attention (SAMA) Network, which efficiently fuses macroscopic CT images and microscopic gene data for precise NSCLC recurrence prediction by integrating handcrafted features, deep features, and gene features. Specifically, we design a Self-and-Mutual Attention Module that performs three-stage fusion: the self-enhancement stage strengthens modality-specific features; the gene-guided and CT-guided cross-modality fusion stages apply bidirectional cross-guidance to the self-enhanced features, complementing and refining each modality and enriching heterogeneous feature expression; and the feature aggregation stage consolidates the refined interactive features for precise prediction. Extensive experiments on publicly available datasets from The Cancer Imaging Archive (TCIA) and The Cancer Genome Atlas (TCGA) demonstrate that our method achieves state-of-the-art performance and exhibits broad applicability to various cancers.
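The three-stage fusion described above can be sketched with scaled dot-product attention. This is a minimal NumPy illustration of the general self-then-cross attention pattern, not the paper's implementation: the learned projection matrices, feature dimensions, token counts, and the mean-pooling aggregation used here are all assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # scaled dot-product attention: softmax(QK^T / sqrt(d)) V
    d = q.shape[-1]
    return softmax(q @ k.T / np.sqrt(d)) @ v

def sama_fuse(ct, gene):
    """Sketch of self-and-mutual attention fusion (shapes hypothetical)."""
    # Stage 1: self-enhancement of each modality via self-attention
    ct_s = attention(ct, ct, ct)
    gene_s = attention(gene, gene, gene)
    # Stage 2: bidirectional cross-guidance on the self-enhanced features
    ct_guided = attention(ct_s, gene_s, gene_s)      # gene-guided CT features
    gene_guided = attention(gene_s, ct_s, ct_s)      # CT-guided gene features
    # Stage 3: aggregation — mean-pool each modality's tokens, then concatenate
    # (the paper's aggregation scheme may differ)
    return np.concatenate([ct_guided.mean(axis=0), gene_guided.mean(axis=0)])

rng = np.random.default_rng(0)
ct = rng.standard_normal((16, 64))    # e.g. 16 CT feature tokens, dim 64
gene = rng.standard_normal((8, 64))   # e.g. 8 gene feature tokens, dim 64
fused = sama_fuse(ct, gene)           # fused vector of dim 128
```

In a trained network each attention call would use separate learned query/key/value projections; they are omitted here to keep the data flow of the three stages visible.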
DOI: http://dx.doi.org/10.1109/JBHI.2024.3471194