Detecting Kirsten Rat Sarcoma Virus (KRAS) gene mutations is significant for colorectal cancer (CRC) patients. The KRAS gene encodes a protein involved in the epidermal growth factor receptor (EGFR) signalling pathway, and mutations in this gene can undermine anti-EGFR monoclonal antibody therapy and affect treatment decisions. Currently, commonly used methods such as next-generation sequencing (NGS) identify KRAS mutations but are expensive, time-consuming, and may not be suitable for every cancer patient sample. To address these challenges, we have developed KRASformer, a novel framework that predicts KRAS gene mutations from Haematoxylin and Eosin (H&E)-stained whole-slide images (WSIs), which are widely available for most CRC patients. KRASformer consists of two stages: the first stage filters out non-tumour regions and selects only tumour cells using a quality-screening mechanism, and the second stage classifies the KRAS gene as either 'wildtype' or 'mutant' using a Vision Transformer-based XCiT method. XCiT employs cross-covariance attention to capture clinically meaningful long-range representations of textural patterns in tumour tissue and KRAS-mutant cells. We evaluated the first stage on the independent CRC-5000 dataset, and evaluated the second stage on both The Cancer Genome Atlas colon and rectal cancer (TCGA-CRC-DX) and in-house cohorts. Our experiments showed that XCiT outperformed existing state-of-the-art methods, achieving areas under the ROC curve (AUCs) of 0.691 and 0.653 on the TCGA-CRC-DX and in-house datasets, respectively. Our findings emphasize three key consequences: the potential of H&E-stained tissue slide images for predicting KRAS gene mutations as a cost-effective and time-efficient means of guiding treatment choice for CRC patients; the performance gains of a Transformer-based model; and the value of collaboration between pathologists and data scientists in deriving a morphologically meaningful model.
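The second stage above rests on XCiT's cross-covariance attention, which attends over feature channels rather than over patch tokens, so its cost is quadratic in the embedding dimension instead of the (very large) number of WSI patches. The following is a minimal NumPy sketch of that operation under the formulation in the XCiT literature; the function names, shapes, and temperature value are illustrative, not taken from the paper's implementation:

```python
import numpy as np

def _l2_normalize(x, axis, eps=1e-8):
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def _softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_covariance_attention(q, k, v, temperature=1.0):
    """Cross-covariance attention (XCA) for a single head.

    q, k, v: (num_tokens, dim) projections of the patch tokens.
    The attention map is a (dim x dim) channel-covariance matrix,
    so cost scales with the embedding dim, not the token count.
    """
    q_hat = _l2_normalize(q, axis=0)   # unit-norm columns over tokens
    k_hat = _l2_normalize(k, axis=0)
    attn = _softmax((k_hat.T @ q_hat) / temperature, axis=-1)  # (dim, dim)
    return v @ attn                    # (num_tokens, dim)

rng = np.random.default_rng(0)
tokens = rng.standard_normal((196, 64))  # e.g. 14 x 14 grid of patch tokens
out = cross_covariance_attention(tokens, tokens, tokens)
print(out.shape)  # (196, 64)
```

Because the token count never enters the attention matrix, the same operation applies unchanged whether a slide yields hundreds or tens of thousands of tumour patches, which is what makes this family of models attractive for WSI-scale inputs.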

DOI: http://dx.doi.org/10.1088/2057-1976/ad5bed

Similar Publications

Optimizing Transformer-Based Network via Advanced Decoder Design for Medical Image Segmentation.

Biomed Phys Eng Express

January 2025

Shandong University, No. 72, Binhai Road, Jimo, Qingdao City, Shandong Province, Qingdao, 266200, China.

U-Net is widely used in medical image segmentation due to its simple and flexible architecture design. To address the challenges of scale and complexity in medical tasks, several variants of U-Net have been proposed. In particular, methods based on Vision Transformer (ViT), represented by Swin UNETR, have gained widespread attention in recent years.

Vision transformer-based multimodal fusion network for classification of tumor malignancy on breast ultrasound: A retrospective multicenter study.

Int J Med Inform

January 2025

School of Computer Science and Engineering, Hubei Key Laboratory of Intelligent Robot, Wuhan Institute of Technology, Wuhan, PR China.

Background: In the context of routine breast cancer diagnosis, the precise discrimination between benign and malignant breast masses holds utmost significance. Notably, few prior investigations have concurrently explored the integration of imaging histology features, deep learning characteristics, and clinical parameters. The primary objective of this retrospective study was to pioneer a multimodal feature fusion model tailored for the prediction of breast tumor malignancy, harnessing the potential of ultrasound images.

AVP-GPT2: A Transformer-Powered Platform for De Novo Generation, Screening, and Explanation of Antiviral Peptides.

Viruses

December 2024

Beijing Youcare Kechuang Pharmaceutical Technology Co., Ltd., Beijing 100176, China.

Human respiratory syncytial virus (RSV) remains a significant global health threat, particularly for vulnerable populations. Despite extensive research, effective antiviral therapies are still limited. To address this urgent need, we present AVP-GPT2, a deep-learning model that significantly outperforms its predecessor, AVP-GPT, in designing and screening antiviral peptides.

Background: Food image recognition, a crucial step in computational gastronomy, has diverse applications across nutritional platforms. Convolutional neural networks (CNNs) are widely used for this task due to their ability to capture hierarchical features. However, they struggle with long-range dependencies and global feature extraction, which are vital in distinguishing visually similar foods or images where the context of the whole dish is crucial, thus necessitating transformer architecture.

Purpose: The purpose of this study was to develop a deep learning approach that restores artifact-laden optical coherence tomography (OCT) scans and predicts functional loss on the 24-2 Humphrey Visual Field (HVF) test.

Methods: This cross-sectional, retrospective study used 1674 visual field (VF)-OCT pairs from 951 eyes for training and 429 pairs from 345 eyes for testing. Peripapillary retinal nerve fiber layer (RNFL) thickness map artifacts were corrected using a generative diffusion model.
