Purpose: Differentiating primary central nervous system lymphoma (PCNSL) from glioblastoma (GBM) is crucial because their prognoses and treatments differ substantially. Manual examination of histological characteristics is considered the gold standard in clinical diagnosis. However, this process is tedious and time-consuming, and the morphological similarity between the two tumor types, combined with tumor heterogeneity, can lead to misdiagnosis. Existing research focuses on radiological differentiation, mostly using multi-parametric magnetic resonance imaging. By contrast, we investigate pathological differentiation between the two tumor types using whole slide images (WSIs) of postoperative formalin-fixed paraffin-embedded samples.
Approach: To learn specific and intrinsic histological feature representations from the WSI patches, a self-supervised feature extractor is trained. The patch representations are then fed into a weakly supervised multiple-instance learning (MIL) model, which fuses them for WSI classification. We validate our approach on 134 PCNSL and 526 GBM cases collected from three hospitals. We also investigate the effect of feature extraction on the final prediction by comparing the performance of feature extractors trained on PCNSL/GBM slides from specific institutions, on multi-site PCNSL/GBM slides, and on large-scale histopathological images.
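Since the attention heatmaps mentioned in the conclusions imply an attention-based MIL aggregator, the fusion step can be sketched roughly as follows. This is an illustrative pure-Python sketch, not the authors' implementation: the linear scorer `w_score` stands in for a learned attention network, and the embeddings are placeholders for the self-supervised patch features.

```python
import math

def attention_mil_pool(patch_embeddings, w_score):
    """Attention pooling over patch embeddings: score each patch, softmax
    the scores, and return the weighted slide-level embedding together with
    the per-patch attention weights (usable for a heatmap)."""
    # Score each patch with a simple linear scorer (a stand-in for the
    # learned attention network).
    scores = [sum(w * x for w, x in zip(w_score, emb)) for emb in patch_embeddings]
    # Softmax over patches -> attention weights that sum to 1.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    attn = [e / total for e in exps]
    # Attention-weighted sum of patch embeddings -> slide representation.
    dim = len(patch_embeddings[0])
    slide_emb = [sum(a * emb[d] for a, emb in zip(attn, patch_embeddings))
                 for d in range(dim)]
    return slide_emb, attn
```

The attention weights double as the interpretability signal: mapping them back to patch coordinates produces the discriminant-region heatmap described later.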
Results: Different feature extractors perform comparably, with the overall area under the receiver operating characteristic curve (AUC) exceeding 85% for each dataset and approaching 95% for the combined multi-site dataset. Using the institution-specific feature extractors generally yields the best overall prediction, with both PCNSL and GBM classification accuracies reaching 80% for each dataset.
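The AUC figures quoted here have a simple probabilistic reading: the chance that a randomly chosen slide of one class is scored above a randomly chosen slide of the other. A minimal sketch of that rank-based computation, with illustrative labels and scores (the actual model scores are not shown in the abstract):

```python
def roc_auc(labels, scores):
    """AUC via the Mann-Whitney U statistic: the probability that a random
    positive example outranks a random negative one (ties count one half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))
```

An AUC near 95% therefore means a randomly drawn slide pair is correctly ranked about 19 times out of 20.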
Conclusions: The excellent classification performance suggests that our approach can serve as an assistive tool, reducing pathologists' workload by providing an accurate and objective second diagnosis. Moreover, the discriminant regions indicated by the generated attention heatmaps improve model interpretability and provide additional diagnostic information.
Full text: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11724367 (PMC) | http://dx.doi.org/10.1117/1.JMI.12.1.017502 (DOI)
Sensors (Basel)
January 2025
Department of Software Convergence, Soonchunhyang University, Asan 31538, Republic of Korea.
The Transformer model has received significant attention in Human Activity Recognition (HAR) due to its self-attention mechanism, which captures long-range dependencies in time series. However, for Inertial Measurement Unit (IMU) sensor time-series signals, the Transformer model does not effectively exploit the strong, complex temporal correlations known a priori to be present. We therefore propose using multi-layer convolutional layers as a Convolutional Feature Extractor Block (CFEB).
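The idea of stacking convolutional layers ahead of self-attention can be sketched in miniature. This is an assumption-laden toy (the function names, kernel values, and single-channel signal are illustrative, not the CFEB architecture): each conv layer summarizes local temporal neighborhoods before any attention is applied.

```python
def conv1d(signal, kernel, stride=1):
    """Valid 1-D convolution (cross-correlation, as in deep learning)."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(0, len(signal) - k + 1, stride)]

def conv_feature_block(signal, kernels):
    """Stack of conv layers with ReLU: each layer shortens the sequence and
    encodes local temporal correlations into the features that a downstream
    self-attention layer would consume."""
    out = signal
    for kernel in kernels:
        out = [max(0.0, v) for v in conv1d(out, kernel)]  # conv + ReLU
    return out
```

In a real model the kernels are learned and applied per channel; the point is only that local structure is baked into the features before the Transformer sees them.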
Adv Sci (Weinh)
January 2025
Computational Science Research Center, Korea Institute of Science and Technology, Seoul, 02792, Republic of Korea.
Efficiently extracting data from tables in the scientific literature is pivotal for building large-scale databases. However, tables reported in materials science papers take highly diverse forms, so rule-based extraction is an ineffective approach. To overcome this challenge, this study presents MaTableGPT, a GPT-based table data extractor for the materials science literature.
J Imaging
January 2025
Department of Computer Science, Toronto Metropolitan University, Toronto, ON M5B 2K3, Canada.
The safety and efficiency of assembly lines are critical to manufacturing, but human supervisors cannot oversee all activities simultaneously. This study addresses the challenge through a comparative study that constructs an initial real-time, semi-supervised temporal action recognition setup for monitoring worker actions on assembly lines. Various feature extractors and localization models were benchmarked using a new assembly dataset, with the I3D model achieving an average mAP@IoU=0.
Front Neurosci
January 2025
School of Data Science, Lingnan University, Hong Kong SAR, China.
Accurate monitoring of drowsy driving through electroencephalography (EEG) can effectively reduce traffic accidents. Developing a calibration-free drowsiness detection system using single-channel EEG alone is very challenging due to the non-stationarity of EEG signals, the heterogeneity among individuals, and the relatively limited information available compared to multi-channel EEG. Although deep learning-based approaches can effectively decode EEG signals, most deep learning models lack interpretability due to their black-box nature.
Med Image Anal
January 2025
Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China; Medical Research Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Southern Medical University, Guangzhou, China. Electronic address:
Deep multiple instance learning (MIL) pipelines are the mainstream weakly supervised learning methodology for whole slide image (WSI) classification. However, it remains unclear how these widely used approaches compare to each other, given the recent proliferation of foundation models (FMs) for patch-level embedding and the diversity of slide-level aggregations. This paper implemented and systematically compared six FMs and six recent MIL methods by combining different feature extractors and aggregators across seven clinically relevant end-to-end prediction tasks, using WSIs from 4044 patients with four different cancer types.
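The "diversity of slide-level aggregations" being compared spans everything from learned attention to parameter-free pooling. As a point of reference, the two simplest aggregators, which any MIL benchmark implicitly competes against, can be sketched as follows (illustrative pure-Python stand-ins for operations normally done on FM embedding tensors):

```python
def mean_pool(patch_embeddings):
    """Average the fixed patch embeddings into one slide-level vector:
    every patch contributes equally."""
    n = len(patch_embeddings)
    dim = len(patch_embeddings[0])
    return [sum(e[d] for e in patch_embeddings) / n for d in range(dim)]

def max_pool(patch_embeddings):
    """Element-wise maximum over patches: the slide vector reflects the
    strongest evidence found in any single patch."""
    dim = len(patch_embeddings[0])
    return [max(e[d] for e in patch_embeddings) for d in range(dim)]
```

Learned MIL aggregators replace these fixed rules with a trainable weighting of patches, which is precisely the design axis such comparisons vary.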