Eur J Nucl Med Mol Imaging
December 2024
Purpose: Extranodal natural killer/T-cell lymphoma (ENKTCL) is a hematologic malignancy with prognostic heterogeneity. We aimed to develop and validate DeepENKTCL, an interpretable deep learning prediction system for prognostic risk stratification in ENKTCL.
Methods: A total of 562 patients from four centers were divided into training, validation, and test cohorts.
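The cohort split itself is not detailed in this excerpt; below is a minimal sketch of a stratified multi-cohort split. The file name, the `event` column, the ratios, and the random seed are illustrative assumptions, and the actual study may have split by center rather than at random.

```python
# Illustrative only: column names, split ratios, and seed are assumptions,
# not details reported in the paper.
import pandas as pd
from sklearn.model_selection import train_test_split

patients = pd.read_csv("enktcl_patients.csv")  # hypothetical patient-level table

# Hold out patients for validation and testing, stratified by outcome label
# so that event rates stay comparable across cohorts.
train_df, rest_df = train_test_split(
    patients, test_size=0.4, stratify=patients["event"], random_state=42
)
val_df, test_df = train_test_split(
    rest_df, test_size=0.5, stratify=rest_df["event"], random_state=42
)
print(len(train_df), len(val_df), len(test_df))
```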
Accurate segmentation of multiple organs in the head, neck, chest, and abdomen from medical images is an essential step in computer-aided diagnosis, surgical navigation, and radiation therapy. In recent years, with data-driven feature extraction and end-to-end training, automatic deep learning-based multi-organ segmentation methods have far outperformed traditional methods and become an active research topic. This review systematically summarizes the latest research in this field.
In deep learning-based classification of medical images, trained models are applied to analyze images with the goal of assisting diagnosis and preoperative assessment. Currently, most research classifies normal and cancer cells by feeding single-parameter images into trained models. However, for ovarian cancer (OC), identifying its different subtypes is crucial for predicting disease prognosis.
Background: Whole Slide Image (WSI) analysis, driven by deep learning algorithms, has the potential to revolutionize tumor detection, classification, and treatment response prediction. However, challenges persist, such as limited model generalizability across various cancer types, the labor-intensive nature of patch-level annotation, and the necessity of integrating multi-magnification information to attain a comprehensive understanding of pathological patterns.
Methods: In response to these challenges, we introduce MAMILNet, an innovative multi-scale attentional multi-instance learning framework for WSI analysis.
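MAMILNet's implementation is not given here; the following is a minimal sketch of the attention-based multi-instance pooling idea that such frameworks build on (in the spirit of Ilse et al.'s attention MIL), not the authors' code. Feature dimensions, class count, and bag size are placeholders.

```python
import torch
import torch.nn as nn

class AttentionMILPooling(nn.Module):
    """Aggregate a bag of patch embeddings into one slide-level prediction."""

    def __init__(self, feat_dim=512, hidden_dim=128, num_classes=2):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 1),
        )
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, patch_feats):           # (num_patches, feat_dim)
        scores = self.attention(patch_feats)  # (num_patches, 1)
        weights = torch.softmax(scores, dim=0)
        slide_feat = (weights * patch_feats).sum(dim=0)  # weighted mean over patches
        return self.classifier(slide_feat), weights.squeeze(-1)

# Usage: patch features from any frozen encoder, one bag per WSI (placeholder data).
bag = torch.randn(1000, 512)
logits, attn = AttentionMILPooling()(bag)
```

The attention weights double as a patch-level interpretability map, which is why this pooling style is popular when only slide-level labels are available.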
Unsupervised domain adaptation (UDA) aims to train a model on a labeled source domain and adapt it to an unlabeled target domain. In the medical image segmentation field, most existing UDA methods rely on adversarial learning to address the domain gap between different image modalities. However, this process is complicated and inefficient.
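For context on the adversarial approach this abstract contrasts itself with, below is a minimal gradient-reversal sketch in the style of DANN; it is not the method proposed in the paper, and all dimensions are placeholders.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass, negated (scaled) gradient on the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

# A domain discriminator trained on reversed features pushes the encoder
# toward domain-invariant representations.
discriminator = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 1))
features = torch.randn(8, 256, requires_grad=True)  # encoder output, placeholder
domain_logits = discriminator(GradReverse.apply(features, 1.0))
```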
IEEE J Biomed Health Inform
February 2024
Computer-aided diagnosis of chest X-ray (CXR) images can help reduce the heavy workload of radiologists and mitigate inter-observer variability in large-scale early disease screening. Recently, most state-of-the-art studies employ deep learning techniques to address this problem through multi-label classification. However, existing methods still suffer from low classification accuracy and poor interpretability for each diagnostic task.
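The multi-label formulation these studies share is one sigmoid output per finding trained with binary cross-entropy; a minimal sketch follows. The backbone, the count of 14 findings, and the batch shapes are illustrative assumptions, not details from this particular paper.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_FINDINGS = 14  # e.g. a ChestX-ray14-style label set; placeholder count

# Reuse a standard backbone and replace the head with one logit per finding.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, NUM_FINDINGS)

criterion = nn.BCEWithLogitsLoss()    # independent binary decision per finding

images = torch.randn(4, 3, 224, 224)  # placeholder CXR batch
targets = torch.randint(0, 2, (4, NUM_FINDINGS)).float()
loss = criterion(model(images), targets)
probs = torch.sigmoid(model(images))  # per-finding probabilities at inference
```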
Histopathological images contain abundant phenotypic information and pathological patterns, which are the gold standard for disease diagnosis and essential for predicting patient prognosis and treatment outcome. In recent years, computer-automated analysis techniques for histopathological images have become urgently needed in clinical practice, and deep learning methods represented by convolutional neural networks have gradually become the mainstream in digital pathology. However, obtaining large amounts of fine-grained annotated data in this field is an expensive and difficult task, which hinders the further development of traditional supervised algorithms that depend on large annotated datasets.
Intravenous thrombolysis is the most commonly used drug therapy for patients with acute ischemic stroke, but it is often accompanied by the complication of hemorrhagic transformation (HT). This study aimed to build a reliable model for pretreatment prediction of HT. Specifically, 5400 radiomics features were extracted from 20 regions of interest (ROIs) in multiparametric MRI images of 71 patients.
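As an illustration of ROI-wise radiomics extraction, a minimal PyRadiomics sketch is shown below; the file paths, settings, and the assumption that the 20 ROIs are stored as labels 1-20 in a single mask are hypothetical, not the study's actual pipeline.

```python
# Illustrative only: paths, extractor settings, and the ROI labeling scheme
# are assumptions, not the study's reported configuration.
from radiomics import featureextractor

extractor = featureextractor.RadiomicsFeatureExtractor(binWidth=25)

all_features = {}
for roi_label in range(1, 21):  # assume 20 ROIs encoded as labels 1..20 in one mask
    features = extractor.execute(
        "patient01_dwi.nii.gz",   # one MRI parameter map (placeholder path)
        "patient01_rois.nii.gz",  # multi-label ROI mask (placeholder path)
        label=roi_label,
    )
    # Keep only the computed feature values, dropping the diagnostics metadata.
    all_features[roi_label] = {k: v for k, v in features.items()
                               if not k.startswith("diagnostics_")}
```

Repeating this over each MRI parameter map and each ROI is how feature counts on the order of thousands per patient arise before feature selection and modeling.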