In acute ischaemic stroke, identifying brain tissue at high risk of infarction is important for clinical decision-making. This tissue may be identified with suitable classification methods from magnetic resonance imaging data. The aim of the present study was to assess and compare the performance of five popular classification methods (adaptive boosting, logistic regression, artificial neural networks, random forest and support vector machine) in identifying tissue at high risk of infarction on human voxel-based brain imaging data. The classification methods were used with eight MRI parameters, including diffusion-weighted imaging and perfusion-weighted imaging, obtained in 55 patients. The five criteria used to assess the performance of the methods were the area under the receiver operating characteristic curve (AUC-ROC), the area under the precision-recall curve (AUC-PR), sensitivity, specificity and the Dice coefficient. The methods performed equally in terms of sensitivity and specificity, while the results of AUC-ROC and the Dice coefficient were significantly better for adaptive boosting, logistic regression, artificial neural networks and random forest. However, there was no statistically significant difference between the performances of the five classification methods regarding AUC-PR, which was the main comparison metric. Machine learning methods can provide valuable prognostic information from multimodal imaging data in acute ischaemic stroke, which in turn can assist clinicians in making personalized treatment decisions, after thorough validation of the methods on an independent data set.
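
Three of the five evaluation criteria named above (sensitivity, specificity and the Dice coefficient) follow directly from a voxel-wise confusion matrix. A minimal sketch in plain Python; the toy label vectors are assumptions for illustration, not data from the study:

```python
def confusion(y_true, y_pred):
    """Voxel-wise confusion counts for binary infarction labels (1 = infarct)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

def sensitivity(tp, tn, fp, fn):
    # Fraction of truly infarcted voxels the classifier found.
    return tp / (tp + fn)

def specificity(tp, tn, fp, fn):
    # Fraction of healthy voxels correctly left out of the predicted lesion.
    return tn / (tn + fp)

def dice(tp, tn, fp, fn):
    # Dice coefficient: overlap between predicted and true infarct masks.
    return 2 * tp / (2 * tp + fp + fn)

y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
tp, tn, fp, fn = confusion(y_true, y_pred)
print(sensitivity(tp, tn, fp, fn))  # 0.75
print(specificity(tp, tn, fp, fn))  # 0.75
print(dice(tp, tn, fp, fn))         # 0.75
```

AUC-ROC and AUC-PR are computed from the continuous classifier scores rather than from a single thresholded mask, which is why they can separate methods that look identical on sensitivity and specificity alone.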

Source: http://dx.doi.org/10.1111/ejn.14507


Similar Publications

Syphilis-positive and false-positive trends among US blood donors, 2013-2023.

Transfusion

January 2025

Infectious Disease Consultant, North Potomac, Maryland, USA.

Background: US blood donors are tested for syphilis because the bacterial agent is transfusion transmissible. Here we describe trends over an 11-year period of donations positive for recent and past syphilis infections, and donations classified as syphilis false positive (FP).

Methods: Data from January 1, 2013, to December 31, 2023 (11 years) were compiled for all American Red Cross blood donations to evaluate demographics/characteristics and longitudinal trends in donors testing syphilis reactive/positive.


In Vivo Confocal Microscopy for Automated Detection of Meibomian Gland Dysfunction: A Study Based on Deep Convolutional Neural Networks.

J Imaging Inform Med

January 2025

Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, National Clinical Research Center for Eye Disease, Shanghai, 200080, China.

The objectives of this study are to construct a deep convolutional neural network (DCNN) model to diagnose and classify meibomian gland dysfunction (MGD) based on the in vivo confocal microscope (IVCM) images and to evaluate the performance of the DCNN model and its auxiliary significance for clinical diagnosis and treatment. We extracted 6643 IVCM images from the three hospitals' IVCM database as the training set for the DCNN model and 1661 IVCM images from the other two hospitals' IVCM database as the test set to examine the performance of the model. Construction of the DCNN model was performed using DenseNet-169.


Multi-class Classification of Retinal Eye Diseases from Ophthalmoscopy Images Using Transfer Learning-Based Vision Transformers.

J Imaging Inform Med

January 2025

College of Engineering, Department of Computer Engineering, Koç University, Rumelifeneri Yolu, 34450, Sarıyer, Istanbul, Turkey.

This study explores a transfer learning approach with vision transformers (ViTs) and convolutional neural networks (CNNs) for classifying retinal diseases, specifically diabetic retinopathy, glaucoma, and cataracts, from ophthalmoscopy images. Using a balanced subset of 4217 images and ophthalmology-specific pretrained ViT backbones, this method demonstrates significant improvements in classification accuracy, offering potential for broader applications in medical imaging. Glaucoma, diabetic retinopathy, and cataracts are common eye diseases that can cause vision loss if not treated.


In this paper, we propose a method to address the class imbalance learning in the classification of focal liver lesions (FLLs) from abdominal CT images. Class imbalance is a significant challenge in medical image analysis, making it difficult for machine learning models to learn to classify them accurately. To overcome this, we propose a class-wise combination of mixture-based data augmentation (CCDA) method that uses two mixture-based data augmentation techniques, MixUp and AugMix.
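The abstract does not spell out how CCDA combines the two techniques, but its MixUp ingredient is standard: each synthetic sample is a convex combination of two images and their one-hot labels. A minimal NumPy sketch with toy data (the 8x8 "slices", the labels and the fixed seed are illustrative assumptions):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.4, rng=None):
    """MixUp: convex combination of two samples and their one-hot labels.

    Pairing a minority-class image with a majority-class one synthesizes
    in-between training examples, which is what makes MixUp useful for
    class-imbalanced data.
    """
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)          # mixing weight drawn from Beta(a, a)
    x = lam * x1 + (1 - lam) * x2
    y = lam * y1 + (1 - lam) * y2
    return x, y, lam

# Toy "CT slices" (8x8) and one-hot labels for two lesion classes.
a = np.ones((8, 8)); b = np.zeros((8, 8))
ya = np.array([1.0, 0.0]); yb = np.array([0.0, 1.0])
x, y, lam = mixup(a, ya, b, yb)
assert np.allclose(x, lam * a)   # mixed image interpolates the two inputs
assert np.isclose(y.sum(), 1.0)  # mixed label is still a distribution
```

AugMix instead chains several label-preserving augmentations and mixes the results with the original image, keeping the label unchanged.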


Unveiling the role of PANoptosis-related genes in breast cancer: an integrated study by multi-omics analysis and machine learning algorithms.

Breast Cancer Res Treat

January 2025

Department of Breast Surgery, Thyroid Surgery, Huangshi Central Hospital, Affiliated Hospital of Hubei Polytechnic University, No.141, Tianjin Road, Huangshi, 435000, Hubei, China.

Background: The heterogeneity of breast cancer (BC) necessitates the identification of novel subtypes and prognostic models to enhance patient stratification and treatment strategies. This study aims to identify novel BC subtypes based on PANoptosis-related genes (PRGs) and construct a robust prognostic model to guide individualized treatment strategies.

Methods: The transcriptome data along with clinical data of BC patients were sourced from the TCGA and GEO databases.

