As many as 80% of critically ill patients develop delirium, which increases the need for institutionalization as well as morbidity and mortality. Clinicians detect fewer than 40% of delirium cases when using a validated screening tool. EEG is the criterion standard for delirium detection but is resource intensive and therefore not feasible for widespread delirium monitoring. This proof-of-concept study used a prospective design to evaluate supervised deep learning with a vision transformer applied to limited-lead, rapid-response EEG for predicting delirium in mechanically ventilated, critically ill older adults. Fifteen different models were analyzed. Using all available data, the vision transformer models achieved greater than 99.9% training accuracy and 97% testing accuracy across models. A vision transformer with rapid-response EEG can predict delirium, and such monitoring is feasible in critically ill older adults. This method therefore has strong potential to improve the accuracy of delirium detection, providing greater opportunity for individualized interventions. Such an approach may shorten hospital length of stay, increase discharge to home, decrease mortality, and reduce the financial burden associated with delirium.
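
To illustrate the kind of pipeline the abstract describes, the sketch below fine-tunes an off-the-shelf vision transformer (ViT-B/16 from torchvision) to classify EEG-derived images as delirium versus no delirium. This is a minimal, hypothetical example rather than the authors' implementation: it assumes the limited-lead EEG segments have already been converted into 3-channel 224x224 images (for example, time-frequency plots), and the dataset, labels, and hyperparameters shown are placeholders.

```python
# Minimal sketch (not the authors' code): fine-tune a vision transformer to
# classify EEG-derived images as delirium (1) vs. no delirium (0).
# Assumes EEG segments were already rendered as 3x224x224 image tensors;
# the random tensors below are hypothetical stand-ins for that data.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision.models import vit_b_16

# Hypothetical stand-in data: 16 EEG-derived images with binary labels.
images = torch.randn(16, 3, 224, 224)
labels = torch.randint(0, 2, (16,))
loader = DataLoader(TensorDataset(images, labels), batch_size=4, shuffle=True)

# Standard ViT-B/16 backbone; swap the classification head for a 2-class output.
model = vit_b_16(weights=None)
model.heads.head = nn.Linear(model.heads.head.in_features, 2)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# Short illustrative training loop (one pass over the placeholder data).
model.train()
for x, y in loader:
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()

# Inference: predicted probability of delirium for one EEG-derived image.
model.eval()
with torch.no_grad():
    prob_delirium = torch.softmax(model(images[:1]), dim=1)[0, 1].item()
print(f"Predicted delirium probability: {prob_delirium:.2f}")
```

In practice the images, labels, train/test split, and the fifteen model variants reported in the study would come from the recorded rapid-response EEG data rather than the random tensors used here.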

Source
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10188429 (PMC)
http://dx.doi.org/10.1038/s41598-023-35004-y (DOI Listing)

Publication Analysis

Top Keywords

vision transformer (20)
supervised deep (12)
deep learning (12)
critically ill (12)
rapid-response eeg (12)
delirium (9)
learning vision (8)
delirium monitoring (8)
transformer rapid-response (8)
predicting delirium (8)

Similar Publications

Gastrointestinal (GI) disease examination presents significant challenges to doctors due to the intricate structure of the human digestive system. Colonoscopy and wireless capsule endoscopy are the most commonly used tools for GI examination. However, the large amount of data generated by these technologies requires the expertise and intervention of doctors for disease identification, making manual analysis a very time-consuming task.

DNA methylation (DNAm) is a key epigenetic mark that shows profound alterations in cancer. Read-level methylomes enable more in-depth analyses, due to their broad genomic coverage and preservation of rare cell-type signals, compared to summarized data such as 450K/EPIC microarrays. Here, we propose MethylBERT, a Transformer-based model for read-level methylation pattern classification.

ResViT FusionNet Model: An explainable AI-driven approach for automated grading of diabetic retinopathy in retinal images.

Comput Biol Med

January 2025

Department of Creative Technologies, Air University, Islamabad, 44000, Pakistan.

Background And Objective: Diabetic Retinopathy (DR) is a serious diabetes complication that can cause blindness if not diagnosed in its early stages. Manual diagnosis by ophthalmologists is labor-intensive and time-consuming, particularly in overburdened healthcare systems. This highlights the need for automated, accurate, and personalized machine learning approaches for early DR detection and treatment.

An adversarial transformer for anomalous Lamb wave pattern detection.

Neural Netw

January 2025

Department of Mechanical Engineering, University of South Carolina, Columbia, SC 29208, USA.

Lamb waves are widely used for defect detection in structural health monitoring, and various methods have been developed for Lamb wave data analysis. This paper presents an unsupervised adversarial Transformer model for anomalous Lamb wave pattern detection that analyzes the spatiotemporal images generated by a hybrid PZT-scanning laser Doppler vibrometer (SLDV). The model includes global and local attention mechanisms, both trained adversarially.

The Segment Anything Model (SAM) is a powerful vision foundation model that is revolutionizing the traditional paradigm of segmentation. Despite this, its reliance on prompting each frame and its large computational cost limit its usage in robotically assisted surgery. Applications such as augmented reality guidance require minimal user intervention and efficient inference to be clinically usable.
