Sleep-stage classification is essential for sleep research. Various automatic scoring programs, including deep-learning algorithms based on artificial intelligence (AI), have been developed, but they have limitations with regard to data-format compatibility, human interpretability, cost, and technical requirements. We developed a novel program called GI-SleepNet, a generative adversarial network (GAN)-assisted image-based sleep-staging program for mice that is accurate, versatile, compact, and easy to use. In this program, electroencephalogram and electromyography data are first visualized as images and then classified into three stages (wake, NREM, and REM) by a supervised image-learning algorithm. To increase accuracy, we adopted a GAN and artificially generated fake REM-sleep data to equalize the number of epochs per stage. This improved accuracy, and data from as few as one mouse yielded significant accuracy. Because of its image-based nature, the program is easy to apply to data of different formats, to different animal species, and even to fields outside sleep research. Image data are easy to interpret, so expert confirmation can readily be obtained even when predictions are anomalous. Moreover, because image processing is one of the leading fields of deep learning in AI, numerous algorithms are also available.
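The pipeline in the abstract, plotting EEG/EMG epochs as images and balancing stage counts before training, can be sketched as follows. This is a minimal NumPy-only illustration, not GI-SleepNet's actual code: the function names are hypothetical, and simple duplication of minority-class images stands in for the paper's GAN-generated fake REM data.

```python
import numpy as np

STAGES = ("wake", "NREM", "REM")

def epoch_to_image(eeg, emg, height=64):
    """Render one scoring epoch of EEG/EMG as a small grayscale image.

    Each signal is min-max scaled and drawn as a trace in its own half
    of the image, mimicking the plots a human scorer reads.
    """
    eeg, emg = np.asarray(eeg, float), np.asarray(emg, float)
    width = len(eeg)
    img = np.zeros((height, width), dtype=np.uint8)
    half = height // 2
    for row_offset, sig in ((0, eeg), (half, emg)):
        span = sig.max() - sig.min()
        norm = (sig - sig.min()) / span if span else np.zeros_like(sig)
        rows = row_offset + ((1.0 - norm) * (half - 1)).astype(int)
        img[rows, np.arange(width)] = 255
    return img

def balance_by_oversampling(images, labels, rng=None):
    """Equalize stage counts by resampling minority classes.

    Stand-in for the paper's GAN step: instead of generating fake REM
    images, duplicate existing ones until all stages are equally common.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    labels = np.asarray(labels)
    target = max(int((labels == s).sum()) for s in STAGES)
    out_imgs, out_labels = list(images), list(labels)
    for s in STAGES:
        idx = np.flatnonzero(labels == s)
        extra = rng.choice(idx, size=target - len(idx), replace=True)
        out_imgs.extend(images[i] for i in extra)
        out_labels.extend([s] * len(extra))
    return out_imgs, out_labels
```

The balanced image/label lists would then feed any off-the-shelf supervised image classifier, which is the versatility the abstract emphasizes.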

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8628800
DOI: http://dx.doi.org/10.3390/clockssleep3040041

Similar Publications

Predicting transcriptional changes induced by molecules with MiTCP.

Brief Bioinform

November 2024

Department of Automation, School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, 800 Dongchuan Road, Minhang District, Shanghai 200240, China.

Studying the changes in cellular transcriptional profiles induced by small molecules can significantly advance our understanding of cellular state alterations and response mechanisms under chemical perturbations, which plays a crucial role in drug discovery and screening processes. Considering that experimental measurements need substantial time and cost, we developed a deep learning-based method called Molecule-induced Transcriptional Change Predictor (MiTCP) to predict changes in transcriptional profiles (CTPs) of 978 landmark genes induced by molecules. MiTCP utilizes graph neural network-based approaches to simultaneously model molecular structure representation and gene co-expression relationships, and integrates them for CTP prediction.
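The MiTCP summary above centers on graph neural networks that embed a molecule's atom graph. As a rough, NumPy-only illustration of one graph-convolution step and mean-pooling into a molecule vector (not the authors' implementation, which uses dedicated GNN frameworks; the function names here are hypothetical):

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One graph-convolution step: self-loops, row-normalized
    neighbor averaging, linear map, then ReLU."""
    a_hat = adj + np.eye(adj.shape[0])   # add self-loops
    deg = a_hat.sum(axis=1)
    norm = a_hat / deg[:, None]          # row-normalize adjacency
    return np.maximum(norm @ feats @ weight, 0.0)

def molecule_embedding(adj, feats, weights):
    """Stack graph-convolution layers, then mean-pool atom features
    into a single fixed-size molecule vector."""
    h = feats
    for w in weights:
        h = gcn_layer(adj, h, w)
    return h.mean(axis=0)
```

A vector produced this way could be concatenated with gene co-expression features and passed to a regression head that outputs the predicted transcriptional changes, which is the general shape of the approach the snippet describes.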

Purpose: The purpose of this study was to develop and validate a deep-learning model for noninvasive anemia detection, hemoglobin (Hb) level estimation, and identification of anemia-related retinal features using fundus images.

Methods: The dataset included 2265 participants aged 40 years and above from a population-based study in South India. The dataset included ocular and systemic clinical parameters, dilated retinal fundus images, and hematological data such as complete blood counts and Hb concentration levels.

Purpose: The purpose of this study was to develop a deep learning approach that restores artifact-laden optical coherence tomography (OCT) scans and predicts functional loss on the 24-2 Humphrey Visual Field (HVF) test.

Methods: This cross-sectional, retrospective study used 1674 visual field (VF)-OCT pairs from 951 eyes for training and 429 pairs from 345 eyes for testing. Peripapillary retinal nerve fiber layer (RNFL) thickness map artifacts were corrected using a generative diffusion model.

Looking at the world often involves not just seeing things, but feeling things. Modern feedforward machine vision systems that learn to perceive the world in the absence of active physiology, deliberative thought, or any form of feedback that resembles human affective experience offer tools to demystify the relationship between seeing and feeling, and to assess how much of visually evoked affective experiences may be a straightforward function of representation learning over natural image statistics. In this work, we deploy a diverse sample of 180 state-of-the-art deep neural network models trained only on canonical computer vision tasks to predict human ratings of arousal, valence, and beauty for images from multiple categories (objects, faces, landscapes, art) across two datasets.

Pancreatic neuroendocrine tumors (PanNETs) are a heterogeneous group of neoplasms that include tumors with different histomorphologic characteristics that can be correlated to sub-categories with different prognoses. In addition to the WHO grading scheme based on tumor proliferative activity, a new parameter based on the scoring of infiltration patterns at the interface of tumor and non-neoplastic parenchyma (tumor-NNP interface) has recently been proposed for PanNET categorization. Despite the known correlations, these categorizations can still be problematic due to the need for human judgment, which may involve intra- and inter-observer variability.
