Current web resources provide few user-friendly tools for computing spectrograms to visualize and quantify electroencephalographic (EEG) data. This paper describes a Windows-based, open-source program for creating EEG multitaper spectrograms. The compiled program is accessible to Windows users without a software license; for Macintosh users, the program requires a MATLAB license. The program is illustrated with EEG spectrograms that vary as a function of states of sleep and wakefulness, and with opiate-induced alterations in those states. The EEGs of C57BL/6J mice were wirelessly recorded for 4 h after intraperitoneal injection of saline (vehicle control) and antinociceptive doses of morphine, buprenorphine, and fentanyl. Spectrograms showed that buprenorphine and morphine caused similar changes in EEG power at 1-3 Hz and 8-9 Hz. Spectrograms after administration of fentanyl revealed maximal average power bands at 3 Hz and 7 Hz. The spectrograms unmasked differential opiate effects on EEG frequency and power. These computer-based methods are generalizable across drug classes and can be readily modified to quantify and display a wide range of rhythmic biological signals.
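The paper's compiled program and MATLAB source are not reproduced here, but the multitaper approach it describes can be sketched in a few lines: each sliding window of the signal is multiplied by several discrete prolate spheroidal sequence (DPSS) tapers, and the per-taper power spectra are averaged to reduce variance. The following Python sketch (window length, step, and time-bandwidth parameters are illustrative choices, not the authors' settings) shows the idea using SciPy's DPSS windows:

```python
import numpy as np
from scipy.signal.windows import dpss

def multitaper_spectrogram(x, fs, window_s=4.0, step_s=1.0, nw=3.0, n_tapers=5):
    """Multitaper spectrogram: average DPSS-tapered periodograms per window.

    Returns (freqs, times, spec) where spec has shape (n_freqs, n_windows).
    Parameter defaults are illustrative, not taken from the paper.
    """
    n_win = int(window_s * fs)
    n_step = int(step_s * fs)
    tapers = dpss(n_win, nw, Kmax=n_tapers)           # shape (n_tapers, n_win)
    freqs = np.fft.rfftfreq(n_win, d=1.0 / fs)
    starts = list(range(0, len(x) - n_win + 1, n_step))
    spec = np.empty((len(freqs), len(starts)))
    for j, s in enumerate(starts):
        seg = x[s:s + n_win]
        # Power spectrum of each tapered copy, averaged across tapers
        p = np.abs(np.fft.rfft(tapers * seg, axis=1)) ** 2
        spec[:, j] = p.mean(axis=0) / fs
    times = (np.array(starts) + n_win / 2) / fs       # window-center times (s)
    return freqs, times, spec

# Example: a 5 Hz test tone should yield a spectral peak near 5 Hz
fs = 100.0
t = np.arange(0, 30, 1.0 / fs)
freqs, times, spec = multitaper_spectrogram(np.sin(2 * np.pi * 5 * t), fs)
```

Averaging across orthogonal tapers trades a small loss of frequency resolution (set by the time-bandwidth product `nw`) for a substantial reduction in estimator variance, which is what makes multitaper spectrograms well suited to noisy EEG.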

Source: http://dx.doi.org/10.3791/60333

Similar Publications

Elephant Sound Classification Using Deep Learning Optimization.

Sensors (Basel)

January 2025

School of Computer Science and Informatics, Cardiff University, Cardiff CF24 3AA, UK.

Elephant sound identification is crucial in wildlife conservation and ecological research. Identifying elephant vocalizations provides insights into behavior, social dynamics, and emotional expression, supporting elephant conservation. This study addresses elephant sound classification using raw audio processing.


Alzheimer's disease (AD) is a progressive neurodegenerative disorder that poses critical challenges in global healthcare due to its increasing prevalence and severity. Diagnosing AD and other dementias, such as frontotemporal dementia (FTD), is slow and resource-intensive, underscoring the need for automated approaches. To address this gap, this study proposes a novel deep learning methodology for EEG classification of AD, FTD, and control (CN) signals.


Diabetes is a chronic condition, and traditional monitoring methods are invasive, significantly reducing the quality of life of the patients. This study proposes the design of an innovative system based on a microcontroller that performs real-time ECG acquisition and evaluates the presence of diabetes using an Edge-AI solution. A spectrogram-based preprocessing method is combined with a 1-Dimensional Convolutional Neural Network (1D-CNN) to analyze the ECG signals directly on the device.


Speech Enhancement for Cochlear Implant Recipients using Deep Complex Convolution Transformer with Frequency Transformation.

IEEE/ACM Trans Audio Speech Lang Process

February 2024

CRSS: Center for Robust Speech Systems; Cochlear Implant Processing Laboratory (CILab), Department of Electrical and Computer Engineering, University of Texas at Dallas, USA.

The presence of background noise or competing talkers is one of the main challenges to speech understanding for cochlear implant (CI) users in naturalistic spaces. These external factors distort the time-frequency (T-F) content of speech signals, including the magnitude spectrum and phase. While most existing speech enhancement (SE) solutions focus solely on enhancing the magnitude response, recent research highlights the importance of phase in perceptual speech quality.


Transformer-based neural speech decoding from surface and depth electrode signals.

J Neural Eng

January 2025

Electrical and Computer Engineering Department, New York University, 370 Jay Street, Brooklyn, New York, New York, 10012-1126, UNITED STATES.

This study investigates speech decoding from neural signals captured by intracranial electrodes. Most prior works can only work with electrodes on a 2D grid (i.e.

