This study investigated the potential of estimating various mental workload levels during two different tasks using a commercial in-ear electroencephalography (EEG) system, the IDUN 'Guardian'. Participants performed versions of two classical workload tasks: an n-back task and a mental arithmetic task. Both in-ear and conventional EEG data were simultaneously collected during these tasks.
Comput Biol Med
February 2024
Multimodal neuroimaging using electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) provides complementary views of cortical processes, including those related to auditory processing. However, current multimodal approaches often overlook potential insights that can be gained from nonlinear interactions between electrical and hemodynamic signals. Here, we explore electro-vascular phase-amplitude coupling (PAC) between low-frequency hemodynamic and high-frequency electrical oscillations during an auditory task.
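The phase-amplitude coupling analysis described above can be sketched with a standard Tort-style modulation index: the phase of the band-limited slow (hemodynamic) signal and the amplitude envelope of the band-limited fast (electrical) signal are extracted with the Hilbert transform, and the deviation of the phase-binned amplitude distribution from uniformity is measured. This is a generic illustration of the PAC technique, not the study's actual pipeline; the band limits and bin count below are placeholder parameters.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def bandpass(x, fs, lo, hi, order=4):
    # Second-order sections remain numerically stable even for narrow,
    # very-low-frequency bands such as hemodynamic oscillations.
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def modulation_index(slow, fast, fs, phase_band, amp_band, n_bins=18):
    """Tort-style PAC: how strongly the amplitude of the fast signal is
    modulated by the phase of the slow signal (0 = no coupling)."""
    phase = np.angle(hilbert(bandpass(slow, fs, *phase_band)))
    amp = np.abs(hilbert(bandpass(fast, fs, *amp_band)))
    # Mean fast-signal amplitude within each phase bin of the slow signal
    edges = np.linspace(-np.pi, np.pi, n_bins + 1)
    dist = np.array([amp[(phase >= edges[k]) & (phase < edges[k + 1])].mean()
                     for k in range(n_bins)])
    dist /= dist.sum()
    # Normalized KL divergence from a uniform phase-amplitude distribution
    return float(np.sum(dist * np.log(dist * n_bins)) / np.log(n_bins))
```

In an fNIRS setting the hemodynamic phase band would sit well below 0.1 Hz, which requires correspondingly long recordings for stable filtering and phase estimation.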
Automatic wheelchairs directly controlled by brain activity could provide autonomy to severely paralyzed individuals. Current approaches mostly rely on non-invasive measures of brain activity and translate individual commands into wheelchair movements. For example, an imagined movement of the right hand would steer the wheelchair to the right.
Recent studies have demonstrated that it is possible to decode and synthesize various aspects of acoustic speech directly from intracranial measurements of electrophysiological brain activity. To continue progressing toward a practical speech neuroprosthesis for individuals with speech impairments, better understanding and modeling of imagined speech processes are required. The present study uses intracranial brain recordings from participants who performed a speaking task with trials consisting of overt, mouthed, and imagined speech modes, representing various degrees of decreasing behavioral output.
Brain Comput Interfaces (Abingdon)
May 2022
IEEE Trans Neural Syst Rehabil Eng
October 2022
Numerous state-of-the-art solutions for neural speech decoding and synthesis incorporate deep learning into the processing pipeline. These models are typically opaque and can require significant computational resources for training and execution. A deep learning architecture is presented that learns input bandpass filters that capture task-relevant spectral features directly from data.
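One common way to realize such learned input bandpass filters is to parameterize each FIR kernel by just two cutoff frequencies, as in SincNet-style front ends. The sketch below is a generic illustration of that idea under stated assumptions, not the architecture presented in the paper; `f_lo` and `f_hi` stand in for the values a network would tune by gradient descent.

```python
import numpy as np

def sinc_bandpass_kernel(f_lo, f_hi, fs, n_taps=129):
    """FIR bandpass kernel parameterized only by its two cutoff
    frequencies (f_lo, f_hi) -- the quantities a SincNet-style learned
    filterbank would optimize, two parameters per filter."""
    n = np.arange(n_taps) - (n_taps - 1) / 2
    # A bandpass response is the difference of two ideal low-pass sincs,
    # tapered by a Hamming window to control spectral leakage.
    h = (2 * f_hi / fs) * np.sinc(2 * f_hi * n / fs) \
        - (2 * f_lo / fs) * np.sinc(2 * f_lo * n / fs)
    return h * np.hamming(n_taps)
```

Because the kernel is a differentiable function of its two cutoffs, the learned band edges can be read off directly after training, which is one way such models stay compact and interpretable.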
Annu Int Conf IEEE Eng Med Biol Soc
July 2022
Recent studies have shown it is possible to decode and synthesize speech directly using brain activity recorded from implanted electrodes. While this activity has been extensively examined using electrocorticographic (ECoG) recordings from cortical surface grey matter, stereotactic electroencephalography (sEEG) provides comparatively broader coverage and access to deeper brain structures including both grey and white matter. The present study examines the relative and joint contributions of grey and white matter electrodes for speech activity detection in a brain-computer interface.
Brain Comput Interfaces (Abingdon)
February 2022
The Eighth International Brain-Computer Interface (BCI) Meeting was held June 7-9, 2021 in a virtual format. The conference continued the BCI Meeting series' interactive nature with 21 workshops covering topics in BCI (also called brain-machine interface) research. As in the past, workshops covered the breadth of topics in BCI.
Annu Int Conf IEEE Eng Med Biol Soc
November 2021
Neurological disorders can lead to significant impairments in speech communication and, in severe cases, cause the complete loss of the ability to speak. Brain-Computer Interfaces have shown promise as an alternative communication modality by directly transforming the neural activity of speech processes into textual or audible representations. Previous studies investigating such speech neuroprostheses relied on electrocorticography (ECoG) or microelectrode arrays that acquire neural signals from superficial areas on the cortex.
Speech neuroprosthetics aim to provide a natural communication channel to individuals who are unable to speak due to physical or neurological impairments. Real-time synthesis of acoustic speech directly from measured neural activity could enable natural conversations and notably improve quality of life, particularly for individuals who have severely limited means of communication. Recent advances in decoding approaches have led to high quality reconstructions of acoustic speech from invasively measured neural activity.
Rhythmic auditory stimuli are known to elicit matching activity patterns in neural populations. Furthermore, recent research has established the particular importance of high-gamma brain activity in auditory processing by showing its involvement in auditory phrase segmentation and envelope tracking. Here, we use electrocorticographic (ECoG) recordings from eight human listeners to see whether periodicities in high-gamma activity track the periodicities in the envelope of musical rhythms during rhythm perception and imagination.
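The envelope-tracking analysis described above can be sketched as follows: band-limit the signal to the high-gamma range, take its Hilbert amplitude envelope, and locate the dominant spectral peak of that envelope in the slow rhythm range. This is a minimal generic sketch, not the study's analysis pipeline; the band edges and rate limit are placeholder parameters.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def dominant_envelope_rate(x, fs, band=(70.0, 170.0), max_rate=8.0):
    """Dominant periodicity (Hz) of the high-gamma amplitude envelope."""
    # Band-limit to high gamma, then extract the amplitude envelope
    sos = butter(4, list(band), btype="band", fs=fs, output="sos")
    env = np.abs(hilbert(sosfiltfilt(sos, x)))
    env -= env.mean()
    # Spectral peak of the envelope within the slow rhythm range
    spec = np.abs(np.fft.rfft(env))
    freqs = np.fft.rfftfreq(env.size, 1 / fs)
    mask = (freqs > 0.5) & (freqs <= max_rate)
    return float(freqs[mask][np.argmax(spec[mask])])
```

Comparing this rate against the periodicity of the stimulus envelope is one simple way to quantify whether high-gamma activity tracks the rhythm.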
Stereotactic electroencephalography (sEEG) utilizes localized, penetrating depth electrodes to measure electrophysiological brain activity. It is most commonly used in the identification of epileptogenic zones in cases of refractory epilepsy. The implanted electrodes generally provide a sparse sampling of a unique set of brain regions, including deeper brain structures such as the hippocampus, amygdala, and insula, that cannot be captured by superficial measurement modalities such as electrocorticography (ECoG).
Annu Int Conf IEEE Eng Med Biol Soc
July 2019
The integration of electroencephalogram (EEG) sensors into virtual reality (VR) headsets makes it possible to track the user's cognitive state and could eventually be used to increase the sense of immersion. Recent developments in wireless, room-scale VR tracking allow users to move freely in the physical and virtual spaces. Such motion can create significant movement artifacts in EEG sensors mounted to the VR headset.
Annu Int Conf IEEE Eng Med Biol Soc
July 2019
Millions of individuals suffer from impairments that significantly disrupt or completely eliminate their ability to speak. An ideal intervention would restore one's natural ability to physically produce speech. Recent progress has been made in decoding speech-related brain activity to generate synthesized speech.
Annu Int Conf IEEE Eng Med Biol Soc
July 2019
Virtual Reality (VR) has emerged as a novel paradigm for immersive applications in training, entertainment, rehabilitation, and other domains. In this paper, we investigate the automatic classification of mental workload from brain activity measured through functional near-infrared spectroscopy (fNIRS) in VR. We present results from a study which implements the established n-back task in an immersive visual scene, including physical interaction.
Neural interfaces that directly produce intelligible speech from brain activity would allow people with severe impairment from neurological disorders to communicate more naturally. Here, we record neural population activity in motor, premotor and inferior frontal cortices during speech production using electrocorticography (ECoG) and show that ECoG signals alone can be used to generate intelligible speech output that can preserve conversational cues. To produce speech directly from neural data, we adapted a method from the field of speech synthesis called unit selection, in which units of speech are concatenated to form audible output.
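The core of unit selection can be sketched in a few lines: each incoming neural feature frame retrieves the stored speech unit whose associated neural features are closest, and the retrieved units are concatenated into the output waveform. This is a deliberately simplified sketch using only a Euclidean target cost; the actual method also weighs a concatenation cost that penalizes audible discontinuities between successive units.

```python
import numpy as np

def unit_selection(neural_frames, codebook_feats, codebook_units):
    """Concatenative synthesis sketch: for each neural feature frame,
    select the speech unit whose stored neural features are nearest
    (target cost only), then join the selected units."""
    out = []
    for frame in neural_frames:
        # Nearest codebook entry under the Euclidean target cost
        idx = int(np.argmin(np.linalg.norm(codebook_feats - frame, axis=1)))
        out.append(codebook_units[idx])
    # Concatenating the selected waveform units yields the audible output
    return np.concatenate(out)
```

In a real system the codebook pairs neural features with short waveform snippets recorded from the same speaker, which is what lets the output preserve the speaker's own conversational cues.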
Bioenergetic function is characterized using assays obtained with polarographic systems. Analog systems lacking data acquisition, visualization, and processing tools are still used but require cumbersome operations to derive the respiration rate and the ADP-to-oxygen stoichiometry of oxidative phosphorylation (ADP/O ratio). The analog signal of a polarographic system (YSI-5300) was digitized, and a graphical user interface (GUI) was developed in MATLAB to integrate visualization, recording, calibration, and processing of bioenergetic data.
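The two derived quantities named above reduce to simple computations on the digitized oxygen trace: the respiration rate is the negative slope of the O2 signal over the analyzed interval, and the ADP/O ratio is the amount of ADP added divided by the oxygen consumed during the ADP-stimulated (state 3) portion. The sketch below illustrates those calculations generically (the GUI described is in MATLAB; Python is used here only for illustration, and the units are placeholders).

```python
import numpy as np

def respiration_rate(o2, t):
    """Respiration rate as the negative least-squares slope of the
    digitized O2 trace over the analyzed time window."""
    return -np.polyfit(t, o2, 1)[0]

def adp_o_ratio(adp_added, o2_consumed):
    """ADP/O: ADP phosphorylated per unit oxygen consumed during the
    ADP-stimulated (state 3) segment of the trace."""
    return adp_added / o2_consumed
```

Digitizing the analog signal is what makes these fits and ratios a one-click operation instead of manual ruler work on a chart recording.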
With the recent surge of affordable, high-performance virtual reality (VR) headsets, there is enormous potential for applications ranging from education and training to entertainment, fitness, and beyond. As these interfaces continue to evolve, passive user-state monitoring can play a key role in expanding the immersive VR experience and tracking activity for user well-being. By recording physiological signals such as the electroencephalogram (EEG) during use of a VR device, the user's interactions in the virtual environment could be adapted in real-time based on the user's cognitive state.
Objective: Brain-computer interface (BCI) technology enables people to use direct measures of brain activity for communication and control. The National Center for Adaptive Neurotechnologies and Helen Hayes Hospital are studying long-term independent home use of P300-based BCIs by people with amyotrophic lateral sclerosis (ALS). This BCI use takes place without technical oversight, and users can encounter substantial variation in their day-to-day BCI performance.
Objective: Direct synthesis of speech from neural signals could provide a fast and natural way of communication to people with neurological diseases. Invasively-measured brain activity (electrocorticography; ECoG) supplies the necessary temporal and spatial resolution to decode fast and complex processes such as speech production. A number of impressive advances in speech decoding using neural signals have been achieved in recent years, but the complex dynamics are still not fully understood.
Inattentional blindness is a failure to notice an unexpected event when attention is directed elsewhere. The current study examined participants' awareness of an unexpected object that maintained luminance contrast, switched the luminance once, or repetitively flashed. One hundred twenty participants performed a dynamic tracking task on a computer monitor for which they were instructed to count the number of movement deflections of an attended set of objects while ignoring other objects.
Annu Int Conf IEEE Eng Med Biol Soc
August 2016
Most current Brain-Computer Interfaces (BCIs) achieve high information transfer rates using spelling paradigms based on stimulus-evoked potentials. Despite the success of these interfaces, this mode of communication can be cumbersome and unnatural. Direct synthesis of speech from neural activity represents a more natural mode of communication that would enable users to convey verbal messages in real-time.