Many individuals with minimal movement capabilities use AAC to communicate. These individuals require both an interface with which to construct a message (e.g., a grid of letters) and an input modality with which to select targets. This study evaluated the interaction of two such systems: (a) an input modality using surface electromyography (sEMG) of spared facial musculature, and (b) an onscreen interface from which users select phonemic targets. These systems were evaluated in two experiments: (a) participants without motor impairments used the systems during a series of eight training sessions, and (b) one individual who uses AAC used the systems for two sessions. Both the phonemic interface and the electromyographic cursor show promise for future AAC applications.
Full text PDF | Source
---|---
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4957551 | PMC
http://dx.doi.org/10.3109/07434618.2016.1170205 | DOI Listing
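To make the electromyographic-cursor idea from the abstract above concrete, the sketch below maps a rectified, smoothed sEMG envelope to one-dimensional cursor velocity. It is a minimal Python illustration only: the smoothing window, resting baseline, and gain are invented placeholders, not parameters from the study.

```python
import numpy as np

def emg_envelope(emg, fs, window_ms=100):
    """Rectify the raw sEMG and smooth it with a moving-average
    window to obtain an amplitude envelope (illustrative values)."""
    rectified = np.abs(emg - np.mean(emg))      # remove DC offset, full-wave rectify
    win = max(1, int(fs * window_ms / 1000))
    kernel = np.ones(win) / win
    return np.convolve(rectified, kernel, mode="same")

def envelope_to_velocity(envelope, rest_level, gain=1.0):
    """Map envelope amplitude above a resting baseline to cursor
    velocity; activity below baseline leaves the cursor still."""
    drive = np.clip(envelope - rest_level, 0.0, None)
    return gain * drive

# Hypothetical usage: 2 s of simulated facial sEMG sampled at 1 kHz,
# with a single muscle contraction between 0.5 s and 1.0 s.
fs = 1000
t = np.arange(0, 2, 1 / fs)
burst = (t > 0.5) & (t < 1.0)
emg = 0.02 * np.random.randn(t.size) + 0.5 * burst * np.random.randn(t.size)
env = emg_envelope(emg, fs)
vel = envelope_to_velocity(env, rest_level=0.05, gain=10.0)
print(f"peak cursor velocity: {vel.max():.2f} (arbitrary units)")
```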
IEEE Trans Neural Syst Rehabil Eng
September 2024
Objective: Speech brain-computer interfaces (speech BCIs), which convert brain signals into spoken words or sentences, have demonstrated great potential for high-performance BCI communication. Phonemes are the basic units of pronunciation. For monosyllabic languages such as Mandarin Chinese, where a word usually contains fewer than three phonemes, accurate phoneme decoding plays a vital role.
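At its core, the system this abstract describes implies a multiclass phoneme decoder. The sketch below shows the generic shape of that step, assuming band-power features and a simple multinomial classifier; the data, feature set, and classifier are placeholders, not the article's method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical data: 200 trials x 64 channels of band-power features,
# each trial labeled with one of 4 phoneme classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))
y = rng.integers(0, 4, size=200)

# A simple multinomial classifier stands in for the (unspecified)
# decoder; a real system would use richer temporal features.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print(f"accuracy on random features (should sit near chance): {scores.mean():.2f}")
```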
bioRxiv
September 2024
Department of Neurological Surgery, University of California Davis, Davis, CA.
Brain computer interfaces (BCIs) have the potential to restore communication to people who have lost the ability to speak due to neurological disease or injury. BCIs have been used to translate the neural correlates of attempted speech into text. However, text communication fails to capture the nuances of human speech, such as prosody and intonation, and does not let users immediately hear their own voice.
Sensors (Basel)
June 2024
Department of Mechanical Engineering, San Diego State University, San Diego, CA 92182, USA.
This study demonstrates the feasibility of a new wireless electroencephalography (EEG)-electromyography (EMG) wearable approach that generates characteristic mixed EEG-EMG patterns during mouth movements, with the goal of detecting distinct movement patterns in people with severe speech impairments. The paper describes a mouth-movement detection method based on a new signal processing technique suited to sensor integration and machine learning applications, and examines the relationship between mouth motion and brain activity in an effort to develop a nonverbal interface for people who have lost the ability to communicate, such as those with paralysis.
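As a rough illustration of separating a mixed EEG-EMG channel into movement-related features, the sketch below band-passes one channel into an EEG-dominated and an EMG-dominated band and summarizes each by its RMS amplitude. The band edges and sampling rate are assumptions for the example, not the paper's pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase Butterworth band-pass filter."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def mixed_features(signal, fs):
    """Split one mixed EEG-EMG channel into an EEG-dominated band
    (8-30 Hz) and an EMG-dominated band (30-150 Hz) and summarize
    each by its RMS amplitude (band edges are assumptions)."""
    eeg_band = bandpass(signal, 8, 30, fs)
    emg_band = bandpass(signal, 30, 150, fs)
    rms = lambda v: float(np.sqrt(np.mean(v ** 2)))
    return rms(eeg_band), rms(emg_band)

# Hypothetical usage: 1 s of simulated mixed signal at 500 Hz.
fs = 500
x = np.random.randn(fs)
print(mixed_features(x, fs))
```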
Neurosurgery
February 2025
Functional Neurosurgery Unit, Tel Aviv Sourasky Medical Center, Tel Aviv, Israel.
Background And Objectives: Loss of speech due to injury or disease is devastating. Here, we report a novel speech neuroprosthesis that artificially articulates building blocks of speech based on high-frequency activity in brain areas never harnessed for a neuroprosthesis before: anterior cingulate and orbitofrontal cortices, and hippocampus.
Methods: A 37-year-old male neurosurgical epilepsy patient with intact speech, implanted with depth electrodes for clinical reasons only, silently controlled the neuroprosthesis almost immediately and in a natural way to voluntarily produce two vowel sounds.
Sci Rep
May 2024
Department of Electronic and Information Engineering, Tokyo University of Agriculture and Technology, 2-24-16 Naka-cho, Koganei-shi, Tokyo, 184-8588, Japan.
Several attempts at speech brain-computer interfacing (BCI) have been made to decode phonemes, sub-words, words, or sentences using invasive measurements, such as the electrocorticogram (ECoG), during auditory speech perception, overt speech, or imagined (covert) speech. Decoding sentences from covert speech is a challenging task. Sixteen epilepsy patients with intracranially implanted electrodes participated in this study, and ECoGs were recorded during overt and covert speech of eight Japanese sentences, each consisting of three tokens.
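One simple way to frame eight-sentence decoding is template matching: assign each covert-speech trial to the sentence whose average feature pattern it best correlates with. The sketch below implements that rule on synthetic high-gamma-like features; it is an illustrative decoding rule, not the decoder used in the study.

```python
import numpy as np

def classify_by_template(trial, templates):
    """Assign a covert-speech trial to the sentence whose average
    feature pattern it correlates with most strongly (illustrative
    decoding rule, not the paper's decoder)."""
    corrs = [np.corrcoef(trial.ravel(), t.ravel())[0, 1] for t in templates]
    return int(np.argmax(corrs))

# Hypothetical data: 8 sentence templates over 96 electrodes x 20 time bins.
rng = np.random.default_rng(1)
templates = rng.normal(size=(8, 96, 20))
trial = templates[3] + 0.5 * rng.normal(size=(96, 20))  # noisy copy of sentence 3
print("decoded sentence index:", classify_by_template(trial, templates))
```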