Publications by authors named "Jonathan S Brumberg"

Augmentative and alternative communication (AAC) techniques can provide access to communication for individuals with severe physical impairments. Brain-computer interface (BCI) access techniques may serve alongside existing AAC access methods to provide communication device control. However, there is limited information available about how individual perspectives change with motor-based BCI-AAC learning.

Purpose: This study investigated whether changes in brain activity preceding spoken words can be used as a neural marker of speech intention. Specifically, changes in the contingent negative variation (CNV) were examined prior to speech production in three different study designs to determine a method that maximizes signal detection in a speaking task. Method: Electroencephalography data were collected in three different protocols to elicit the CNV in a spoken word task that varied the timing and type of linguistic information.
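The CNV is a slow negative potential that builds up between a warning cue and the onset of speech, and it is typically quantified by averaging time-locked EEG epochs and measuring the mean baseline-corrected amplitude in a pre-speech window. A minimal pure-Python sketch of that averaging step (the function names, sample counts, and toy data below are illustrative, not taken from the study):

```python
def grand_average(epochs):
    """Sample-by-sample average of equal-length EEG epochs (lists of voltages)."""
    n = len(epochs)
    return [sum(e[i] for e in epochs) / n for i in range(len(epochs[0]))]

def cnv_amplitude(epochs, baseline_samples, window):
    """Mean baseline-corrected amplitude of the grand average in a pre-speech window.

    baseline_samples: number of initial samples used as the baseline
    window: (start, end) sample indices of the interval preceding speech onset
    """
    avg = grand_average(epochs)
    baseline = sum(avg[:baseline_samples]) / baseline_samples
    start, end = window
    segment = [v - baseline for v in avg[start:end]]
    return sum(segment) / len(segment)

# A clear CNV shows up as a negative value (a drift toward speech onset).
epochs = [[0, 0, -1, -2, -3, -4], [0, 0, -1, -2, -3, -4]]
print(cnv_amplitude(epochs, baseline_samples=2, window=(2, 6)))  # -2.5
```

Real pipelines would also filter the EEG and reject artifact-contaminated epochs before averaging; this sketch shows only the core amplitude measurement.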

Current BCI-AAC systems largely utilize custom-made software and displays that may be unfamiliar to AAC stakeholders. Further, there is limited information available exploring the heterogeneous profiles of individuals who may use BCI-AAC. Therefore, in this study, we aimed to evaluate how individuals with amyotrophic lateral sclerosis (ALS) learned to control a motor-based BCI switch in a row-column AAC scanning pattern, and to identify person-centered factors associated with BCI-AAC performance.

Purpose: Brain-computer interface (BCI) techniques may provide computer access for individuals with severe physical impairments. However, the relatively hidden nature of BCI control obscures how BCI systems work behind the scenes, making it difficult to understand how electroencephalography (EEG) records brain signals and how those signals are targeted for BCI control. Furthermore, in the field of speech-language-hearing, signals targeted for BCI application have been of primary interest to clinicians and researchers in the area of augmentative and alternative communication (AAC).

Millions of individuals suffer from impairments that significantly disrupt or completely eliminate their ability to speak. An ideal intervention would restore one's natural ability to physically produce speech. Recent progress has been made in decoding speech-related brain activity to generate synthesized speech.

Purpose: Speech motor control relies on neural processes for generating sensory expectations using an efference copy mechanism to maintain accurate productions. The N100 auditory event-related potential (ERP) has been identified as a possible neural marker of the efference copy, with a reduced amplitude during active listening while speaking compared to passive listening. This study investigates N100 suppression while participants control a motor imagery speech synthesizer brain-computer interface (BCI) with instantaneous auditory feedback, to determine whether similar mechanisms are used to monitor BCI-based speech output. Such mechanisms may both support BCI learning through existing speech motor networks and serve as a clinical marker of speech network integrity in individuals without severe speech and physical impairments.
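Because the N100 is a negative deflection, "suppression" means the peak is less negative during active listening than during passive playback. One common way to express this is an active-minus-passive amplitude difference in the N100 time window; a hedged sketch (the window indices and toy ERP values are illustrative, not the study's data):

```python
def n100_amplitude(erp, window):
    """The N100 is a negativity: take the most negative value in the window."""
    start, end = window
    return min(erp[start:end])

def n100_suppression(active_erp, passive_erp, window):
    """Active-minus-passive N100 difference; a positive value means the N100
    was reduced (suppressed) while speaking or controlling the synthesizer."""
    return n100_amplitude(active_erp, window) - n100_amplitude(passive_erp, window)

active = [0.0, -2.0, -1.5, 0.0]   # attenuated N100 during active listening
passive = [0.0, -5.0, -3.0, 0.0]  # full N100 during passive listening
print(n100_suppression(active, passive, window=(1, 3)))  # 3.0
```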

Purpose: Brain-computer interfaces (BCIs) aim to provide access to augmentative and alternative communication (AAC) devices via brain activity alone. However, while BCI technology is expanding in the laboratory setting, there is minimal incorporation into clinical practice. Building upon established AAC research and clinical best practices may aid the clinical translation of BCI practice, allowing advancements in both fields to be fully leveraged.

Speech technology applications have emerged as a promising method for assessing speech-language abilities, including prosody, and for at-home therapy. Many applications assume that observed prosody errors are due to an underlying disorder; however, they may instead be due to atypical realizations of prosody, such as immature and still-developing speech motor control, or to compensatory adaptations by those with congenital neuromotor disorders. The result is the same: vocal productions may not be a reliable measure of prosody knowledge.

Purpose: Brain-computer interfaces (BCIs) can provide access to augmentative and alternative communication (AAC) devices using neurological activity alone without voluntary movements. As with traditional AAC access methods, BCI performance may be influenced by the cognitive-sensory-motor and motor imagery profiles of those who use these devices. Therefore, we propose a person-centered, feature matching framework consistent with clinical AAC best practices to ensure selection of the most appropriate BCI technology to meet individuals' communication needs.

We conducted a study of a motor imagery brain-computer interface (BCI) using electroencephalography to continuously control a formant frequency speech synthesizer with instantaneous auditory and visual feedback. Over a three-session training period, sixteen participants learned to control the BCI for production of three vowel sounds (/i/ [heed], /ɑ/ [hot], and /u/ [who'd]) and were split into three groups: those receiving unimodal auditory feedback of synthesized speech, those receiving unimodal visual feedback of formant frequencies, and those receiving multimodal, audio-visual (AV) feedback. Audio feedback was provided by a formant frequency artificial speech synthesizer, and visual feedback was given as a 2-D cursor on a graphical representation of the plane defined by the first two formant frequencies.
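The visual feedback described above amounts to mapping the first two formants onto a 2-D display. A minimal sketch of such a mapping, assuming a simple linear normalization of each formant into the unit square (the formant ranges below are illustrative adult vowel-space bounds, not the calibration used in the study):

```python
def formants_to_cursor(f1, f2, f1_range=(250.0, 900.0), f2_range=(800.0, 2600.0)):
    """Map the first two formants (Hz) to a cursor position in the unit square.

    x tracks F2 (vowel frontness/backness), y tracks F1 (vowel height).
    Values outside the calibrated ranges are clamped to the display edges.
    """
    clamp = lambda v: max(0.0, min(1.0, v))
    x = clamp((f2 - f2_range[0]) / (f2_range[1] - f2_range[0]))
    y = clamp((f1 - f1_range[0]) / (f1_range[1] - f1_range[0]))
    return x, y

# /i/ (heed) has a low F1 and high F2, landing near one corner of the display.
print(formants_to_cursor(300.0, 2300.0))
```

A real interface would likely warp the axes (e.g., log or mel scaling) so perceptually similar vowels are similarly spaced, but the clamped linear map conveys the idea.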

Purpose: We investigated how overt visual attention and oculomotor control influence successful use of a visual feedback brain-computer interface (BCI) for accessing augmentative and alternative communication (AAC) devices in a heterogeneous population of individuals with profound neuromotor impairments. BCIs are often tested within a single patient population, limiting the generalization of results. This study focuses on examining individual sensory abilities with an eye toward possible interface adaptations to improve device performance.

Purpose: Brain-computer interfaces (BCIs) have the potential to improve communication for people who require but are unable to use traditional augmentative and alternative communication (AAC) devices. As BCIs move toward clinical practice, speech-language pathologists (SLPs) will need to consider their appropriateness for AAC intervention.

Method: This tutorial provides a background on BCI approaches to provide AAC specialists foundational knowledge necessary for clinical application of BCI.

How the human brain plans, executes, and monitors continuous and fluent speech has remained largely elusive. For example, previous research has defined the cortical locations most important for different aspects of speech function, but has not yet yielded a definition of the temporal progression of involvement of those locations as speech progresses either overtly or covertly. In this paper, we uncovered the spatio-temporal evolution of neuronal population-level activity related to continuous overt speech, and identified those locations that shared activity characteristics across overt and covert speech.

Acoustic speech output results from coordinated articulation of dozens of muscles, bones and cartilages of the vocal mechanism. While we commonly take the fluency and speed of our speech productions for granted, the neural mechanisms facilitating the requisite muscular control are not completely understood. Previous neuroimaging and electrophysiology studies of speech sensorimotor control have typically concentrated on speech sounds (i.

The brain is a complex network of interconnected elements, whose interactions evolve dynamically in time to cooperatively perform specific functions. A common technique to probe these interactions involves multi-sensor recordings of brain activity during a repeated task. Many techniques exist to characterize the resulting task-related activity, including establishing functional networks, which represent the statistical associations between brain areas.
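A common first pass at the functional networks mentioned above is to correlate each pair of sensor time series and keep an edge wherever the association is strong. A pure-Python sketch, assuming Pearson correlation with a simple magnitude threshold (toy signals; real analyses use many trials and statistical edge tests):

```python
def pearson(x, y):
    """Pearson correlation between two equal-length signals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def functional_network(signals, threshold=0.5):
    """Binary adjacency matrix: an edge wherever |correlation| exceeds threshold."""
    n = len(signals)
    return [[1 if i != j and abs(pearson(signals[i], signals[j])) > threshold else 0
             for j in range(n)]
            for i in range(n)]

# Three toy sensors: the first two co-vary, the third is anti-correlated with both.
signals = [[1, 2, 3, 4], [2, 4, 6, 8], [4, 3, 2, 1]]
print(functional_network(signals))  # [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
```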

In this paper we present a framework for reducing the development time needed for creating applications for use in non-invasive brain-computer interfaces (BCI). Our framework is primarily focused on facilitating rapid software "app" development akin to current efforts in consumer portable computing (e.g.

This paper reports on studies involving brain-machine interfaces (BMIs) that provide near-instantaneous audio feedback from a speech synthesizer to the BMI user. In one study, neural signals recorded by an intracranial electrode implanted in a speech-related region of the left precentral gyrus of a human volunteer suffering from locked-in syndrome were transmitted wirelessly across the scalp and used to drive a formant synthesizer, allowing the user to produce vowels. In a second, pilot study, a neurologically normal user was able to drive the formant synthesizer with imagined movements detected using electroencephalography.

We conducted a neurophysiological study of attempted speech production in a paralyzed human volunteer using chronic microelectrode recordings. The volunteer suffers from locked-in syndrome leaving him in a state of near-total paralysis, though he maintains good cognition and sensation. In this study, we investigated the feasibility of supervised classification techniques for prediction of intended phoneme production in the absence of any overt movements including speech.
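Supervised phoneme classification from multiunit recordings can take many forms; one of the simplest is a nearest-centroid classifier over firing-rate feature vectors. A hedged sketch of that baseline (the feature vectors, labels, and function names are illustrative, not the study's method or data):

```python
def fit_centroids(features, labels):
    """Mean feature vector (e.g., per-unit firing rates) for each phoneme label."""
    sums, counts = {}, {}
    for feat, lab in zip(features, labels):
        acc = sums.setdefault(lab, [0.0] * len(feat))
        for i, v in enumerate(feat):
            acc[i] += v
        counts[lab] = counts.get(lab, 0) + 1
    return {lab: [v / counts[lab] for v in acc] for lab, acc in sums.items()}

def predict(feature, centroids):
    """Assign the phoneme whose centroid is nearest (squared Euclidean distance)."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lab: dist(feature, centroids[lab]))

# Toy firing-rate vectors for two intended phonemes.
X = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]
y = ["/a/", "/a/", "/u/", "/u/"]
cents = fit_centroids(X, y)
print(predict([0.8, 0.2], cents))  # /a/
```

Feasibility studies of this kind typically report cross-validated accuracy against chance level rather than a single prediction, but the train/predict split above is the core loop.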

Brain-computer interfaces (BCIs) have been developed over the past decade to restore communication to persons with severe paralysis. In the most severe cases of paralysis, known as locked-in syndrome, patients retain cognition and sensation, but are capable of only slight voluntary eye movements. For these patients, no standard communication method is available, although some can use BCIs to communicate by selecting letters or words on a computer.

This paper briefly reviews current silent speech methodologies for normal and disabled individuals. Current techniques utilizing electromyographic (EMG) recordings of vocal tract movements are useful for physically healthy individuals but fail for tetraplegic individuals who do not have accurate voluntary control over the speech articulators. Alternative methods utilizing EMG from other body parts (e.

Background: Brain-machine interfaces (BMIs) involving electrodes implanted into the human cerebral cortex have recently been developed in an attempt to restore function to profoundly paralyzed individuals. Current BMIs for restoring communication can provide important capabilities via a typing process, but unfortunately they are only capable of slow communication rates. In the current study we use a novel approach to speech restoration in which we decode continuous auditory parameters for a real-time speech synthesizer from neuronal activity in motor cortex during attempted speech.
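Decoding continuous synthesizer parameters from motor cortical activity is often framed, at its simplest, as a linear readout of firing rates followed by temporal smoothing to stabilize the output trajectory. A minimal sketch under those assumptions (the weights, rates, and smoothing constant below are illustrative, not the decoder reported in the study):

```python
def linear_decode(rates, weights, bias):
    """Instantaneous formant estimate (Hz) as a weighted sum of unit firing rates."""
    return sum(w * r for w, r in zip(weights, rates)) + bias

def smooth(trajectory, alpha=0.5):
    """Exponential smoothing to stabilize the formant trajectory fed to the synthesizer."""
    out, prev = [], trajectory[0]
    for v in trajectory:
        prev = alpha * v + (1 - alpha) * prev
        out.append(prev)
    return out

# Two units with illustrative weights: rates of 10 and 20 spikes/s give 190 Hz.
print(linear_decode([10.0, 20.0], weights=[5.0, 2.0], bias=100.0))  # 190.0
print(smooth([0.0, 10.0]))  # [0.0, 5.0]
```

Real-time systems of this kind often use a Kalman filter instead of fixed exponential smoothing, since it also propagates uncertainty, but the decode-then-smooth structure is the same.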
