Purpose: The practice of removing "following" responses from speech perturbation analyses is increasingly common, despite no clear evidence as to whether these responses represent a distinct response type. This study aimed to determine whether the distribution of responses to auditory perturbation paradigms is bimodal, reflecting two distinct response types, or unimodal.
Method: This mega-analysis pooled data from 22 previous studies to examine the distribution and magnitude of responses to auditory perturbations across four tasks: adaptive pitch, adaptive formant, reflexive pitch, and reflexive formant.
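The abstract does not spell out the statistical test used, but one common way to adjudicate between unimodal and bimodal accounts of a response distribution is to fit one- and two-component Gaussian mixtures and compare information criteria. A minimal Python sketch with synthetic placeholder values (not the pooled study data):

```python
# Sketch: compare unimodal vs. bimodal fits to perturbation response magnitudes.
# The values below are synthetic placeholders, not data from the 22 pooled studies.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
responses = rng.normal(loc=30.0, scale=15.0, size=500).reshape(-1, 1)

bic = {}
for k in (1, 2):
    gm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(responses)
    bic[k] = gm.bic(responses)

# A clearly lower BIC for k=2 would favor two response types
# ("opposing" vs. "following"); comparable BICs favor a single distribution.
print(bic)
```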
Generalization in motor control is the extent to which motor learning affects movements in situations different from those in which it originally occurred. Recent data on orofacial speech movements indicate that motor sequence learning generalizes to novel syllable sequences containing phonotactically illegal, but previously practiced, consonant clusters. Practicing an entire syllable, however, results in even larger performance gains than practicing just its clusters.
Background: Reflexive pitch perturbation experiments are commonly used to investigate the neural mechanisms underlying vocal motor control. In these experiments, the fundamental frequency (the acoustic correlate of pitch) of a speech signal is shifted unexpectedly and played back to the speaker via headphones in near real time. In response to the shift, speakers increase or decrease their fundamental frequency in the direction opposing the shift so that their perceived pitch is closer to what they intended.
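As an illustration of how such responses are typically quantified (a hedged sketch, not the study's analysis code), f0 can be expressed in cents relative to a pre-shift baseline so that opposing and following responses are distinguished by their sign relative to the shift direction:

```python
# Sketch: express an f0 response in cents re: a pre-perturbation baseline.
# Baseline, samples, and shift size below are hypothetical placeholders.
import numpy as np

def f0_to_cents(f0_hz, baseline_hz):
    """Convert f0 samples (Hz) to cents relative to a baseline f0."""
    return 1200.0 * np.log2(np.asarray(f0_hz) / baseline_hz)

baseline_hz = 120.0                      # pre-shift median f0
post_shift_f0 = [118.5, 117.9, 118.2]    # f0 samples after the shift is applied
shift_cents = +100.0                     # upward perturbation heard by the speaker

response = f0_to_cents(post_shift_f0, baseline_hz).mean()
label = "opposing" if np.sign(response) != np.sign(shift_cents) else "following"
print(f"response: {response:.1f} cents ({label})")
```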
J Speech Lang Hear Res
July 2020
Purpose: To better define the contributions of somatosensory and auditory feedback to vocal motor control, a laryngeal perturbation experiment was conducted with and without masking of auditory feedback. Method: Eighteen native speakers of English produced a sustained vowel while their larynx was physically and externally displaced on a subset of trials. For the condition with auditory masking, speech-shaped noise was played via earphones at 90 dB SPL.
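The abstract does not describe how the masker was generated; a common approach, sketched below under that assumption, is to shape white noise with the long-term spectrum of a speech recording. File names are placeholders, and presenting the result at 90 dB SPL would require calibrated playback hardware.

```python
# Sketch: approximate speech-shaped noise by giving white noise the long-term
# magnitude spectrum of a speech recording. File names are hypothetical.
import numpy as np
import soundfile as sf

speech, fs = sf.read("speech_sample.wav")        # mono speech recording
target_mag = np.abs(np.fft.rfft(speech))         # long-term magnitude spectrum

noise = np.random.default_rng(0).standard_normal(len(speech))
noise_fft = np.fft.rfft(noise)
# Keep the noise phase, impose the speech magnitude spectrum.
shaped = np.fft.irfft(noise_fft / (np.abs(noise_fft) + 1e-12) * target_mag,
                      n=len(speech))
shaped /= np.max(np.abs(shaped))                 # normalize; set level at playback
sf.write("speech_shaped_noise.wav", shaped, fs)
```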
Efficient speech communication requires rapid, fluent production of phoneme sequences. To achieve this, our brains store frequently occurring subsequences as cohesive "chunks" that reduce phonological working memory load and improve motor performance. The current study used a motor-sequence learning paradigm in which the generalization of two performance gains (utterance duration and errors) from practicing novel phoneme sequences was used to infer the nature of these speech chunks.
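One simple way to express how much of a practice gain carries over to unpracticed sequences (a hypothetical illustration, not the study's metric or data) is a ratio of transfer gain to trained gain:

```python
# Hypothetical illustration of a generalization index; numbers are made up.
def gain(pre, post):
    """Proportional improvement from pre- to post-practice."""
    return (pre - post) / pre

trained_gain = gain(pre=950.0, post=720.0)    # utterance duration (ms), practiced sequence
transfer_gain = gain(pre=940.0, post=830.0)   # novel sequence sharing a practiced cluster

generalization = transfer_gain / trained_gain # fraction of the gain that transfers
print(f"trained {trained_gain:.2f}, transfer {transfer_gain:.2f}, "
      f"generalization {generalization:.2f}")
```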
Annu Int Conf IEEE Eng Med Biol Soc
July 2016
Many proposed EEG-based brain-computer interfaces (BCIs) make use of visual stimuli to elicit steady-state visual evoked potentials (SSVEP), the frequency of which can be mapped to a computer input. However, such a control scheme can be ineffective if a user has no motor control over their eyes and cannot direct their gaze towards a flashing stimulus to generate such a signal. Tactile-based methods, such as somatosensory steady-state evoked potentials (SSSEP), are a potentially attractive alternative in these scenarios.
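The basic decoding idea the abstract describes is that each stimulus is driven at a known frequency, and EEG power at those frequencies indicates which stimulus the user is attending to. A sketch with a synthetic signal and placeholder frequencies:

```python
# Sketch: pick the attended stimulus by comparing EEG power at candidate
# stimulation frequencies. Signal and frequencies are synthetic placeholders.
import numpy as np

fs = 250.0                           # sampling rate (Hz)
stim_freqs = [10.0, 12.0, 15.0]      # candidate stimulus frequencies (Hz)
t = np.arange(0, 4.0, 1.0 / fs)      # 4 s epoch

# Synthetic single-channel EEG: a 12 Hz steady-state response buried in noise.
eeg = 0.5 * np.sin(2 * np.pi * 12.0 * t) \
      + np.random.default_rng(0).standard_normal(t.size)

power = np.abs(np.fft.rfft(eeg)) ** 2
freqs = np.fft.rfftfreq(eeg.size, d=1.0 / fs)

# Power at the nearest FFT bin to each candidate frequency; argmax -> command.
band_power = [power[np.argmin(np.abs(freqs - f))] for f in stim_freqs]
selected = stim_freqs[int(np.argmax(band_power))]
print(f"detected stimulus frequency: {selected} Hz")
```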