Publications by authors named "Patti Adank"

Observing actions evokes an automatic imitative response that activates mechanisms required to execute these actions. Automatic imitation is measured using the Stimulus Response Compatibility (SRC) task, which presents participants with compatible and incompatible prompt-distractor pairs. Automatic imitation, or the compatibility effect, is the difference in response times (RTs) between incompatible and compatible trials.
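The compatibility effect reduces to simple arithmetic over trial-level RTs. As a minimal illustration (hypothetical numbers, not data from the study), one might compute it as:

```python
import numpy as np

# Hypothetical response times (in ms) for an SRC task; in the studies
# described here these would come from participants' recorded trial data.
compatible_rts = np.array([412, 398, 430, 405, 421])
incompatible_rts = np.array([455, 470, 448, 462, 459])

# The compatibility effect (automatic imitation) is the mean RT difference
# between incompatible and compatible trials: larger values indicate a
# stronger automatic imitative response.
compatibility_effect = incompatible_rts.mean() - compatible_rts.mean()
print(f"Compatibility effect: {compatibility_effect:.1f} ms")
```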

Simulation accounts of speech perception posit that speech is covertly imitated to support perception in a top-down manner. Behaviourally, covert imitation is measured through the stimulus-response compatibility (SRC) task. In each trial of a speech SRC task, participants produce a target speech sound whilst perceiving a speech distractor that either matches the target (compatible condition) or does not (incompatible condition).

Speech perception performance for degraded speech can improve with practice or exposure. Such perceptual learning is thought to rely on attention, and theoretical accounts such as the predictive coding framework suggest a key role for attention in supporting learning. However, it is unclear whether speech perceptual learning requires undivided attention.

Observing someone perform an action automatically activates neural substrates associated with executing that action. This covert response, or automatic imitation, is measured behaviourally using the stimulus-response compatibility (SRC) task. In an SRC task, participants are presented with compatible and incompatible response-distractor pairings.

Motor areas for speech production activate during speech perception. Such activation may assist speech perception in challenging listening conditions. It is not known how ageing affects the recruitment of articulatory motor cortex during active speech perception.

Purpose: Visual cues from a speaker's face may benefit perceptual adaptation to degraded speech, but current evidence is limited. We aimed to replicate results from previous studies to establish the extent to which visual speech cues can lead to greater adaptation over time, extending existing results to a real-time adaptation paradigm.

Purpose: This study first aimed to establish whether viewing specific parts of the speaker's face (eyes or mouth), compared to viewing the whole face, affected adaptation to distorted noise-vocoded sentences. Second, it aimed to replicate results on processing of distorted speech from lab-based experiments in an online setup. Method: We monitored recognition accuracy online while participants were listening to noise-vocoded sentences.
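Noise-vocoding, the distortion used here, replaces the spectral fine structure of speech with noise while preserving each frequency band's amplitude envelope. Below is a minimal sketch of the standard procedure; the band count and frequency range are assumed parameters, not the study's exact stimulus pipeline:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, fs, n_bands=4, f_lo=100.0, f_hi=8000.0):
    """Noise-vocode a 1-D speech signal: split it into logarithmically
    spaced frequency bands, extract each band's amplitude envelope, and
    use the envelopes to modulate band-limited noise carriers."""
    rng = np.random.default_rng(0)
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)  # log-spaced band edges
    out = np.zeros_like(signal, dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)
        envelope = np.abs(hilbert(band))                    # amplitude envelope
        carrier = sosfiltfilt(sos, rng.standard_normal(len(signal)))
        out += envelope * carrier                           # envelope-modulated noise
    # Match the overall RMS of the original signal.
    out *= np.sqrt(np.mean(signal**2) / np.mean(out**2))
    return out
```

Fewer bands yield a more degraded, harder-to-understand signal, which is what makes the stimulus useful for studying perceptual adaptation.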

Listening to degraded speech is associated with decreased intelligibility and increased effort. However, listeners are generally able to adapt to certain types of degradations. While intelligibility of degraded speech is modulated by talker acoustics, it is unclear whether talker acoustics also affect effort and adaptation.

Few studies thus far have investigated whether perception of distorted speech is consistent across different types of distortion. This study investigated whether participants show a consistent perceptual profile across three speech distortions: time-compressed speech, noise-vocoded speech, and speech in noise. Additionally, it investigated whether and how individual differences in performance on a battery of audiological and cognitive tasks are linked to perception.
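A consistent perceptual profile would show up as correlated per-participant performance across the three distortions. A hedged sketch of that check, using fabricated scores purely for illustration:

```python
import numpy as np

# Hypothetical per-participant recognition scores (% correct) for three
# distortion types; real data would come from the experiment itself.
rng = np.random.default_rng(1)
time_compressed = rng.normal(70, 10, 30)
noise_vocoded = time_compressed * 0.6 + rng.normal(20, 8, 30)
speech_in_noise = time_compressed * 0.5 + rng.normal(25, 8, 30)

# A consistent perceptual profile predicts positive cross-distortion
# correlations in listeners' scores.
scores = np.vstack([time_compressed, noise_vocoded, speech_in_noise])
print(np.corrcoef(scores).round(2))
```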

Successful perception of speech in everyday listening conditions requires effective listening strategies to overcome common acoustic distortions, such as background noise. Convergent evidence from neuroimaging and clinical studies identifies activation within the temporal lobes as key to successful speech perception. However, current neurobiological models disagree on whether the left temporal lobe is sufficient for successful speech perception or whether bilateral processing is required.

Observing someone speak automatically triggers cognitive and neural mechanisms required to produce speech, a phenomenon known as automatic imitation. Automatic imitation of speech can be measured using the Stimulus Response Compatibility (SRC) paradigm, in which response times (RTs) are faster when the prompt and the speech distractor are compatible than when they are incompatible.

Motor imagery refers to the phenomenon of imagining performing an action without action execution. Motor imagery and motor execution are assumed to share a similar underlying neural system that involves primary motor cortex (M1). Previous studies have focused on motor imagery of manual actions, but articulatory motor imagery has not been investigated.

The observation-execution links underlying automatic-imitation processes are suggested to result from associative sensorimotor experience of performing and watching the same actions. Past research supporting the associative sequence learning (ASL) model has demonstrated that sensorimotor training modulates automatic imitation of perceptually transparent manual actions, but ASL has been criticized for not being able to account for opaque actions, such as orofacial movements that include visual speech. To investigate whether the observation-execution links underlying opaque actions are as flexible as has been demonstrated for transparent actions, we tested whether sensorimotor training modulated the automatic imitation of visual speech.

This study aimed to characterize the effects of coil orientation on the size of motor evoked potentials (MEPs) from both sides of orbicularis oris (OO) and both first dorsal interosseous (FDI) muscles, following stimulation of the left lip and left hand areas of primary motor cortex. Using a 70 mm figure-of-eight coil, we collected MEPs from eight different orientations while recording from contralateral and ipsilateral OO and FDI, using a monophasic pulse delivered at 120% of active motor threshold. MEPs from OO were evoked consistently for six orientations at contralateral and ipsilateral sites.

When we observe someone else speaking, we tend to automatically activate the corresponding speech motor patterns. When listening, we therefore covertly imitate the observed speech. Simulation theories of speech perception propose that covert imitation of speech motor patterns supports speech perception.

Primary motor (M1) areas for speech production activate during speech perception. It has been suggested that such activation may depend on modulatory inputs from premotor cortex (PMv). Whether and how PMv differentially modulates M1 activity during perception of speech that is easy or challenging to understand, however, is unclear.

Sensorimotor transformation (ST) may be a critical process in mapping perceived speech input onto non-native (L2) phonemes, in support of subsequent speech production. Yet little is known about the role of ST with respect to L2 speech, particularly for learned L2 phones.

Imitating speech necessitates the transformation from sensory targets to vocal tract motor output, yet little is known about the representational basis of this process in the human brain. Here, we address this question by using real-time MR imaging (rtMRI) of the vocal tract and functional MRI (fMRI) of the brain in a speech imitation paradigm. Participants trained on imitating a native vowel and a similar nonnative vowel that required lip rounding.

Excitability of articulatory motor cortex is facilitated when listening to speech in challenging conditions. Beyond this, however, we have little knowledge of what listener-specific and speech-specific factors engage articulatory facilitation during speech perception. For example, it is unknown whether speech motor activity is independent or dependent on the form of distortion in the speech signal.

It has become increasingly evident that human motor circuits are active during speech perception. However, the conditions under which the motor system modulates speech perception are not clear. Two prominent accounts make distinct predictions for how listening to speech engages speech motor representations.

The recognition of unfamiliar regional and foreign accents represents a challenging task for the speech perception system (Floccia et al., 2006; Adank et al., 2009).

Perceptual adaptation allows humans to recognize different varieties of accented speech. We investigated whether perceptual adaptation to accented speech is facilitated if listeners can see a speaker's facial and mouth movements. In Study 1, participants listened to sentences in a novel accent and underwent a period of training with audiovisual or audio-only speech cues, presented in quiet or in background noise.

The present study investigated the effects of inhibition, vocabulary knowledge, and working memory on perceptual adaptation to accented speech. One hundred young, normal-hearing adults listened to sentences spoken in a constructed, unfamiliar accent presented in speech-shaped background noise. Speech Reception Thresholds (SRTs) corresponding to 50% speech recognition accuracy provided a measure of adaptation to the accented speech.
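An SRT is typically estimated with an adaptive up-down procedure that converges on the signal-to-noise ratio yielding 50% correct recognition. Below is a minimal sketch of such a track using a simple 1-up/1-down rule; the `respond` callable stands in for a participant's trial-by-trial scoring, and the study's exact procedure may differ:

```python
import numpy as np

def track_srt(respond, start_snr=10.0, step_db=2.0, n_trials=20):
    """Simple 1-up/1-down adaptive track converging on the SNR at which
    responses are correct 50% of the time (the Speech Reception Threshold).
    `respond(snr)` returns True when the sentence is recognized correctly."""
    snr = start_snr
    history = []
    for _ in range(n_trials):
        correct = respond(snr)
        history.append(snr)
        # Make the task harder after a correct response, easier after an error.
        snr += -step_db if correct else step_db
    # Estimate the SRT as the mean SNR over the second half of the track.
    return float(np.mean(history[n_trials // 2:]))

# Example: a simulated listener whose true 50% point lies at 0 dB SNR
# (logistic psychometric function; purely illustrative).
rng = np.random.default_rng(2)
print(track_srt(lambda snr: rng.random() < 1 / (1 + np.exp(-snr))))
```

Lower SRTs over the course of the experiment indicate adaptation: listeners come to tolerate more noise for the same recognition accuracy.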
