Publications by authors named "Laura M Getz"

In the contexts of language learning and music processing, hand gestures that visually convey acoustic information influence the perception of speech and non-speech sounds (Connell et al., 2013; Morett & Chang, 2015). Currently, it is unclear whether this effect is due to these gestures' use of the human body to highlight relevant features of language (embodiment) or to the cross-modal mapping between the visual motion trajectories of these gestures and corresponding auditory features (conceptual metaphor).


Visual speech cues play an important role in speech recognition, and the McGurk effect is a classic demonstration of this. In the original McGurk & Macdonald (Nature, 264, 746–748, 1976) experiment, 98% of participants reported an illusory "fusion" percept of /d/ when listening to the spoken syllable /b/ and watching the visual speech movements for /g/. However, more recent work shows that subject and task differences influence the proportion of fusion responses.


Recent advances in cognitive neuroscience have provided a detailed picture of the early time-course of speech perception. In this review, we highlight this work, placing it within the broader context of research on the neurobiology of speech processing, and discuss how these data point us toward new models of speech perception and spoken language comprehension. We focus, in particular, on temporally sensitive measures that allow us to directly measure early perceptual processes.


An unresolved issue in speech perception concerns whether top-down linguistic information influences perceptual responses. We addressed this issue using the event-related-potential technique in two experiments that measured cross-modal sequential-semantic priming effects on the auditory N1, an index of acoustic-cue encoding. Participants heard auditory targets.


An audiovisual correspondence (AVC) refers to an observer's seemingly arbitrary yet consistent matching of sensory features across two modalities; for example, between an auditory pitch and a visual size. Research on AVCs has frequently used a speeded classification procedure in which participants are asked to rapidly classify an image when it is accompanied by either a congruent or an incongruent sound (or vice versa). When, as is typically the case, classification is faster in the presence of a congruent stimulus, researchers have inferred that the AVC is automatic and bottom-up.


Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition.


This paper revisits the conclusion of our previous work regarding the dominance of meaning in the competition between rhythmic parsing and linguistic parsing. We played five-note rhythm patterns in which each sound was a spoken word of a five-word sentence. We asked listeners to indicate the starting point of the rhythm while disregarding which word would normally be heard as the first word of the sentence.


We provide a test of Patel's [(2003). Language, music, syntax and the brain. Nature Neuroscience, 6, 674–681] shared syntactic integration resources hypothesis by investigating the competition between determinants of rhythmic parsing and linguistic parsing using a sentence-rhythm Stroop task.
