This study was designed to test the iambic/trochaic law, which claims that elements contrasting in duration naturally form rhythmic groupings with final prominence, whereas elements contrasting in intensity form groupings with initial prominence. It was also designed to evaluate whether the iambic/trochaic law describes general auditory biases, or whether rhythmic grouping is speech or language specific. In two experiments, listeners were presented with sequences of alternating /ga/ syllables or square wave segments that varied in either duration or intensity and were asked to indicate whether they heard a trochaic (i.e., strong-weak) or an iambic (i.e., weak-strong) rhythmic pattern. Experiment 1 provided a validation of the iambic/trochaic law in English-speaking listeners; for both speech and nonspeech stimuli, variations in duration resulted in iambic grouping, whereas variations in intensity resulted in trochaic grouping. In Experiment 2, no significant differences were found between the rhythmic-grouping performances of English- and French-speaking listeners. The speech/nonspeech and cross-language parallels suggest that the perception of linguistic rhythm relies largely on general auditory mechanisms. The applicability of the iambic/trochaic law to speech segmentation is discussed.
DOI: http://dx.doi.org/10.3758/bf03194458
J Psycholinguist Res
January 2025
Department of Linguistics, University of Potsdam, Potsdam, Germany.
Rhythm perception in speech and non-speech acoustic stimuli has been shown to be affected by general acoustic biases as well as by phonological properties of the native language of the listener. The present paper extends the cross-linguistic approach in this field by testing the application of the iambic-trochaic law as an assumed general acoustic bias on rhythmic grouping of non-speech stimuli by speakers of three languages: Arabic, Hebrew and German. These languages were chosen due to relevant differences in their phonological properties on the lexical level alongside similarities on the phrasal level.
The perception of rhythmic patterns is crucial for the recognition of words in spoken languages, yet it remains unclear how these patterns are represented in the brain. Here, we tested the hypothesis that rhythmic patterns are encoded by neural activity phase-locked to the temporal modulation of these patterns in the speech signal. To test this hypothesis, we analyzed EEGs evoked with long sequences of alternating syllables acoustically manipulated to be perceived as a series of different rhythmic groupings in English.
J Acoust Soc Am
February 2023
Department of Linguistics, McGill University, Montréal, Québec H3A 1A7, Canada.
Listeners parse the speech signal effortlessly into words and phrases, but many questions remain about how. One classic idea is that rhythm-related auditory principles play a role, in particular, that a psycho-acoustic "iambic-trochaic law" (ITL) ensures that alternating sounds varying in intensity are perceived as recurrent binary groups with initial prominence (trochees), while alternating sounds varying in duration are perceived as binary groups with final prominence (iambs). We test the hypothesis that the ITL is in fact an indirect consequence of the parsing of speech along two in-principle orthogonal dimensions: prominence and grouping.
Past research on how listeners weight stress cues such as pitch, duration, and intensity has reported two inconsistent patterns: listeners' weighting conforms to (1) their native language experience (e.g., language rhythmicity, lexical tone), and (2) a general "iambic-trochaic law" (ITL), favouring innate sound groupings in cue perception.
Psychol Rev
March 2022
Department of Linguistics.
In a sequence of otherwise equal sounds, listeners tend to hear a series of trochees (groups of two sounds with an initial beat) when every other sound is louder; they tend to hear a series of iambs (groups of two sounds with a final beat) when every other sound is longer. The article presents evidence that this so-called "Iambic-Trochaic Law" (ITL) is a consequence of the way listeners parse the signal along two orthogonal dimensions, grouping and prominence. A production experiment shows that in speech, intensity and duration correlate when encoding prominence, but anticorrelate when encoding grouping.