The iambic-trochaic law without iambs or trochees: Parsing speech for grouping and prominence.

J Acoust Soc Am

Department of Linguistics, McGill University, Montréal, Québec H3A 1A7, Canada.

Published: February 2023

Listeners parse the speech signal effortlessly into words and phrases, but many questions remain about how. One classic idea is that rhythm-related auditory principles play a role, in particular, that a psycho-acoustic "iambic-trochaic law" (ITL) ensures that alternating sounds varying in intensity are perceived as recurrent binary groups with initial prominence (trochees), while alternating sounds varying in duration are perceived as binary groups with final prominence (iambs). We test the hypothesis that the ITL is in fact an indirect consequence of the parsing of speech along two in-principle orthogonal dimensions: prominence and grouping. Results from several perception experiments show that the two dimensions, prominence and grouping, are each reliably cued by both intensity and duration, while foot type is not associated with consistent cues. The ITL emerges only when one manipulates either intensity or duration in an extreme way. Overall, the results suggest that foot perception is derivative of the cognitively more basic decisions of grouping and prominence, and the notions of trochee and iamb may not play any direct role in speech parsing. A task manipulation furthermore gives new insight into how these decisions mutually inform each other.


Source
DOI: http://dx.doi.org/10.1121/10.0017170

Similar Publications

Dynamical theories of speech processing propose that the auditory cortex parses acoustic information in parallel at the syllabic and phonemic timescales. We developed a paradigm to independently manipulate both linguistic timescales, and acquired intracranial recordings from 11 patients with epilepsy listening to French sentences. Our results indicate that (i) syllabic and phonemic timescales are both reflected in the acoustic spectral flux; (ii) during comprehension, the auditory cortex tracks the syllabic timescale in the theta range, while neural activity in the alpha-beta range phase locks to the phonemic timescale; (iii) these neural dynamics occur simultaneously and share a joint spatial location; (iv) the spectral flux embeds two timescales, in the theta and low-beta ranges, across 17 natural languages.


Concurrent processing of the prosodic hierarchy is supported by cortical entrainment and phase-amplitude coupling.

Cereb Cortex

December 2024

Institute for the Interdisciplinary Study of Language Evolution, University of Zurich, Affolternstrasse 56, 8050 Zürich, Switzerland.

Models of phonology posit a hierarchy of prosodic units that is relatively independent of syntactic structure, requiring its own parsing. It remains unexplored how this prosodic hierarchy is represented in the brain. We investigated this foundational question by means of an electroencephalography (EEG) study.

Article Synopsis
  • Social media has both benefits, like easy communication and information sharing, and drawbacks, particularly the prevalence of hate speech.
  • The study focuses on using a TF-IDF approach and various classifiers to automatically detect hate speech across three different datasets.
  • Several machine learning methods, including logistic regression, neural networks, and random forests, were employed, achieving over 99% accuracy in identifying hate speech.
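The TF-IDF pipeline described in that synopsis can be sketched in a few lines. This is a minimal pure-Python illustration, not the study's actual implementation: the study used classifiers such as logistic regression, neural networks, and random forests, whereas for brevity this sketch pairs TF-IDF weighting with a simple nearest-centroid classifier on hypothetical toy token lists.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """TF-IDF vectors for tokenized documents.
    TF = term count / doc length; IDF = log(N / number of docs containing term)."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))
    idf = {t: math.log(n / df[t]) for t in df}
    vecs = [{t: (c / len(doc)) * idf[t] for t, c in Counter(doc).items()}
            for doc in docs]
    return vecs, idf

def vectorize(doc, idf):
    """Vectorize a new document with IDF weights learned from training data."""
    tf = Counter(doc)
    return {t: (c / len(doc)) * idf[t] for t, c in tf.items() if t in idf}

def cosine(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def nearest_centroid_predict(train_vecs, labels, test_vec):
    """Classify by cosine similarity to the per-class mean TF-IDF vector."""
    centroids = {}
    for lab in set(labels):
        members = [v for v, l in zip(train_vecs, labels) if l == lab]
        cent = Counter()
        for v in members:
            for t, w in v.items():
                cent[t] += w / len(members)
        centroids[lab] = dict(cent)
    return max(centroids, key=lambda lab: cosine(test_vec, centroids[lab]))

# Toy usage with placeholder tokens standing in for real annotated data:
train = [["you", "are", "vile", "trash"], ["vile", "hateful", "trash"],
         ["have", "a", "nice", "day"], ["thanks", "nice", "work"]]
labels = ["hate", "hate", "clean", "clean"]
vecs, idf = tfidf_vectors(train)
print(nearest_centroid_predict(vecs, labels, vectorize(["vile", "hateful"], idf)))
```

In practice one would use a library vectorizer and classifier on large labeled corpora; the point here is only the shape of the pipeline: weight terms by TF-IDF, then classify in that vector space.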

Purpose: Prior research introduced quantifiable effects of three methodological parameters (number of repetitions, stimulus length, and parsing error) on the spatiotemporal index (STI) using simulated data. Critically, these parameters often vary across studies. In this study, we validate these effects, which were previously only demonstrated via simulation, using children's speech data.
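The spatiotemporal index (STI) mentioned above can be sketched as follows. This is a minimal illustration of one common formulation, assuming linear time-normalization of each repetition to 50 points and z-score amplitude normalization, with the STI taken as the sum of across-repetition standard deviations at each normalized time point; it is not the cited study's code.

```python
import math

def spatiotemporal_index(trajectories, n_points=50):
    """STI: time- and amplitude-normalize repeated movement records,
    then sum the across-repetition SDs at each normalized time point."""
    def resample(traj):
        # Linear interpolation onto n_points equally spaced positions.
        out = []
        for i in range(n_points):
            pos = i * (len(traj) - 1) / (n_points - 1)
            lo = int(pos)
            hi = min(lo + 1, len(traj) - 1)
            frac = pos - lo
            out.append(traj[lo] * (1 - frac) + traj[hi] * frac)
        return out

    def zscore(xs):
        # Amplitude normalization: zero mean, unit (population) SD.
        m = sum(xs) / len(xs)
        sd = math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))
        return [(x - m) / sd for x in xs] if sd else [0.0] * len(xs)

    norm = [zscore(resample(t)) for t in trajectories]
    sti = 0.0
    for i in range(n_points):
        vals = [t[i] for t in norm]
        m = sum(vals) / len(vals)
        sti += math.sqrt(sum((v - m) ** 2 for v in vals) / len(vals))
    return sti
```

Identical repetitions yield an STI of zero; variability in movement shape across repetitions raises it. The sketch makes the study's point concrete: the number of repetitions, the record length, and any parsing error in where each record starts all enter directly into this computation.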

Article Synopsis
  • The study investigated both peripheral (basic hearing ability) and central (speech processing ability) hearing in different dementia patients compared to healthy individuals.
  • Findings revealed that while central hearing (measured through dichotic listening) was significantly impaired in dementia patients, peripheral hearing (measured with pure-tone audiometry) showed no notable difference from healthy controls.
  • The results suggest a critical link between central hearing abilities and cognitive functioning in dementia, emphasizing the need to assess both types of hearing to better understand and address the auditory challenges faced by these patients.
