To comprehend speech, human brains identify meaningful units in the speech stream. But whereas the English '' has 3 word-units, the Arabic equivalent '' is a single word-unit with 3 meaningful sub-word units, called morphemes: a verb stem (''), a subject suffix ('--'), and a direct object pronoun ('-'). It remains unclear whether and how the brain processes morphemes, above and beyond other language units, during speech comprehension. Here, we propose and test hierarchically nested encoding models of speech comprehension: a naïve model with word-, syllable-, and sound-level information; a bottom-up model with additional morpheme boundary information; and predictive models that process morphemes before these boundaries. We recorded magnetoencephalography (MEG) data as 27 participants (16 female) listened to Arabic sentences like ''. A temporal response function (TRF) analysis revealed that, in temporal and left inferior frontal regions, predictive models outperform the bottom-up model, which in turn outperforms the naïve model. Moreover, verb stems were either length-ambiguous (e.g., '' could initially be mistaken for the shorter stem ''='') or length-unambiguous (e.g., ''='' cannot be mistaken for a shorter stem), but shared a uniqueness point, beyond which stem identity is fully disambiguated. Evoked analyses revealed differences between conditions before the uniqueness point, suggesting that, rather than awaiting disambiguation, the brain employs proactive predictive strategies, processing accumulated input as soon as any possible stem is identifiable, even if not uniquely. These findings highlight the role of morphemes in speech, and the importance of including morpheme-level information in neural and computational models of speech comprehension.

Many leading models of speech comprehension include information about words, syllables, and sounds. But languages vary considerably in the amount of meaning packed into word units.
This work proposes speech comprehension models with information about meaningful sub-word units, called morphemes (e.g., '' and '' in ''), and shows that they explain significantly more neural activity than models without morpheme information. We also show how the brain predictively processes morphemic information. These findings highlight the role of morphemes in speech comprehension and emphasize the contributions of morpheme-level information-theoretic metrics, like surprisal and entropy. Our findings can be used to update current neural, cognitive, and computational models of speech comprehension, and constitute a step towards refining those models for naturalistic, connected speech.
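The surprisal and entropy metrics named above are standard information-theoretic quantities: surprisal measures how unexpected a unit (here, a morpheme) is given its probability, and entropy measures the listener's uncertainty over upcoming units. As a minimal sketch, assuming a hypothetical probability distribution over candidate stems (the distribution values below are illustrative, not taken from the study):

```python
import math

def surprisal(p):
    """Surprisal of an event with probability p, in bits: -log2(p)."""
    return -math.log2(p)

def entropy(dist):
    """Shannon entropy of a probability distribution, in bits:
    -sum(p * log2(p)) over outcomes with nonzero probability."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

# Hypothetical distribution over four candidate verb stems,
# given the speech input heard so far.
stem_probs = [0.5, 0.25, 0.125, 0.125]

print(surprisal(stem_probs[0]))  # surprisal of the most likely stem: 1.0 bit
print(entropy(stem_probs))       # uncertainty before disambiguation: 1.75 bits
```

In predictive encoding models of this kind, such per-morpheme values are typically entered as regressors, so that neural responses can be tested for sensitivity to morpheme-level expectations.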
DOI: http://dx.doi.org/10.1523/JNEUROSCI.0781-24.2024