Four experiments examined Dutch listeners' use of suprasegmental information in spoken-word recognition. Isolated syllables excised from minimal stress pairs such as VOORnaam/voorNAAM could be reliably assigned to their source words. In lexical decision, no priming was observed from one member of minimal stress pairs to the other, suggesting that the pairs' segmental ambiguity was removed by suprasegmental information. Words embedded in nonsense strings were harder to detect if the nonsense string itself formed the beginning of a competing word, but a suprasegmental mismatch to the competing word significantly reduced this inhibition. The same nonsense strings facilitated recognition of the longer words of which they constituted the beginning, but again the facilitation was significantly reduced by suprasegmental mismatch. Together these results indicate that Dutch listeners effectively exploit suprasegmental cues in recognizing spoken words. Nonetheless, suprasegmental mismatch appears to be somewhat less effective in constraining activation than segmental mismatch.
DOI: http://dx.doi.org/10.1177/00238309010440020301
Brain Sci
December 2024
Faculty of Arts and Humanities, University of Macau, Macau SAR 999078, China.
Background/objectives: Previous studies have examined the role of working memory in cognitive tasks such as syntactic, semantic, and phonological processing, contributing to our understanding of how linguistic information is managed and retrieved. However, the real-time processing of phonological information, particularly of suprasegmental features such as tone, whose contour is a time-varying signal, remains relatively underexplored within the framework of Information Processing Theory (IPT). This study aimed to address this gap by investigating native Cantonese speakers' real-time processing of similar tonal information, thereby providing a deeper understanding of how IPT applies to auditory processing.
The speech multi-feature MMN (mismatch negativity) paradigm offers a means to explore the neurocognitive background of the processing of multiple speech features within a short time, by capturing the time-locked electrophysiological activity of the brain known as event-related brain potentials (ERPs). Originating in the pioneering work of Näätänen et al. (Clin Neurophysiol 115:140-144, 2004), this paradigm presents several infrequent deviant stimuli alongside standard ones, each deviant differing from the standard in a different speech feature.
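As a purely illustrative aside, the difference-wave logic behind the MMN can be sketched in a few lines of code: average the EEG epochs for each condition and subtract the standard ERP from each deviant ERP. The sketch below uses NumPy on simulated data; the condition names, array shapes, and the mmn_difference_waves helper are assumptions for illustration, not material from the study summarized above.

```python
# Illustrative sketch only: estimating MMN-style difference waves from
# epoched single-channel EEG data. Condition labels and data are simulated.
import numpy as np

def mmn_difference_waves(epochs, labels, standard="standard"):
    """Average epochs per condition and subtract the standard ERP.

    epochs : array of shape (n_trials, n_samples), one EEG channel
    labels : sequence of condition names, one per trial
    Returns a dict mapping each deviant condition to its difference wave
    (deviant ERP minus standard ERP), i.e. an MMN estimate.
    """
    labels = np.asarray(labels)
    standard_erp = epochs[labels == standard].mean(axis=0)
    return {
        cond: epochs[labels == cond].mean(axis=0) - standard_erp
        for cond in np.unique(labels) if cond != standard
    }

# Simulated example: one standard plus four hypothetical deviant features,
# 200 trials of 300 samples each.
rng = np.random.default_rng(0)
conds = ["standard", "duration", "intensity", "vowel", "tone"]
trial_labels = rng.choice(conds, size=200, p=[0.6, 0.1, 0.1, 0.1, 0.1])
eeg = rng.normal(size=(200, 300))
diff_waves = mmn_difference_waves(eeg, trial_labels)
print({cond: wave.shape for cond, wave in diff_waves.items()})
```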
While consonant acquisition clearly requires mastery of different articulatory configurations (segments), sub-segmental features and suprasegmental contexts influence both the order of acquisition and mismatch (error) patterns (Bérubé, Bernhardt, Stemberger & Ciocca, 2020). Constraints-based nonlinear phonology provides a comprehensive framework for investigating sub- and suprasegmental influences on acquisition (Bernhardt & Stemberger, 1998). The current study adopted such a framework in order to investigate these questions for Granada Spanish.
J Exp Psychol Learn Mem Cogn
February 2023
Departamento de Metodología and ERI-Lectura, Universitat de València.
An often overlooked but fundamental issue for any comprehensive model of visual-word recognition is the representation of diacritical vowels: Do diacritical and nondiacritical vowels share their abstract letter representations? Recent research suggests that the answer is "yes" in languages where diacritics indicate suprasegmental information (e.g., lexical stress, as in cámara ['ka…).
Brain Res
October 2021
Donders Centre for Cognition, Radboud University, Thomas van Aquinostraat 4, 6525 GD Nijmegen, the Netherlands; Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525 XD Nijmegen, the Netherlands. Electronic address:
One of the challenges in speech perception is that listeners must deal with considerable segmental and suprasegmental variability in the acoustic signal due to differences between talkers. Most previous studies have focused on how listeners deal with segmental variability. In this EEG experiment, we investigated whether listeners track talker-specific usage of suprasegmental cues to lexical stress to recognize spoken words correctly.