Phonetic convergence describes when a listener's speech becomes subtly more like the speech of a talker they hear. There are many possible reasons why phonetic convergence occurs. Here, we test whether phonetic convergence can facilitate speech perception. A group of adult native English-speaking participants (n = 9) was asked to identify words in noise produced by talkers who either (a) shadowed the speech of the participant (i.e., repeated aloud the words they heard; Associated Shadowers) or (b) shadowed the speech of a different participant (Unassociated Shadowers). A separate group of raters (n = 45) performed an AXB similarity-matching task to confirm that Associated Shadowers sounded more like the participant they had shadowed than Unassociated Shadowers did. We found that participants identified the speech of their Associated Shadowers more accurately, and that their identification accuracy was positively related to the rated similarity of the shadowers' speech. The results support theoretical accounts suggesting that phonetic convergence may facilitate speech understanding between individuals.
DOI: http://dx.doi.org/10.3758/s13414-025-03041-6
Atten Percept Psychophys
March 2025
Department of Psychology, University of California, Riverside, CA, USA.
Brain
March 2025
Department of Speech, Language, and Hearing Sciences, Department of Neurology, The University of Texas at Austin, USA.
bioRxiv
February 2025
Department of Electrical Engineering, Columbia University, New York, NY, USA.
The human brain's ability to transform acoustic speech signals into rich linguistic representations has inspired advances in automatic speech recognition (ASR) systems. While ASR systems now achieve human-level performance under controlled conditions, prior research on their parallels with the brain has been limited by the use of biologically implausible models, narrow feature sets, and comparisons that primarily emphasize how well the models predict brain activity without fully exploring shared underlying representations. Additionally, studies comparing the brain to text-based language models overlook the acoustic stages of speech processing, an essential step in transforming sound into meaning.
Dyslexia
February 2025
Edmond J. Safra Brain Research Center for the Study of Learning Disabilities, Department of Learning Disabilities, University of Haifa, Haifa, Israel.
While the multiple cognitive deficits model of reading difficulties (RD) is widely supported, different cognitive-linguistic deficits may manifest differently depending on the characteristics of the language and writing system. This study examined the cognitive-linguistic profiles underlying RD in Hebrew, a language characterised by rich Semitic morphology and two writing versions that differ in orthographic consistency: a transparent (pointed) version and a deep (unpointed) version. A two-step cluster analysis grouped 96 second graders and 81 fourth graders based on their phonological awareness (PA), rapid naming (RAN), orthographic knowledge (OK), and morphological-pattern identification (MPI) abilities.
JASA Express Lett
December 2024
Biosciences Institute, Newcastle University, Newcastle upon Tyne, NE2 4HH, United Kingdom.
The Iowa Test of Consonant Perception is a single-word closed-set speech-in-noise test with well-balanced phonetic features. The current study aimed to establish a U.K.