Numerous studies have investigated the perception of non-native sounds by listeners from different first language (L1) backgrounds. However, this research needs to expand to under-researched languages and to test predictions derived from the assumptions of newer speech models. This study investigated the perception of Dutch vowels by adult Cypriot Greek listeners and tested predictions of cross-linguistic acoustic and perceptual similarity. The predictions of acoustic similarity were generated with a machine-learning algorithm. Listeners completed a classification test, which served as the baseline for deriving the predictions of perceptual similarity within the framework of the Universal Perceptual Model (UPM), and an AXB discrimination test, which allowed the evaluation of both the acoustic and the perceptual predictions. The findings indicated that listeners classified each non-native vowel as one or more L1 vowels, while discrimination accuracy across the non-native contrasts was moderate. In addition, cross-linguistic acoustic similarity largely predicted the classification of non-native sounds in terms of L1 categories, and both acoustic and perceptual similarity predicted the discrimination accuracy of all contrasts. In line with prior findings, these results demonstrate that acoustic and perceptual cues are reliable predictors of non-native contrast discrimination and that the UPM can generate accurate estimates of non-native listeners' discrimination patterns.
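A minimal sketch of how cross-linguistic acoustic similarity can be estimated with a machine-learning classifier: a discriminant model is trained on L1 (Cypriot Greek) vowel acoustics and then asked to categorize the non-native (Dutch) tokens. The feature set (F1, F2, duration), the linear discriminant classifier, and the random placeholder data are illustrative assumptions, not the algorithm or measurements used in the study.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Illustrative placeholders: rows are vowel tokens, columns are acoustic
# features such as F1 (Hz), F2 (Hz), and duration (ms).
rng = np.random.default_rng(0)
l1_features = rng.normal(size=(200, 3))              # Cypriot Greek tokens
l1_labels = rng.choice(["i", "e", "a", "o", "u"], size=200)
dutch_features = rng.normal(size=(50, 3))            # Dutch tokens

# Train on the L1 vowel space, then ask which L1 category each
# Dutch token most resembles acoustically.
lda = LinearDiscriminantAnalysis()
lda.fit(l1_features, l1_labels)

predicted_l1 = lda.predict(dutch_features)
posteriors = lda.predict_proba(dutch_features)
print(predicted_l1[:5], posteriors[0].round(2))
```

The posterior probabilities give a graded similarity score: a Dutch token whose probability mass is split across several L1 categories mirrors the finding that listeners classified some non-native vowels as more than one L1 vowel.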

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10584718
DOI: http://dx.doi.org/10.3758/s13414-023-02781-7

Similar Publications

Beta oscillations predict the envelope sharpness in a rhythmic beat sequence.

Sci Rep

January 2025

RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Forskningsveien 3A, Oslo, 0373, Norway.

Periodic sensory inputs entrain oscillatory brain activity, reflecting a neural mechanism that might be fundamental to temporal prediction and perception. Most environmental rhythms and patterns in human behavior, such as walking, dancing, and speech, do not, however, display strict isochrony but are instead quasi-periodic. Research has shown that neural tracking of speech is driven by modulations of the amplitude envelope, especially via sharp acoustic edges, which serve as prominent temporal landmarks.
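A minimal sketch of one way to quantify the sharpness of an amplitude envelope, assuming a Hilbert-transform envelope and using the peak rate of envelope change as the sharpness proxy; the function, parameters, and measure are illustrative, not the method of the cited study.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def envelope_sharpness(signal, fs, lp_cutoff=30.0):
    """Peak rate of change of the amplitude envelope (illustrative proxy).

    Sharp acoustic edges, the temporal landmarks mentioned above,
    show up as large values of the envelope's first derivative.
    """
    # Magnitude of the analytic signal gives the amplitude envelope
    envelope = np.abs(hilbert(signal))
    # Keep only slow amplitude modulations before differentiating
    b, a = butter(4, lp_cutoff / (fs / 2), btype="low")
    smooth_env = filtfilt(b, a, envelope)
    # Largest absolute slope, in amplitude units per second
    return np.max(np.abs(np.diff(smooth_env))) * fs

# A fast-decaying click-like tone vs. a slowly ramping tone
fs = 16000
t = np.arange(int(0.5 * fs)) / fs
sharp = np.exp(-40 * t) * np.sin(2 * np.pi * 440 * t)
smooth = np.sin(np.pi * t / 0.5) * np.sin(2 * np.pi * 440 * t)
print(envelope_sharpness(sharp, fs), envelope_sharpness(smooth, fs))
```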

Wide dynamic range compression (WDRC) and noise reduction both play important roles in hearing aids. WDRC provides level-dependent amplification so that the level of sound produced by the hearing aid falls between the hearing threshold and the highest comfortable level of the listener, while noise reduction attenuates ambient noise with the goal of improving intelligibility and listening comfort and reducing listening effort. In most current hearing aids, noise reduction and WDRC are implemented sequentially, but this may lead to distortion of the amplitude modulation patterns of both the speech and the noise.
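A minimal sketch of the level-dependent gain rule behind WDRC, modeled as a static compressor with a knee point and a compression ratio; the parameter values and function are illustrative assumptions, not a real fitting prescription.

```python
def wdrc_gain_db(input_level_db, knee_db=45.0, gain_below_knee_db=25.0,
                 compression_ratio=3.0):
    """Static WDRC gain (dB) for a given input level (dB SPL).

    Below the knee point the gain is constant; above it, each extra
    input dB yields only 1/compression_ratio dB at the output, so the
    applied gain shrinks as input level rises. Illustrative parameters.
    """
    if input_level_db <= knee_db:
        return gain_below_knee_db
    excess_db = input_level_db - knee_db
    return gain_below_knee_db - excess_db * (1.0 - 1.0 / compression_ratio)

# Soft sounds receive more gain than loud ones, keeping the output
# between the hearing threshold and the highest comfortable level.
for level in (40, 60, 80):
    print(f"{level} dB in -> {level + wdrc_gain_db(level):.1f} dB out")
```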

Phantom perceptions like tinnitus occur without any identifiable environmental or bodily source. The mechanisms and key drivers behind tinnitus are poorly understood. The dominant framework, which holds that tinnitus results from neural hyperactivity in the auditory pathway following hearing damage, has been difficult to investigate in humans and has reached its explanatory limits.

Distraction is ubiquitous in human environments. Distracting input is often predictable, but we do not understand when or how humans can exploit this predictability. Here, we ask whether predictable distractors are able to reduce uncertainty in updating the internal predictive model.

Perceptual learning of modulation filtered speech.

J Exp Psychol Hum Percept Perform

January 2025

School of Psychology, University of Sussex.

Human listeners have a remarkable capacity to adapt to severe distortions of the speech signal. Previous work indicates that perceptual learning of degraded speech reflects changes to sublexical representations, though the precise format of these representations has not yet been established. Inspired by the neurophysiology of auditory cortex, we hypothesized that perceptual learning involves changes to perceptual representations that are tuned to acoustic modulations of the speech signal.
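A minimal sketch of one way to produce modulation-filtered speech: decompose the signal into a Hilbert envelope and temporal fine structure, low-pass filter the envelope so only slow amplitude modulations survive, and recombine. The decomposition, filter order, and 4 Hz cutoff are illustrative assumptions, not the stimulus-generation procedure of the cited study.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def modulation_lowpass(speech, fs, mod_cutoff_hz=4.0):
    """Low-pass filter the amplitude modulations of a speech signal."""
    analytic = hilbert(speech)
    envelope = np.abs(analytic)                  # amplitude modulations
    fine_structure = np.cos(np.angle(analytic))  # carrier
    # Smooth the envelope: only modulations below mod_cutoff_hz remain
    b, a = butter(4, mod_cutoff_hz / (fs / 2), btype="low")
    slow_envelope = np.maximum(filtfilt(b, a, envelope), 0.0)
    return slow_envelope * fine_structure
```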
