Background/aims: Lexically guided perceptual learning in speech is the updating of linguistic categories based on novel input that is disambiguated by the structure of a recognized lexical item. We test the range of variation over which perceptual learning operates by presenting listeners with items ranging from subtle within-category deviations to fully remapped cross-category pronunciations.

Methods: Experiment 1 uses a lexically guided perceptual learning paradigm with words containing noncanonical /s/ realizations drawn from /s/–/ʃ/ continua corresponding to "typical," "ambiguous," "atypical," and "remapped" steps. Perceptual learning is tested in an /s/–/ʃ/ categorization task. Experiment 2 addresses listener sensitivity to variation in the exposure items using AX discrimination tasks.
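The abstract does not specify how discrimination performance was scored, but AX discrimination is conventionally analyzed with the signal-detection sensitivity index d′. A minimal sketch of that standard computation (the counts below are hypothetical, not the study's data):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Compute d' from AX-discrimination response counts.

    "Different" trials yield hits vs. misses; "same" trials yield
    false alarms vs. correct rejections. A log-linear correction
    (add 0.5 to each cell) avoids infinite z-scores when a rate
    would otherwise be exactly 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical listener: near-chance discrimination of ambiguous vs. typical tokens
print(d_prime(hits=12, misses=18, false_alarms=10, correct_rejections=20))
```

A d′ near zero would indicate that listeners cannot reliably tell two token types apart, which is how "not perceptually salient on their own" is typically operationalized.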

Results: Listeners in experiment 1 showed perceptual learning with the maximally ambiguous tokens. Performance in experiment 2 suggests that the tokens that elicited the most perceptual learning were not perceptually salient on their own.

Conclusion: These results demonstrate that perceptual learning is enhanced with maximally ambiguous stimuli. Excessively atypical pronunciations show attenuated perceptual learning, while typical pronunciations show no evidence for perceptual learning. AX discrimination illustrates that the maximally ambiguous stimuli are not perceptually unique. Together, these results suggest that perceptual learning relies on an interplay between confidence in phonetic and lexical predictions and category typicality.
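Perceptual learning in an /s/–/ʃ/ categorization task is typically measured as a shift in the category boundary, the continuum step at which "s" responses cross 50%. A minimal sketch of that boundary estimate, using hypothetical response proportions rather than the study's data:

```python
def category_boundary(steps, prop_s):
    """Estimate the /s/-/ʃ/ category boundary: the continuum step at
    which the proportion of "s" responses crosses 50%, found by
    linear interpolation between adjacent steps."""
    pairs = list(zip(steps, prop_s))
    for (x0, y0), (x1, y1) in zip(pairs, pairs[1:]):
        if (y0 - 0.5) * (y1 - 0.5) <= 0 and y0 != y1:
            return x0 + (0.5 - y0) * (x1 - x0) / (y1 - y0)
    return None  # no crossing found

# Hypothetical 7-step continuum: proportion "s" responses per step
steps = list(range(1, 8))
pre   = [0.98, 0.95, 0.85, 0.55, 0.20, 0.05, 0.02]  # before exposure
post  = [0.99, 0.97, 0.92, 0.75, 0.45, 0.15, 0.03]  # after exposure to atypical /s/

shift = category_boundary(steps, post) - category_boundary(steps, pre)
print(f"boundary shift: {shift:.2f} steps toward /ʃ/")
```

A positive shift means the boundary moved toward the /ʃ/ end after exposure, so more ambiguous tokens are categorized as /s/, the signature of lexically guided perceptual learning for an atypical /s/.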


Source: http://dx.doi.org/10.1159/000494929

Publication Analysis

Top Keywords

perceptual learning (44); maximally ambiguous (12); perceptual (11); learning (11); lexically guided (8); guided perceptual (8); listeners experiment (8); ambiguous stimuli (8); goldilocks zone (4); zone perceptual (4)

Similar Publications

Background: Alterations in sensory perception, a core phenotype of autism, are attributed to imbalanced integration of sensory information and prior knowledge during perceptual statistical (Bayesian) inference. This hypothesis has gained momentum in recent years, partly because it can be implemented both at the computational level, as in Bayesian perception, and at the level of canonical neural microcircuitry, as in predictive coding. However, empirical investigations have yielded conflicting results, and evidence remains limited.


Listeners can use both lexical context (i.e., lexical knowledge activated by the word itself) and lexical predictions based on the content of a preceding sentence to adjust their phonetic categories to speaker idiosyncrasies.


The goal of the present investigation was to perform a registered replication of Jones and Macken's (1995b) study, which showed that the segregation of a sequence of sounds to distinct locations reduced the disruptive effect on serial recall. That study thereby postulated an intriguing connection between auditory stream segregation and the cognitive mechanisms underlying the irrelevant speech effect. Specifically, it found that a sequence of changing utterances was less disruptive in stereophonic presentation, which allowed each auditory object (letter) to be allocated to a unique location (right ear, left ear, center), than when the same sounds were played monophonically.


Improved Consistency of Lung Nodule Categorization in CT Scans with Heterogeneous Slice Thickness by Deep Learning-Based 3D Super-Resolution.

Diagnostics (Basel)

December 2024

Department of Applied Bioengineering, Graduate School of Convergence Science and Technology, Seoul National University, Seoul 08826, Republic of Korea.

Accurate volumetric assessment of lung nodules is an essential element of low-dose lung cancer screening programs. Current guidance recommends applying specific thresholds to measured nodule volume to guide subsequent clinical decisions. In reality, however, CT scans often have heterogeneous slice thickness, which is known to adversely impact the accuracy of nodule volume assessment.


Generation of high-resolution MPRAGE-like images from 3D head MRI localizer (AutoAlign Head) images using a deep learning-based model.

Jpn J Radiol

January 2025

Department of Diagnostic Imaging and Nuclear Medicine, Graduate School of Medicine, Kyoto University, 54 Shogoin Kawahara-Cho, Sakyo-Ku, Kyoto, 606-8507, Japan.

Purpose: Magnetization prepared rapid gradient echo (MPRAGE) is a useful three-dimensional (3D) T1-weighted sequence, but is not a priority in routine brain examinations. We hypothesized that converting 3D MRI localizer (AutoAlign Head) images to MPRAGE-like images with deep learning (DL) would be beneficial for diagnosing and researching dementia and neurodegenerative diseases. We aimed to establish and evaluate a DL-based model for generating MPRAGE-like images from MRI localizers.

