Objective: This study investigated the possible impact of simulated hearing loss on speech perception in Spanish-English bilingual children. To avoid a confound between individual differences in hearing-loss configuration and linguistic experience, threshold-elevating noise simulating a mild-to-moderate sloping hearing loss was used with normal-hearing listeners. The hypotheses were that (1) bilingual children can perform similarly to English-speaking monolingual peers in quiet, and (2) for both bilingual and monolingual children, noise and simulated hearing loss would have detrimental impacts consistent with their acoustic characteristics.
A multi-category psychometric function (MCPF) is introduced for modeling the stimulus-level dependence of perceptual categorical probability distributions. The MCPF is described in the context of individual-listener categorical loudness scaling (CLS) data. During a CLS task, listeners select the loudness category that best corresponds to their perception of the presented stimulus.
Loudness is a suprathreshold percept that provides insight into the status of the entire auditory pathway. Individuals with matched thresholds can show individual variability in their loudness perception that is currently not well understood. As a means to analyze and model listener variability, we introduce the multi-category psychometric function (MCPF), a novel representation for categorical data that fully describes the probabilistic relationship between stimulus level and categorical-loudness perception.
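The core idea of an MCPF, a probability distribution over ordered loudness categories at each stimulus level, can be illustrated with a toy cumulative-logistic model. This is a minimal sketch, not the authors' fitted model: the cutpoints, slope, and five-category layout are illustrative assumptions.

```python
import numpy as np

def mcpf(levels_db, cutpoints, slope=0.15):
    """Toy multi-category psychometric function (MCPF) sketch.

    For each stimulus level (dB), returns a probability distribution over
    ordered loudness categories, derived from cumulative logistic curves
    placed at the given cutpoints. Cutpoints and slope are illustrative,
    not fitted CLS values.
    """
    levels = np.asarray(levels_db, dtype=float)[:, None]   # (n_levels, 1)
    cuts = np.asarray(cutpoints, dtype=float)[None, :]     # (1, n_cuts)
    # P(category <= k | level): increases across cutpoints for a fixed level
    cum = 1.0 / (1.0 + np.exp(-slope * (cuts - levels)))
    # Pad with 0 and 1, then difference to get per-category probabilities
    n = levels.shape[0]
    cum = np.hstack([np.zeros((n, 1)), cum, np.ones((n, 1))])
    return np.diff(cum, axis=1)                            # rows sum to 1

probs = mcpf([20, 50, 80], cutpoints=[30, 50, 70, 90])
```

Each row of `probs` is one categorical distribution; as level rises, probability mass shifts toward the louder categories, which is the stimulus-level dependence the MCPF is meant to capture.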
This study describes procedures for constructing equal-loudness contours (ELCs) in units of phons from categorical loudness scaling (CLS) data and characterizes the impact of hearing loss on these estimates of loudness. Additionally, this study developed a metric, level-dependent loudness loss, which uses CLS data to specify the deviation from normal loudness perception at various loudness levels and as a function of frequency for an individual listener with hearing loss. CLS measurements were made in 87 participants with hearing loss and 61 participants with normal hearing.
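One plausible form of such a level-dependent loudness-loss metric is the extra stimulus level (in dB) a listener with hearing loss needs to reach each loudness category relative to a normal-hearing reference. The sketch below assumes this form; the function name, inputs, and numbers are illustrative, not the paper's definition.

```python
import numpy as np

def loudness_loss(levels_hl, cats_hl, levels_nh, cats_nh, categories):
    """Sketch of a level-dependent loudness-loss metric (assumed form).

    For each target loudness category, interpolate the stimulus level a
    hearing-impaired (HL) listener and a normal-hearing (NH) reference
    need to reach that category; the dB difference is the loss at that
    loudness. Category arrays must be increasing for np.interp.
    """
    lvl_hl = np.interp(categories, cats_hl, levels_hl)
    lvl_nh = np.interp(categories, cats_nh, levels_nh)
    return lvl_hl - lvl_nh

# Illustrative CLS-style data: category index vs. level in dB
loss = loudness_loss(levels_hl=[30, 45, 60, 75, 90], cats_hl=[0, 1, 2, 3, 4],
                     levels_nh=[0, 20, 40, 60, 80], cats_nh=[0, 1, 2, 3, 4],
                     categories=[1, 2, 3])
```

A loss that shrinks toward higher loudness categories would reflect the loudness recruitment commonly reported for sensorineural hearing loss.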
The consonant recognition of 17 ears with sensorineural hearing loss is evaluated for 14 consonants /p, t, k, f, s, ʃ, b, d, g, v, z, ʒ, m, n/+/a/, under four speech-weighted noise conditions (0, 6, 12 dB SNR, quiet). One male and one female talker were chosen for each consonant, resulting in 28 total consonant-vowel test tokens. For a given consonant, tokens by different talkers were observed to differ systematically, in both their robustness to noise and their resulting confusion groups.
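Confusion groups in studies like this are typically read off a consonant confusion matrix tallied per condition. A minimal sketch of that bookkeeping, with ASCII placeholders "S" and "Z" standing in for /ʃ/ and /ʒ/ (an assumption for code readability, not the study's notation):

```python
import numpy as np

# 14-consonant set from the study; "S" = /ʃ/, "Z" = /ʒ/ (ASCII stand-ins)
CONSONANTS = ["p", "t", "k", "f", "s", "S", "b", "d", "g", "v", "z", "Z", "m", "n"]

def confusion_matrix(presented, responded):
    """Tally a consonant confusion matrix from paired trial lists.

    Rows index the presented consonant, columns the listener's response;
    one such matrix per SNR condition. Row-normalizing the counts gives
    per-consonant recognition and confusion probabilities.
    """
    idx = {c: i for i, c in enumerate(CONSONANTS)}
    m = np.zeros((len(CONSONANTS), len(CONSONANTS)), dtype=int)
    for p, r in zip(presented, responded):
        m[idx[p], idx[r]] += 1
    return m

m = confusion_matrix(["p", "t", "p"], ["p", "k", "b"])
```

Off-diagonal mass that clusters (e.g., /p/-/t/-/k/ responses for plosive tokens) is what defines a confusion group.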
In a previous study on plosives, the 3-Dimensional Deep Search (3DDS) method for the exploration of the necessary and sufficient cues for speech perception was introduced (Li et al., 2010).
J Comput Neurosci, August 2010
In this paper, we develop a dynamical point process model for how complex sounds are represented by neural spiking in auditory nerve fibers. Although many models have been proposed, our point process model is the first to capture elements of spontaneous rate, refractory effects, frequency selectivity, phase locking at low frequencies, and short-term adaptation, all within a compact parametric approach. Using a generalized linear model for the point process conditional intensity, driven by extrinsic covariates, previous spiking, and an input-dependent charging/discharging capacitor model, our approach robustly captures the aforementioned features on datasets recorded from the chinchilla auditory nerve in response to speech inputs.
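The GLM conditional-intensity structure described above, a log-linear combination of a stimulus filter and a spike-history filter, can be sketched as follows. This is a bare-bones illustration of the general technique, not the paper's full model: it omits the capacitor component, and all weights are made-up values.

```python
import numpy as np

def conditional_intensity(stim, spikes, base, stim_w, hist_w):
    """Sketch of a GLM conditional intensity for auditory-nerve spiking.

    lambda[t] = exp(base + stimulus filter + spike-history filter).
    Negative history weights suppress lambda just after a spike,
    modeling refractoriness. All parameters here are illustrative.
    """
    T = len(stim)
    lam = np.zeros(T)
    for t in range(T):
        # Convolve stimulus with the stimulus filter (causal)
        s = sum(stim_w[k] * stim[t - k]
                for k in range(len(stim_w)) if t - k >= 0)
        # Convolve past spiking with the history filter (strictly causal)
        h = sum(hist_w[k] * spikes[t - 1 - k]
                for k in range(len(hist_w)) if t - 1 - k >= 0)
        lam[t] = np.exp(base + s + h)
    return lam

lam = conditional_intensity(stim=[1, 1, 1, 1, 1],
                            spikes=[0, 1, 0, 0, 0],
                            base=-2.0, stim_w=[0.5], hist_w=[-3.0])
```

With the negative history weight, the intensity at the bin after the spike is suppressed relative to the bin before it, the refractory effect the model is meant to capture; fitting such a model maximizes the point-process likelihood over the filter weights.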