Causality is a fundamental property of physical systems and dictates that the time-domain impulse response characterizing any causal system must be one-sided. However, several papers have reported two-sided impulse responses of ear-canal reflectance and ear-probe source parameters when these are synthesized using the inverse discrete Fourier transform (IDFT) of a corresponding band-limited numerical frequency transfer function. Judging from the literature on ear-canal reflectance, the significance and source of these seemingly non-physical negative-time components remain largely unclear.
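As an illustration of the effect described above, the following is a minimal numpy sketch, not taken from the paper: it builds a causal, reflectance-like impulse response, band-limits its transfer function as a real probe measurement would, and shows that the IDFT then carries energy at negative-time samples. The sample rate, cutoff frequency, and reflection model are illustrative assumptions.

import numpy as np

fs = 48000                       # assumed sample rate (Hz)
n = 1024
t = np.arange(n) / fs

# Causal, reflectance-like impulse response: a delayed, decaying reflection.
h = np.zeros(n)
delay = 60                       # assumed round-trip delay in samples
h[delay:] = 0.6 * np.exp(-2000.0 * t[:n - delay])

H = np.fft.rfft(h)               # full-band transfer function
f = np.fft.rfftfreq(n, 1 / fs)

# Band-limit as a measurement would (probe response assumed to vanish above 10 kHz).
H_bl = np.where(f <= 10e3, H, 0.0)
h_bl = np.fft.irfft(H_bl, n)     # IDFT of the band-limited transfer function

# With circular (DFT) indexing, the last samples represent negative time.
neg = np.sum(h_bl[-200:] ** 2) / np.sum(h_bl ** 2)
print(f"fraction of energy at negative times: {neg:.4f}")

In this toy example the negative-time components arise solely from the abrupt band limitation of an otherwise causal response; whether that mechanism accounts for the measurements reported in the literature is the question the paper addresses.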
Micromixers are critical components of the lab-on-a-chip and micro total analysis systems technology found in micro-electro-mechanical systems. In general, the mixing performance of micromixers is determined by characterizing the mixing time of a system, for example the time or number of circulations and vibrations guided by tracers (i.e.
J Acoust Soc Am, December 2017
The goal of this study is to provide a metric for evaluating a given hearing-aid insertion gain using a consonant-recognition-based measure. The basic question addressed is how treatment impacts phone recognition at the token level, relative to a flat insertion gain, at the most comfortable level (MCL). These tests are directed at fine-tuning a treatment, with the ultimate goal of improving speech perception, and at identifying when a hearing-level-based gain treatment degrades phone recognition.
J Acoust Soc Am, March 2017
Consonant-vowel (CV) perception experiments provide valuable insights into how humans process speech. Here, two CV identification experiments were conducted in a group of hearing-impaired (HI) listeners, using 14 consonants followed by the vowel /ɑ/. The CVs were presented in quiet and with added speech-shaped noise at signal-to-noise ratios of 0, 6, and 12 dB.
This article reviews the development of metamaterials (MM), starting from Newton's discovery of the wave equation and ending with a discussion of the need for a technical taxonomy (classification) of these materials, along with a more precise definition of metamaterials. It is intended to provide a technical definition of metamaterials based on a historical perspective. The evolution of MMs began with the discovery of the wave equation, traceable back to Newton's calculation of the speed of sound.
This note comments on the observations of Bernier et al. (2016) regarding errors in Appendix A of Kim and Allen (2013). We acknowledge that the equations in the Appendix are in error, but wish to point out that these equations were not actually used for our analysis.
Objectives: Wideband acoustic immittance (WAI) measurements are capable of quantifying middle ear performance over a wide range of frequencies relevant to human hearing. Static pressure in the middle ear cavity affects sound transmission to the cochlea, but few datasets exist to quantify the relationship between middle ear transmission and the static pressure. In this study, WAI measurements of normal ears are analyzed in both negative middle ear pressure (NMEP) and ambient middle ear pressure (AMEP) conditions, with a focus on the effects of NMEP in individual ears.
Developmental exposure to polychlorinated biphenyls (PCBs) causes auditory deficits. Thus, we recently conducted a study to investigate whether developmental PCB exposure would exacerbate noise-induced hearing loss in adulthood. Unexpectedly, some PCB-exposed rats exhibited seizure-like behaviors when exposed to loud noise.
J Speech Lang Hear Res, December 2014
Purpose: A critical issue in assessing speech recognition involves understanding the factors that cause listeners to make errors. Models like the articulation index show that average error decreases log-linearly with increasing signal-to-noise ratio (SNR). The authors investigated (a) whether this log-linear relationship holds across consonants and for individual tokens and (b) what accounts for differences in error rates at the across- and within-consonant levels.
Objectives: Distortion-product otoacoustic emissions (DPOAEs) collected after sound pressure level (SPL) calibration are susceptible to standing waves that affect measurements at the plane of the probe microphone due to overlap of incident and reflected waves. These standing-wave effects can be as large as 20 dB, and may affect frequencies both above and below 4 kHz. It has been shown that forward pressure level (FPL) calibration minimizes standing-wave effects by isolating the forward-propagating component of the stimulus.
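For context, and in notation that is ours rather than the abstract's, FPL calibration is usually described by decomposing the total pressure at the probe into forward- and backward-propagating components using the pressure reflectance R measured at the probe:

\[ P_{\text{total}} = P_{+} + P_{-} = (1 + R)\,P_{+}, \qquad P_{+} = \frac{P_{\text{total}}}{1 + R}, \qquad \text{FPL} = 20 \log_{10}\!\left| \frac{P_{+}}{P_{0}} \right|, \]

where P_0 = 20 µPa. The usual rationale is that P_+ excludes the reflected wave and is therefore insensitive to the standing-wave nulls that contaminate SPL at the probe location.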
The consonant recognition of 17 ears with sensorineural hearing loss is evaluated for 14 consonants /p, t, k, f, s, ʃ, b, d, g, v, z, ʒ, m, n/ + /a/, under four speech-weighted noise conditions (0, 6, 12 dB SNR, quiet). One male and one female talker were chosen for each consonant, resulting in 28 total consonant-vowel test tokens. For a given consonant, tokens from different talkers were observed to differ systematically in their robustness to noise and/or their resulting confusion groups.
Children with chronic otitis media (OM) often have conductive hearing loss, which results in communication difficulties and requires surgical treatment. Recent studies have provided clinical evidence that there is a one-to-one correspondence between chronic OM and the presence of a bacterial biofilm behind the tympanic membrane (TM). Here we investigate the acoustic effects of bacterial biofilms, confirmed using optical coherence tomography (OCT), in adult ears.
This study characterizes middle ear complex acoustic reflectance (CAR) and impedance by fitting poles and zeros to real-ear measurements. The goal of this work is to establish a quantitative connection between pole-zero locations and the underlying physical properties of CAR data. Most previous studies have analyzed CAR magnitude; while the magnitude accounts for reflected power, it does not encode latency information.
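The fitting procedure itself is not given in this summary; as a rough sketch of one generic way to obtain poles and zeros from measured complex response data, a linearized (Levy-type) least-squares rational fit can be written in a few lines of numpy. The function name, model orders, and the use of the Laplace variable s = jω are our assumptions, not the authors' method.

import numpy as np

def rational_fit(w, H, nb, na):
    """Fit H(jw) ~ B(jw)/A(jw) by Levy's linearized least squares.
    w: angular frequencies; H: complex response samples;
    nb, na: numerator / denominator polynomial orders."""
    s = 1j * w
    B_cols = np.vander(s, nb + 1, increasing=True)            # 1, s, s^2, ...
    A_cols = np.vander(s, na + 1, increasing=True)[:, 1:]      # s, s^2, ... (a0 fixed to 1)
    M = np.hstack([B_cols, -H[:, None] * A_cols])
    # Stack real and imaginary parts so the unknown coefficients stay real.
    Mr = np.vstack([M.real, M.imag])
    rr = np.concatenate([H.real, H.imag])
    coef, *_ = np.linalg.lstsq(Mr, rr, rcond=None)
    b = coef[:nb + 1]                                          # ascending powers of s
    a = np.concatenate([[1.0], coef[nb + 1:]])
    return b, a

# Zeros and poles then follow from the polynomial roots (np.roots expects
# descending order): zeros = np.roots(b[::-1]); poles = np.roots(a[::-1]).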
Models for acoustic transducers, such as loudspeakers, mastoid bone-drivers, hearing-aid receivers, etc., are critical elements in many acoustic applications. Such transducers are commonly represented by two-port models that convert between acoustic and electromagnetic signals.
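As background, and in symbols of our choosing rather than the article's, a transducer two-port relates the electrical port variables (voltage E, current I) to the acoustic port variables (pressure P, volume velocity U) through a frequency-dependent transmission matrix:

\[ \begin{pmatrix} E \\ I \end{pmatrix} = \begin{pmatrix} A(\omega) & B(\omega) \\ C(\omega) & D(\omega) \end{pmatrix} \begin{pmatrix} P \\ U \end{pmatrix}. \]

Equivalent impedance-matrix or Hunt-parameter forms are also common; which parameterization the article adopts is not stated in this summary.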
In a previous study on plosives, the 3-Dimensional Deep Search (3DDS) method for exploring the necessary and sufficient cues for speech perception was introduced [Li et al. (2010). J.
J Acoust Soc Am, April 2012
Studies on consonant perception under noise conditions typically describe the average consonant error as exponential in the Articulation Index (AI). While this AI formula nicely fits the average error over all consonants, it does not fit the error for any consonant at the utterance level. This study analyzes the error patterns of six stop consonants /p, t, k, b, d, g/ with four vowels (/ɑ/, /ɛ/, /ɪ/, /æ/), at the individual consonant (i.
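One common statement of this exponential relationship, written in our notation rather than quoted from the study, is

\[ \bar{e}(\mathrm{AI}) = e_{\min}^{\mathrm{AI}}, \qquad \log \bar{e} = \mathrm{AI}\cdot\log e_{\min}, \]

where e_min is the average error at AI = 1. Because the AI is approximately linear in SNR over its working range, the average log error is linear in SNR (the "log-linear" behavior noted above), even though individual consonants and tokens can deviate strongly from this average trend.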
A method is described for solving the inverse problem of determining the profile of an acoustic horn when time-domain reflectance (TDR) is known only at the entrance. The method involves recasting Webster's horn equation in terms of forward and backward propagating wave variables. An essential feature of this method is a requirement that the backward propagating wave be continuous at the wave-front at all locations beyond the entrance.
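For reference, Webster's horn equation for the pressure p(x, t) in a horn with area function A(x) and sound speed c can be written as below; the forward/backward recasting used in the paper is only summarized here, and the notation is ours.

\[ \frac{1}{A(x)} \frac{\partial}{\partial x}\!\left( A(x) \frac{\partial p}{\partial x} \right) = \frac{1}{c^{2}} \frac{\partial^{2} p}{\partial t^{2}}, \qquad p = p^{+} + p^{-}, \]

with p^{+} and p^{-} the forward- and backward-propagating wave variables, the latter being the quantity whose continuity at the wave-front the method enforces.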
In the 1970s and 1980s, a number of papers explored the role of the transitional and burst features in consonant-vowel context. These papers left unresolved the relative importance of these two acoustic cues. This research takes advantage of refined signal processing methods, allowing for the visualization and modification of acoustic details.
J Speech Lang Hear Res, April 2012
Purpose: Although poorer understanding of speech in noise by hearing-impaired (HI) listeners is known not to be directly related to the audiometric hearing threshold, HT(f), grouping HI listeners by HT(f) is widely practiced. In this article, the relationship between consonant recognition and HT(f) is considered over a range of signal-to-noise ratios (SNRs).
Method: Confusion matrices (CMs) from 25 HI ears were generated in response to 16 consonant-vowel syllables presented at 6 different SNRs.
Synthetic speech has been widely used in the study of speech cues. A serious disadvantage of this method is that it requires prior knowledge about the cues to be identified in order to synthesize the speech. Incomplete or inaccurate hypotheses about the cues often lead to speech sounds of low quality.
Middle ear models have been successfully developed for many years. Most of these are implemented in the frequency domain, where the physical equations are more easily derived. This is problematic, however, when it comes to modeling non-linear phenomena, especially in the cochlea, and because a frequency-domain implementation may be less intuitive.
This paper presents a compact graphical method for comparing the performance of individual hearing impaired (HI) listeners with that of an average normal hearing (NH) listener on a consonant-by-consonant basis. This representation, named the consonant loss profile (CLP), characterizes the effect of a listener's hearing loss on each consonant over a range of performance. The CLP shows that the consonant loss, which is the signal-to-noise ratio (SNR) difference at equal NH and HI scores, is consonant-dependent and varies with the score.
The multiband product rule, also known as band-independence, is a basic assumption of the articulation index and its extension, the speech intelligibility index. Fletcher previously showed its validity for a balanced mix of 20% consonant-vowel (CV), 20% vowel-consonant (VC), and 60% consonant-vowel-consonant (CVC) sounds. This study repeats Miller and Nicely's version of the hi-/lo-pass experiment with minor changes to study band-independence for the 16 Miller-Nicely consonants.
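In the hi-/lo-pass form of the experiment, band-independence amounts to the requirement below, written in our notation; it is a restatement of the multiband product rule rather than a result quoted from the study:

\[ e_{\text{wideband}} = e_{\text{lo}}(f_c)\, e_{\text{hi}}(f_c) \quad \text{for every cutoff } f_c, \]

where e_lo(f_c) and e_hi(f_c) are the recognition errors for low-pass and high-pass filtered speech sharing the cutoff f_c. More generally, for K independent bands the rule posits e = \prod_{k=1}^{K} e_k.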
Quantifying how the sound delivered to the ear canal relates to hearing threshold has historically relied on acoustic calibration in physical assemblies with an input impedance intended to match the human ear (e.g., a Zwislocki coupler).
The classic [MN55] confusion matrix experiment (16 consonants, white noise masker) was repeated using computerized procedures similar to those of Phatak and Allen (2007) ["Consonant and vowel confusions in speech-weighted noise," J. Acoust.