Elephants have a unique auditory system that is larger than that of any other terrestrial mammal. To quantify the impact of larger middle ear (ME) structures, we measured 3D ossicular motion and ME sound transmission in cadaveric temporal bones from both African and Asian elephants in response to air-conducted (AC) tonal pressure stimuli presented in the ear canal (P_EC). Results were compared to similar measurements in humans.
Textbook descriptions of primary sensory cortex (PSC) revolve around single neurons' representation of low-dimensional sensory features, such as visual object orientation in primary visual cortex (V1), location of somatic touch in primary somatosensory cortex (S1), and sound frequency in primary auditory cortex (A1). Typically, studies of PSC measure neurons' responses along only a few (one or two) stimulus and/or behavioral dimensions. However, real-world stimuli usually vary along many feature dimensions and behavioral demands change constantly.
Selective attention is necessary to sift through, form a coherent percept of, and make behavioral decisions on the vast amount of information present in most sensory environments. How and where selective attention is employed in cortex, and how this perceptual information then informs the relevant behavioral decisions, are still not well understood. Studies probing selective attention and decision-making in visual cortex have been enlightening as to how sensory attention might work in that modality; whether or not similar mechanisms are employed in auditory attention is not yet clear.
In macaques, the middle lateral auditory cortex (ML) is a belt region adjacent to the primary auditory cortex (A1) and is believed to be at a hierarchically higher level. Although ML single-unit responses have been studied for several auditory stimuli, the ability of ML cells to encode amplitude modulation (AM), an ability that has been widely studied in A1, has not yet been characterized. Here, we compared the responses of A1 and ML neurons to amplitude-modulated (AM) noise in awake macaques.
An anatomically based three-dimensional finite-element human middle-ear (ME) model is used to test the sensitivity of ME sound transmission to tympanic-membrane (TM) material properties. The baseline properties produce responses comparable to published measurements of ear-canal input impedance and power reflectance, stapes velocity normalized by ear-canal pressure (P_EC), and middle-ear pressure gain (MEG).
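For readers unfamiliar with power reflectance, it is derived from the ear-canal input impedance mentioned above. A minimal Python sketch, assuming a uniform canal of placeholder radius (the values are illustrative and are not the model's fitted parameters):

    import numpy as np

    rho = 1.21            # air density, kg/m^3
    c = 343.0             # speed of sound, m/s
    a = 3.75e-3           # assumed ear-canal radius, m (placeholder)
    Z0 = rho * c / (np.pi * a**2)   # characteristic impedance of the canal

    def power_reflectance(Z_in):
        """Fraction of incident acoustic power reflected at the eardrum,
        given a complex ear-canal input impedance Z_in."""
        R = (Z_in - Z0) / (Z_in + Z0)   # pressure reflection coefficient
        return np.abs(R) ** 2

    # Example with an arbitrary complex input impedance:
    print(power_reflectance(2.0 * Z0 + 0.5j * Z0))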
Sensory environments often contain an overwhelming amount of information, with both relevant and irrelevant information competing for neural resources. Feature attention mediates this competition by selecting the sensory features needed to form a coherent percept. How attention affects the activity of populations of neurons to support this process is poorly understood because population coding is typically studied through simulations in which one sensory feature is encoded without competition.
The ability to segregate simultaneous sound sources based on their spatial locations is an important aspect of auditory scene analysis. While the role of sound azimuth in segregation is well studied, the contribution of sound elevation remains unknown. Although previous studies in humans suggest that elevation cues alone are not sufficient to segregate simultaneous broadband sources, the current study demonstrates that they can suffice.
We recorded from middle lateral belt (ML) and primary (A1) auditory cortical neurons while animals discriminated amplitude-modulated (AM) sounds and also while they sat passively. Engagement in AM discrimination improved ML and A1 neurons' ability to discriminate AM with both firing rate and phase-locking; however, task engagement affected neural AM discrimination differently in the two fields. The results suggest that these two areas utilize different AM coding schemes: a "single mode" in A1 that relies on increased activity for AM relative to unmodulated sounds and a "dual-polar mode" in ML that uses both increases and decreases in neural activity to encode modulation.
We recorded from middle-lateral (ML) and primary (A1) auditory cortex while macaques discriminated amplitude-modulated (AM) noise from unmodulated noise. Compared with A1, ML had a higher proportion of neurons that encoded increasing AM depth by decreasing their firing rates ("decreasing" neurons), particularly with responses that were not synchronized to the modulation. Choice probability (CP) analysis revealed that A1 and ML activity were different during the first half of the test stimulus.
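Choice probability is conventionally computed as the area under an ROC curve comparing a neuron's firing-rate distributions on trials sorted by the animal's choice. A minimal sketch, assuming hypothetical rate vectors for two choice categories (not data from this study):

    import numpy as np

    def choice_probability(rates_choice_a, rates_choice_b):
        """ROC area (= normalized Mann-Whitney U): how well an ideal observer
        could predict the animal's choice from a single trial's firing rate."""
        a = np.asarray(rates_choice_a, float)
        b = np.asarray(rates_choice_b, float)
        diff = a[:, None] - b[None, :]
        # Count pairs where choice-A rate exceeds choice-B rate; ties count half.
        return (np.sum(diff > 0) + 0.5 * np.sum(diff == 0)) / (a.size * b.size)

    # Hypothetical firing rates (spikes/s) sorted by the monkey's report:
    print(choice_probability([12, 15, 9, 14], [8, 11, 7, 10]))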
The effect of attention on single-neuron responses in the auditory system is unresolved. We found that when monkeys discriminated temporally amplitude-modulated (AM) sounds from unmodulated sounds, primary auditory cortical (A1) neurons discriminated those sounds better than when the monkeys were not discriminating them. This was observed for both average firing rate and vector strength (VS), a measure of how well neurons' spikes follow the stimulus' temporal modulation.
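Vector strength is typically computed by mapping each spike onto the phase of the modulation cycle and averaging the resulting unit vectors; a value of 1 indicates perfect phase locking and 0 indicates none. A minimal sketch with hypothetical spike times:

    import numpy as np

    def vector_strength(spike_times_s, mod_freq_hz):
        """Vector strength of spike times relative to the AM cycle."""
        phases = 2 * np.pi * mod_freq_hz * np.asarray(spike_times_s)
        return np.abs(np.mean(np.exp(1j * phases)))

    # Hypothetical spike times (s) for a 10 Hz modulation:
    print(vector_strength([0.012, 0.115, 0.211, 0.308, 0.512], 10.0))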
Amplitude modulation (AM) is a common feature of natural sounds, and its detection is biologically important. Even though most sounds are not fully modulated, the majority of physiological studies have focused on fully modulated (100% modulation depth) sounds. We presented AM noise at a range of modulation depths to awake macaque monkeys while recording from neurons in primary auditory cortex (A1).
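Sinusoidally amplitude-modulated noise at a given depth can be generated by multiplying a noise carrier by the envelope 1 + depth * sin(2*pi*fm*t). A minimal sketch with placeholder parameters (not the study's actual stimulus code):

    import numpy as np

    def am_noise(duration_s, fs, mod_freq_hz, depth, rng=None):
        """Sinusoidally amplitude-modulated Gaussian noise.
        depth = 0 gives unmodulated noise; depth = 1 gives 100% modulation."""
        rng = np.random.default_rng() if rng is None else rng
        t = np.arange(int(duration_s * fs)) / fs
        carrier = rng.standard_normal(t.size)
        envelope = 1.0 + depth * np.sin(2 * np.pi * mod_freq_hz * t)
        return envelope * carrier

    stim = am_noise(0.4, 44100, 15.0, 0.5)   # 400 ms, 15 Hz AM, 50% depth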
Recent evidence is reshaping the view of primary auditory cortex (A1) from a unisensory area to one more involved in dynamically integrating multisensory- and task-related information. We found A1 single- (SU) and multiple-unit (MU) activity correlated with macaques' choices in an amplitude modulation (AM) discrimination task. Animals were trained to discriminate AM noise from unmodulated noise by releasing a lever for AM noise and holding down the lever for unmodulated noise.
Previous observations show that humans outperform non-human primates on some temporally based auditory discrimination tasks, suggesting there are species differences in the proficiency of auditory temporal processing among primates. To further resolve these differences, we compared the abilities of rhesus macaques and humans to detect sine-amplitude modulation (AM) of a broad-band noise carrier as a function of both AM frequency (2.5 Hz-2 kHz) and signal duration (50-800 ms), under similar testing conditions.
Most research on auditory cortical neurons has concerned the effects of rather simple stimuli, such as pure tones or broad-band noise, or the modulation of a single acoustic parameter. Extending these findings to feature coding in more complex stimuli such as natural sounds may be difficult, however. Generalizing results from the simple to the more complex case may be complicated by non-linear interactions occurring between multiple, simultaneously varying acoustic parameters in complex sounds.
Conflicting results have led to different views about how temporal modulation is encoded in primary auditory cortex (A1). Some studies find a substantial population of neurons that change firing rate without synchronizing to temporal modulation, whereas other studies fail to see these nonsynchronized neurons. As a result, the role and scope of synchronized temporal and nonsynchronized rate codes in AM processing in A1 remain unresolved.
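One common way to classify a response as synchronized is the Rayleigh statistic, 2*n*VS^2, evaluated at the modulation frequency; the criterion used in the studies above is not stated here, so the sketch below is only illustrative:

    import numpy as np

    def rayleigh_statistic(spike_times_s, mod_freq_hz):
        """Rayleigh statistic 2*n*VS^2; large values indicate significant
        phase locking (a synchronized response) to the modulation frequency."""
        phases = 2 * np.pi * mod_freq_hz * np.asarray(spike_times_s)
        vs = np.abs(np.mean(np.exp(1j * phases)))
        return 2 * phases.size * vs**2

    # A frequently cited cutoff is about 13.8 (p < 0.001); a neuron whose rate
    # changes with AM but whose statistic stays below the cutoff would be
    # treated as a nonsynchronized, rate-coding neuron.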
Electromagnetic floating-mass transducers for implantable middle-ear hearing devices (IMEHDs) afford the advantages of a simple surgical implantation procedure and easy attachment to the ossicles. However, their shortcomings include susceptibility to interference from environmental electromagnetic fields, relatively high current consumption, and a limited ability to output high-frequency vibrations. To address these limitations, a piezoelectric floating-mass transducer (PFMT) has recently been developed.
View Article and Find Full Text PDFJ Acoust Soc Am
January 2008
Middle-ear circuit model parameters are selected to produce overall magnitude and phase agreement with pressure-to-stapes-velocity transfer function measurements made on 16 human temporal bones, up to approximately 12 kHz. The circuit model, which was previously used for the cat, represents the tympanic membrane (TM) as a distributed-parameter acoustic transmission line, and the ossicular chain and cochlea as a network of lumped circuit elements. For some ears the TM transmission line primarily affects the magnitude of the response, while for others it primarily affects the phase.
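As a rough illustration of the kind of two-port cascade such a circuit model implies, the sketch below chains an acoustic transmission-line segment (standing in for the TM) with a lumped series impedance (standing in for the ossicular chain) and evaluates the input impedance seen into the cascade at one frequency; all element values are placeholders, not the fitted parameters reported in the paper:

    import numpy as np

    def tm_line(gamma, Z0, length):
        """ABCD (transmission) matrix of an acoustic transmission-line segment."""
        g = gamma * length
        return np.array([[np.cosh(g), Z0 * np.sinh(g)],
                         [np.sinh(g) / Z0, np.cosh(g)]])

    def series_impedance(Z):
        """ABCD matrix of a lumped series impedance."""
        return np.array([[1.0, Z], [0.0, 1.0]])

    f = 1000.0                       # evaluation frequency, Hz
    w = 2 * np.pi * f
    Z0 = 1.0e7                       # placeholder characteristic impedance
    gamma = 1j * w / 343.0           # lossless propagation constant (placeholder)
    Z_chain = 5.0e6 + 1j * w * 10.0  # placeholder ossicular series impedance

    cascade = tm_line(gamma, Z0, 0.008) @ series_impedance(Z_chain)
    A, B = cascade[0]
    C, D = cascade[1]
    Z_load = 2.0e7                   # placeholder cochlear load
    Z_in = (A * Z_load + B) / (C * Z_load + D)   # input impedance of the cascade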
Objective: To investigate the significance of tympanic membrane collagen fiber layers in high frequency sound transmission.
Study Design: Human cadaver temporal bone study.
Methods: Laser Doppler vibrometry was used to measure stapes footplate movement in response to acoustic stimulation.
Experiments were conducted to evaluate a silicon accelerometer as an implantable sound sensor for implantable hearing aids. The main motivation of this study is to find an alternative sound sensor that is implantable inside the body, yet does not suffer from signal attenuation by the body. The merit of the accelerometer as a sound sensor is that it utilizes the natural mechanical conduction in the middle ear as its source of vibration.
When interfering objects occlude a scene, the visual system restores the occluded information. Similarly, when a sound of interest (a "foreground" sound) is interrupted (occluded) by loud noise, the auditory system restores the occluded information. This process, called auditory induction, can be exploited to create a continuity illusion.
Drive pressure to stapes velocity (V_st) transfer function measurements are collected and compared for human cadaveric temporal bones with the drive pressure alternately on the ear canal (EC) and middle ear cavity (MEC) sides of the tympanic membrane (TM), in order to predict the performance of proposed middle-ear implantable acoustic hearing aids, as well as provide additional data for examining human middle ear mechanics. The chief finding is that, in terms of the V_st response, MEC stimulation performs at least as well as EC stimulation below 8 kHz, provided that the EC is unplugged. Plugging the EC causes a reduced response for MEC drive below 2 kHz, due to a corresponding reduction of the pressure difference between the two sides of the TM.
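A transfer function of this kind can be estimated from simultaneously recorded drive pressure and stapes-velocity (e.g., laser Doppler) waveforms. The crude single-window sketch below omits the averaging, windowing, and coherence checks a real measurement would need:

    import numpy as np

    def stapes_transfer_function(pressure, velocity, fs, nfft=4096):
        """Estimate V_st / P_drive from paired time series; returns frequency
        vector (Hz) and magnitude in dB. Assumes the pressure spectrum is
        nonzero at the frequencies of interest (e.g., tonal or chirp drive)."""
        P = np.fft.rfft(pressure, nfft)
        V = np.fft.rfft(velocity, nfft)
        freqs = np.fft.rfftfreq(nfft, 1.0 / fs)
        H = V / P
        return freqs, 20.0 * np.log10(np.abs(H))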
Despite the extensive physiological work performed on auditory cortex, our understanding of the basic functional properties of auditory cortical neurons is incomplete. For example, it remains unclear what stimulus features are most important for these cells. Determining these features is challenging given the considerable size of the relevant stimulus parameter space as well as the unpredictable nature of many neurons' responses to complex stimuli due to nonlinear integration across frequency.
Despite dyslexia affecting a large number of people, the mechanisms underlying the disorder remain undetermined. There are numerous theories about the origins of dyslexia. Many of these relate dyslexia to low-level, sensory temporal processing deficits.
In most natural listening environments, noise occludes objects of interest, and it would be beneficial for an organism to correctly identify those objects. When a sound of interest ("foreground" sound) is interrupted by a loud noise, subjects perceive the entire sound, even if the noise was intense enough to completely mask a part of it. This phenomenon can be exploited to create an illusion: when a silent gap is introduced into the foreground and high-intensity noise is superimposed into the gap, subjects report the foreground as continuing through the noise although that portion of the foreground was deleted.
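The stimulus construction described above can be sketched as follows: synthesize a foreground, delete a segment, and superimpose intense noise into the deleted segment. Sampling rate, frequency, and levels are arbitrary placeholders, not this study's stimulus parameters:

    import numpy as np

    def continuity_illusion_stimulus(fs=44100, fg_freq=1000.0, dur=1.0,
                                     gap_start=0.45, gap_dur=0.10, noise_gain=5.0):
        """Foreground tone with a silent gap; loud noise fills the gap, so the
        deleted portion is nonetheless heard as continuing through the noise."""
        t = np.arange(int(dur * fs)) / fs
        foreground = 0.1 * np.sin(2 * np.pi * fg_freq * t)
        i0 = int(gap_start * fs)
        i1 = int((gap_start + gap_dur) * fs)
        foreground[i0:i1] = 0.0                    # delete the foreground here
        noise = np.zeros_like(foreground)
        noise[i0:i1] = noise_gain * 0.1 * np.random.default_rng().standard_normal(i1 - i0)
        return foreground + noise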