Publications by authors named "Christopher W Bishop"

Background: Glioblastoma (GBM) displays alterations in iron that drive proliferation and tumor growth. Iron regulation is complex and involves many regulatory mechanisms, including the homeostatic iron regulator (HFE) gene, which encodes the homeostatic iron regulatory protein. While HFE is upregulated in GBM and correlates with poor survival outcomes, the function of HFE in GBM remains unclear.

Spatial perception in echoic environments is influenced by recent acoustic history. For instance, echo suppression becomes more effective or "builds up" with repeated exposure to echoes having a consistent acoustic relationship to a temporally leading sound. Four experiments were conducted to investigate how buildup is affected by prior exposure to unpaired lead-alone or lag-alone click trains.

Auditory spatial perception plays a critical role in day-to-day communication. For instance, listeners utilize acoustic spatial information to segregate individual talkers into distinct auditory "streams" to improve speech intelligibility. However, spatial localization is an exceedingly difficult task in everyday listening environments with numerous distracting echoes from nearby surfaces, such as walls.

The human brain uses acoustic cues to decompose complex auditory scenes into their components. For instance, to improve communication, a listener can select an individual "stream," such as a talker in a crowded room, based on cues such as pitch or location. Despite numerous investigations into auditory streaming, few have demonstrated clear correlates of perception; instead, in many studies perception covaries with changes in physical stimulus properties.

Communication and navigation in real environments rely heavily on the ability to distinguish objects in acoustic space. However, auditory spatial information is often corrupted by conflicting cues and noise, such as acoustic reflections. Fortunately, the brain can apply mechanisms at multiple levels to emphasize target information and mitigate such interference.

Speech is the most important form of human communication, but ambient sounds and competing talkers often degrade its acoustics. Fortunately, the brain can use visual information, especially its highly precise spatial information, to improve speech comprehension in noisy environments. Previous studies have demonstrated that audiovisual integration depends strongly on spatiotemporal factors.

Background: Segregating auditory scenes into distinct objects or streams is one of our brain's greatest perceptual challenges. Streaming has classically been studied with bistable sound stimuli, perceived alternately as a single group or two separate groups. Throughout the last decade different methodologies have yielded inconsistent evidence about the role of auditory cortex in the maintenance of streams.

Objective: To reduce stimulus transduction artifacts in EEG while using insert earphones.

Design: Reference Equivalent Threshold SPLs were assessed for Etymotic ER-4B earphones in 15 volunteers. Auditory brainstem responses (ABRs) and middle latency responses (MLRs), as well as long-duration complex ABRs, to click and /dɑ/ speech stimuli were recorded in a single-case design.

Locating sounds in realistic scenes is challenging because of distracting echoes and coarse spatial acoustic estimates. Fortunately, listeners can improve performance through several compensatory mechanisms. For instance, their brains perceptually suppress short-latency (1-10 ms) echoes by constructing a representation of the acoustic environment in a process called the precedence effect.
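Precedence-effect experiments of this kind typically present a "lead" click followed by a delayed, attenuated "lag" click standing in for the echo. The sketch below is illustrative only, not taken from the article: it synthesizes such a lead-lag pair with the lag inside the 1-10 ms window mentioned above. The sample rate, click amplitudes, and the `lead_lag_pair` helper are all assumptions for demonstration.

```python
import numpy as np

FS = 44100           # sample rate in Hz (assumed, not from the article)
ECHO_DELAY_MS = 4.0  # lag delay, within the 1-10 ms suppression window

def lead_lag_pair(delay_ms: float, dur_ms: float = 20.0, fs: int = FS) -> np.ndarray:
    """Return a mono signal containing a lead click and a delayed lag click."""
    n = int(fs * dur_ms / 1000)
    sig = np.zeros(n)
    sig[0] = 1.0                      # lead click at t = 0
    lag = int(fs * delay_ms / 1000)   # echo onset in samples
    sig[lag] += 0.8                   # attenuated echo (lag) click
    return sig

pair = lead_lag_pair(ECHO_DELAY_MS)
print(np.flatnonzero(pair))  # sample indices of the two clicks
```

In a real experiment the lead and lag would be routed to different loudspeakers or earphone channels to carry a spatial cue; here both clicks sit in one mono signal purely to show the timing relationship.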

The brain uses context and prior knowledge to repair degraded sensory inputs and improve perception. For example, listeners hear speech continuing uninterrupted through brief noises, even if the speech signal is artificially removed from the noisy epochs. In a functional MRI study, we show that this temporal filling-in process is based on two dissociable neural mechanisms: the subjective experience of illusory continuity, and the sensory repair mechanisms that support it.

In noisy environments, listeners tend to hear a speaker's voice yet struggle to understand what is said. The most effective way to improve intelligibility in such conditions is to watch the speaker's mouth movements. Here we identify the neural networks that distinguish understanding from merely hearing speech, and determine how the brain applies visual information to improve intelligibility.
