Publications by authors named "Johahn Leung"

Many behavioral measures of visual perception fluctuate continually in a rhythmic manner, reflecting the influence of endogenous brain oscillations, particularly theta (∼4-7 Hz) and alpha (∼8-12 Hz) rhythms [1-3]. However, it is unclear whether these oscillations are unique to vision or whether auditory performance also oscillates [4, 5]. Several studies report no oscillatory modulation in audition [6, 7], while those with positive findings suffer from confounds relating to neural entrainment [8-10].

Recent work from several groups has shown that perception of various visual attributes in human observers at a given moment is biased toward what was recently seen. This positive serial dependency is a kind of temporal averaging that exploits short-term correlations in visual scenes to reduce noise and stabilize perception. To date, this stabilizing "continuity field" has been demonstrated on stable visual attributes such as orientation and face identity, yet it would be counterproductive to apply it to dynamic attributes in which change sensitivity is needed.

We tested whether fast flicker can capture attention using eight flicker frequencies from 20-96 Hz, including several too high to be perceived (>50 Hz). Using a 480 Hz visual display rate, we presented smoothly sampled sinusoidal temporal modulations at: 20, 30, 40, 48, 60, 69, 80, and 96 Hz. We first established flicker detection rates for each frequency.
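
For intuition, here is a minimal Python sketch of how a sinusoidal luminance modulation could be sampled at a 480 Hz display rate; the frame rate and flicker frequencies come from the abstract, while the duration, mean luminance, and contrast range are illustrative assumptions rather than the study's actual parameters.

```python
import numpy as np

REFRESH_HZ = 480          # display frame rate reported in the abstract
FLICKER_HZ = 69           # one of the eight test frequencies (20-96 Hz)
DURATION_S = 0.5          # assumed stimulus duration (not stated in the abstract)

# Luminance of each frame follows a sinusoid sampled at the frame rate,
# expressed here as a 0-1 modulation around an assumed mean of 0.5.
t = np.arange(int(REFRESH_HZ * DURATION_S)) / REFRESH_HZ
luminance = 0.5 + 0.5 * np.sin(2 * np.pi * FLICKER_HZ * t)

# With 480 frames per second, a 69 Hz modulation spans ~6.96 frames per cycle,
# so successive cycles are sampled at shifting phases ("smoothly sampled")
# rather than as a fixed on/off square wave.
print(luminance[:8].round(3))
```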

A natural auditory scene often contains sound moving at varying velocities. Using a velocity contrast paradigm, we compared sensitivity to velocity changes between continuous and discontinuous trajectories. Subjects compared the velocities of two stimulus intervals that moved along a single trajectory, with and without a 1-second interstimulus interval (ISI).

The growing availability of efficient and relatively inexpensive virtual auditory display technology has provided new research platforms to explore the perception of auditory motion. At the same time, deployment of these technologies in command and control as well as in entertainment roles is generating an increasing need to better understand the complex processes underlying auditory motion perception. This is a particularly challenging processing feat because it involves the rapid deconvolution of the relative change in the locations of sound sources produced by rotations and translations of the head in space (self-motion) to enable the perception of actual source motion.

The location of a sound is derived computationally from acoustical cues rather than being inherent in the topography of the input signal, as in vision. Since Lord Rayleigh, the descriptions of that representation have swung between "labeled line" and "opponent process" models. Employing a simple variant of a two-point separation judgment using concurrent speech sounds, we found that spatial discrimination thresholds changed nonmonotonically as a function of the overall separation.

The ability to actively follow a moving auditory target with our heads remains unexplored even though it is a common behavioral response. Previous studies of auditory motion perception have focused on the condition where the subjects are passive. The current study examined head-tracking of a moving auditory target along a 100° horizontal arc in the frontal hemisphere, with velocities ranging from 20 to 110°/s.

Purpose: An ex vivo organotypic retinal explant model was developed to examine retinal survival mechanisms relevant to glaucoma mediated by the renin-angiotensin system in the rodent eye.

Methods: Eyes from adult Sprague-Dawley rats were enucleated immediately post-mortem and used to make four retinal explants per eye. Explants were treated with irbesartan (10 µM), vehicle, or angiotensin II (2 µM) for four days.

The present study examined the effects of spatial sound-source density and reverberation on the spatiotemporal window for audio-visual motion coherence. Three different acoustic stimuli were generated in Virtual Auditory Space: two acoustically "dry" stimuli via the measurement of anechoic head-related impulse responses recorded at either 1° or 5° spatial intervals (Experiment 1), and a reverberant stimulus rendered from binaural room impulse responses recorded at 5° intervals in situ in order to capture reverberant acoustics in addition to head-related cues (Experiment 2). A moving visual stimulus with invariant localization cues was generated by sequentially activating LEDs along the same radial path as the virtual auditory motion.
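
As a rough illustration (not the authors' rendering pipeline), a moving source in Virtual Auditory Space can be sketched by convolving successive segments of a source signal with impulse responses measured at successive azimuths; the array shapes, hop size, and placeholder "HRIRs" below are assumptions for demonstration only.

```python
import numpy as np

def render_virtual_motion(source, hrirs, hop):
    """Very simplified sketch of moving-source rendering in Virtual Auditory Space.

    `hrirs` is assumed to have shape (n_positions, 2, ir_len), holding left/right
    head-related impulse responses measured at successive azimuths (e.g. 1 deg or
    5 deg apart); `hop` is how many samples the source dwells at each position.
    Real renderers interpolate and cross-fade between positions; this sketch
    simply concatenates per-position convolutions.
    """
    segments = []
    for pos in range(hrirs.shape[0]):
        chunk = source[pos * hop:(pos + 1) * hop]
        if chunk.size == 0:
            break
        left = np.convolve(chunk, hrirs[pos, 0])
        right = np.convolve(chunk, hrirs[pos, 1])
        segments.append(np.stack([left, right], axis=0))
    return np.concatenate(segments, axis=1)

# Toy example with white noise and random placeholder "HRIRs" (not measured data)
rng = np.random.default_rng(0)
noise = rng.standard_normal(48_000)                  # 1 s of noise at 48 kHz
fake_hrirs = rng.standard_normal((20, 2, 256)) * 0.05
binaural = render_virtual_motion(noise, fake_hrirs, hop=2_400)
print(binaural.shape)   # stereo signal sweeping across 20 source positions
```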

Purpose: The aim of this study was to examine attention, memory, and auditory processing in children with reported listening difficulty in noise (LDN) despite having clinically normal hearing.

Method: Twenty-one children with LDN and 15 children with no listening concerns (controls) participated. The clinically normed auditory processing tests included the Frequency/Pitch Pattern Test (FPT; Musiek, 2002), the Dichotic Digits Test (Musiek, 1983), the Listening in Spatialized Noise-Sentences (LiSN-S) test (Dillon, Cameron, Glyde, Wilson, & Tomlin, 2012), gap detection in noise (Baker, Jayewardene, Sayle, & Saeed, 2008), and masking level difference (MLD; Wilson, Moncrieff, Townsend, & Pillion, 2003).

Evidence that the auditory system contains specialised motion detectors is mixed. Many psychophysical studies confound speed cues with distance and duration cues and present sound sources that do not appear to move in external space. Here we use the 'discrimination contours' technique to probe the probabilistic combination of speed, distance and duration for stimuli moving in a horizontal arc around the listener in virtual auditory space.

"Representational Momentum" (RM) is a mislocalization of the endpoint of a moving target in the direction of motion. In vision, RM has been shown to increase with target velocity. In audition, however, the effect of target velocity is unclear.

Information about the world is captured by our separate senses, and must be integrated to yield a unified representation. This raises the issue of which signals should be integrated and which should remain separate, as inappropriate integration will lead to misrepresentation and distortions. One strong cue suggesting that separate signals arise from a single source is coincidence, in space and in time.

The aim of this research was to evaluate the ability to switch attention and selectively attend to relevant information in children (10-15 years) with persistent listening difficulties in noisy environments. A wide battery of clinical tests indicated that children with complaints of listening difficulties had otherwise normal hearing sensitivity and auditory processing skills. Here we show that these children are markedly slower to switch their attention compared to their age-matched peers.

Free-field source localization experiments with 30 source locations, symmetrically distributed in azimuth, elevation, and front-back location, were performed with periodic tones having different phase relationships among their components. Although the amplitude spectra were the same for these different kinds of stimuli, the tones with certain phase relationships were successfully localized while the tones with other phases led to large elevation errors and front-back reversals that typically grew with stimulus level. The results show that it is not enough to have a smooth, broadband, long-term signal spectrum for successful sagittal-plane localization.
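
To make the "same amplitude spectrum, different phase spectrum" manipulation concrete, here is a hedged Python sketch that synthesizes two equal-amplitude harmonic complexes differing only in their component starting phases; the sample rate, fundamental, harmonic count, and duration are illustrative assumptions rather than the stimuli used in the study.

```python
import numpy as np

FS = 48_000               # assumed sample rate
F0 = 100                  # assumed fundamental frequency (Hz)
N_HARMONICS = 40          # assumed number of equal-amplitude harmonics
DURATION_S = 0.25         # assumed duration (an integer number of F0 cycles)

t = np.arange(int(FS * DURATION_S)) / FS
rng = np.random.default_rng(0)

def harmonic_complex(phases):
    """Sum of equal-amplitude harmonics of F0 with the given starting phases."""
    return sum(np.sin(2 * np.pi * F0 * (k + 1) * t + phases[k])
               for k in range(N_HARMONICS))

# Identical amplitude spectra, different phase spectra:
zero_phase   = harmonic_complex(np.zeros(N_HARMONICS))                    # all components in sine phase
random_phase = harmonic_complex(rng.uniform(0, 2 * np.pi, N_HARMONICS))   # randomized phases

# The long-term magnitude spectra match to numerical precision,
# but the waveforms (and their short-term envelopes) differ.
mag_zero   = np.abs(np.fft.rfft(zero_phase))
mag_random = np.abs(np.fft.rfft(random_phase))
print(np.allclose(mag_zero, mag_random, atol=1e-6))   # True
```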

We investigated audiovisual speed perception to test the maximum-likelihood-estimation (MLE) model of multisensory integration. According to MLE, audiovisual speed perception will be based on a weighted average of visual and auditory speed estimates, with each component weighted by its inverse variance, a statistically optimal combination that produces a fused estimate with minimised variance and thereby affords maximal discrimination. We use virtual auditory space to create ecologically valid auditory motion, together with visual apparent motion around an array of 63 LEDs.
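
The MLE rule described here reduces to inverse-variance weighting: each unimodal speed estimate is weighted by its reliability (1/variance), and the fused estimate has lower variance than either component. A minimal Python sketch with made-up numbers (the speeds and variances below are not data from the study):

```python
import numpy as np

def mle_fused_speed(s_vis, var_vis, s_aud, var_aud):
    """Inverse-variance (MLE) combination of visual and auditory speed estimates.

    Each unimodal estimate is weighted by its reliability (1 / variance);
    the fused estimate's variance is smaller than either component's.
    """
    w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_aud)
    w_aud = 1 - w_vis
    s_fused = w_vis * s_vis + w_aud * s_aud
    var_fused = 1 / (1 / var_vis + 1 / var_aud)
    return s_fused, var_fused

# Illustrative (made-up) numbers: vision reports 30 deg/s with low variance,
# audition reports 40 deg/s with higher variance.
speed, variance = mle_fused_speed(s_vis=30.0, var_vis=4.0, s_aud=40.0, var_aud=16.0)
print(speed, variance)   # 32.0 deg/s, variance 3.2 (below either unimodal variance)
```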

Studies of spatial perception during visual saccades have demonstrated compressions of visual space around the saccade target. Here we psychophysically investigated perception of auditory space during rapid head turns, focusing on the "perisaccadic" interval. Using separate perceptual and behavioral response measures, we show that spatial compression also occurs for rapid head movements, with the auditory spatial representation compressing by up to 50%.
