Publications by authors named "Thomas Koelewijn"

Purpose: For voice perception, two voice cues, the fundamental frequency (F0) and vocal tract length (VTL), seem to contribute largely to the identification of voices and speaker characteristics. Acoustic content related to these voice cues is altered in cochlear-implant-transmitted speech, rendering voice perception difficult for the implant user. In everyday listening, there could be some facilitation from top-down compensatory mechanisms, such as the use of linguistic content.

Objectives: Understanding speech in real life can be challenging and effortful, such as in multiple-talker listening conditions. Fundamental frequency (fo) and vocal tract length (VTL) voice cues can help listeners segregate talkers, enhancing speech perception in adverse listening conditions. Previous research showed lower sensitivity to fo and VTL voice cues when the speech signal was degraded, as in cochlear implant hearing and vocoder listening compared to normal hearing, likely contributing to difficulties in understanding speech in adverse listening conditions.

Acoustics research involving human participants typically takes place in specialized laboratory settings. Listening studies, for example, may present controlled sounds using calibrated transducers in sound-attenuating or anechoic chambers. In contrast, remote testing takes place outside of the laboratory in everyday settings.

Perceptual differences in voice cues, such as fundamental frequency (F0) and vocal tract length (VTL), can facilitate speech understanding in challenging conditions. Yet, we hypothesized that in the presence of spectrotemporal signal degradations, as imposed by cochlear implants (CIs) and vocoders, acoustic cues that overlap for voice perception and phonemic categorization could be mistaken for one another, leading to a strong interaction between linguistic and indexical (talker-specific) content. Fifteen normal-hearing participants performed an odd-one-out adaptive task measuring just-noticeable differences (JNDs) in F0 and VTL.
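The abstract names the procedure (an odd-one-out adaptive task measuring JNDs) but not its mechanics. As an illustration only, a 2-down/1-up staircase of the kind commonly used for JND estimation can be sketched as follows; the simulated listener, its psychometric function, and every parameter here are invented for the sketch and are not taken from the paper:

```python
import random

def simulated_listener(delta, jnd=2.0):
    """Hypothetical observer: picks the odd interval correctly more often
    as the cue difference `delta` (e.g., in semitones) grows; chance = 1/3."""
    p_correct = 1 / 3 + (2 / 3) * (1 - 2 ** (-delta / jnd))
    return random.random() < p_correct

def run_staircase(start=12.0, step=2.0, n_reversals=8):
    """2-down/1-up adaptive track, converging near 70.7% correct.
    Returns the mean cue difference over the later reversals (the JND estimate)."""
    delta, correct_streak, direction = start, 0, -1
    reversals = []
    while len(reversals) < n_reversals:
        if simulated_listener(delta):
            correct_streak += 1
            if correct_streak == 2:          # two correct in a row -> make it harder
                correct_streak = 0
                if direction == +1:          # was going up: this is a reversal
                    reversals.append(delta)
                direction = -1
                delta = max(delta - step, 0.1)
        else:                                # one wrong -> make it easier
            correct_streak = 0
            if direction == -1:              # was going down: this is a reversal
                reversals.append(delta)
            direction = +1
            delta += step
    # discard the first reversals (approach phase), average the rest
    return sum(reversals[2:]) / len(reversals[2:])

random.seed(1)
print(round(run_staircase(), 2))
```

In a real experiment the simulated observer would be replaced by the participant's response on each three-interval trial; the 2-down/1-up rule itself is one of several common choices.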

Recently, we showed that higher reward results in increased pupil dilation during listening (listening effort). Remarkably, this effect was not accompanied by improved speech reception. Still, increased listening effort may reflect more in-depth processing, potentially resulting in a better memory representation of the speech.

Previous research has shown the effects of task demands on pupil responses in both normal-hearing (NH) and hearing-impaired (HI) adults. One consistent finding is that HI listeners have smaller pupil dilations at low levels of speech recognition performance (≤50%). This study aimed to examine pupil dilation in adults with a normal pure-tone audiogram who experience serious difficulties when processing speech in noise.

Within the field of hearing science, pupillometry is a widely used method for quantifying listening effort. Its use in research is growing exponentially, and many labs are applying, or considering applying, pupillometry for the first time. Hence, there is a growing need for a methods paper on pupillometry covering topics ranging from experiment logistics and timing to data cleaning and which parameters to analyze.
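The snippet mentions data cleaning without detail. A minimal, hypothetical sketch of two cleaning steps commonly discussed in pupillometry (blink interpolation and baseline correction) might look like the following; the sampling rate, the convention that blinks are coded as zeros, and the baseline window are all assumptions of the sketch, not specifics from the paper:

```python
import numpy as np

def clean_pupil_trace(trace, fs=60, baseline_s=1.0):
    """Toy pupil preprocessing:
    1. treat zero/negative samples as blink artifacts,
    2. linearly interpolate across blink gaps,
    3. express the trace as dilation relative to a pre-stimulus baseline."""
    trace = np.asarray(trace, dtype=float)
    bad = trace <= 0                      # many eyetrackers code blinks as 0
    idx = np.arange(trace.size)
    trace[bad] = np.interp(idx[bad], idx[~bad], trace[~bad])
    baseline = trace[: int(baseline_s * fs)].mean()
    return trace - baseline               # event-related pupil dilation

# toy trace: 2 s at 60 Hz, stimulus at 1 s, with one simulated blink
t = np.linspace(0, 2, 120)
raw = 4.0 + 0.3 * np.maximum(t - 1.0, 0)  # pupil dilates after the stimulus
raw[30:40] = 0.0                          # blink: samples dropped to zero
cleaned = clean_pupil_trace(raw)
print(round(float(cleaned[-1]), 2))
```

Real pipelines typically also pad around blinks, low-pass filter, and reject trials with too much data loss; this sketch only shows the core interpolate-then-baseline idea.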

The measurement of cognitive resource allocation during listening, or listening effort, provides valuable insight into the factors influencing auditory processing. In recent years, many studies inside and outside the field of hearing science have measured the pupil response evoked by auditory stimuli. The aim of the current review was to provide an exhaustive overview of these studies.

In recent years, the fields of Audiology and Cognitive Sciences have seen a burgeoning of research focusing on the assessment of the effort required during listening. Among approaches to this question, the pupil dilation response has been shown to be an informative, nonvolitional indicator of cognitive processing during listening. Currently, pupillometry is applied in laboratories throughout the world to assess how listening effort is influenced by various relevant factors, such as hearing loss, signal processing algorithms, cochlear implant rehabilitation, cognitive abilities, language competency, and daily-life hearing disability.

Listening to speech in noise can be effortful, but when motivated, people seem to persevere more. Previous research showed effects of monetary reward on autonomic responses such as cardiovascular reactivity and pupil dilation while participants processed auditory information. The current study examined the effects of monetary reward on the processing of speech in noise and the related listening effort, as reflected by the pupil dilation response.

Difficulties arising in everyday speech communication often result from the acoustical environment, which may contain interfering background noise or competing speakers. Thus, listening to and understanding speech in noise can be exhausting. The current study presents two experiments that further explored the impact of masker type and signal-to-noise ratio (SNR) on listening effort by means of pupillometry.

For people with hearing difficulties, following a conversation in a noisy environment requires substantial cognitive processing, which is often perceived as effortful. Recent studies with normal-hearing (NH) listeners showed that the pupil dilation response, a measure of cognitive processing load, is affected by 'attention-related' processes. How these processes affect the pupil dilation response for hearing-impaired (HI) listeners remains unknown.

Recent studies have shown that prior knowledge about where, when, and who is going to talk improves speech intelligibility. How related attentional processes affect cognitive processing load has not been investigated yet. In the current study, three experiments investigated how the pupil dilation response is affected by prior knowledge of target speech location, target speech onset, and who is going to talk.

Dividing attention over two streams of speech strongly decreases performance compared to focusing on only one. How divided attention affects cognitive processing load as indexed with pupillometry during speech recognition has so far not been investigated. In 12 young adults the pupil response was recorded while they focused on either one or both of two sentences that were presented dichotically and masked by fluctuating noise across a range of signal-to-noise ratios.

A recent pupillometry study on adults with normal hearing indicates that the pupil response during speech perception (cognitive processing load) is strongly affected by the type of speech masker. The current study extends these results by recording the pupil response in 32 participants with hearing impairment (mean age 59 yr) while they were listening to sentences masked by fluctuating noise or a single-talker. Efforts were made to improve audibility of all sounds by means of spectral shaping.

The ability to recognize masked speech, commonly measured with a speech reception threshold (SRT) test, is associated with cognitive processing abilities. Two cognitive factors frequently assessed in speech recognition research are the capacity of working memory (WM), measured by means of a reading span (Rspan) or listening span (Lspan) test, and the ability to read masked text (linguistic closure), measured by the text reception threshold (TRT). The current article provides a review of recent hearing research that examined the relationship of TRT and WM span to SRTs in various maskers.

It is often assumed that the benefit of hearing aids is not primarily reflected in better speech performance, but rather in less effortful listening in the aided than in the unaided condition. Before such a hearing aid benefit can be assessed, the present study examined how processing load while listening to masked speech relates to inter-individual differences in cognitive abilities relevant for language processing. Pupil dilation was measured in thirty-two normal-hearing participants while they listened to sentences masked by fluctuating noise or interfering speech at either 50% or 84% intelligibility.

Objectives: Recent research has demonstrated that pupil dilation, a measure of mental effort (cognitive processing load), is sensitive to differences in speech intelligibility. The present study extends this outcome by examining the effects of masker type and age on the speech reception threshold (SRT) and mental effort.

Design: In young and middle-aged adults, pupil dilation was measured while they performed an SRT task, in which spoken sentences were presented in stationary noise, fluctuating noise, or together with a single-talker masker.

Multisensory integration and crossmodal attention have a large impact on how we perceive the world. Therefore, it is important to know under what circumstances these processes take place and how they affect our performance. So far, no consensus has been reached on whether multisensory integration and crossmodal attention operate independently and whether they represent truly automatic processes.

It is well known that auditory and visual onsets presented at a particular location can capture a person's visual attention. However, the question of whether such attentional capture disappears when attention is focused endogenously beforehand has not yet been answered. Moreover, previous studies have not differentiated between capture by onsets presented at a nontarget (invalid) location and possible performance benefits occurring when the target location is (validly) cued.

There is debate in the crossmodal cueing literature as to whether the capture of visual attention by means of sound is a fully automatic process. Recent studies show that sound still captures attention even when visual attention is endogenously focused. The current study investigated whether there is an interaction between exogenous auditory and visual capture.

Lateralized magnetic fields were recorded from 12 subjects using a 151-channel magnetoencephalography (MEG) system to investigate temporal and functional properties of motor activation during the observation of goal-directed hand movements by a virtual actor. Observation of left- and right-hand movements generated a neuromagnetic lateralized readiness field (LRF) over contralateral motor cortex. The early onset of the LRF, and the fact that the evoked component was insensitive to the correctness of the observed action, suggest the operation of a fast and automatic form of motor resonance that may precede higher levels of action understanding.

Participants performed an attentional blink (AB) task including digits as targets and letters as distractors within the visual and auditory domains. Prior to the rapid serial visual presentation, a visual or auditory prime was presented in the form of a digit that was identical to the second target (T2) on 50% of the trials. In addition to the "classic" AB effect, an overall drop in performance on T2 was observed for the trials on which the stream was preceded by an identical prime from the same modality.

Recent research has demonstrated that cortical motor areas are engaged when observing motor actions of others. However, little is known about the possible contribution of the motor system for evaluating the correctness of others' actions. To address this question we designed an MEG experiment in which subjects were executing and observing motor actions with and without errors.

The second of two targets is often missed when presented shortly after the first target, a phenomenon referred to as the attentional blink (AB). Whereas the AB is a robust phenomenon within sensory modalities, the evidence for cross-modal ABs is rather mixed. Here, we test the possibility that the absence of an auditory-visual AB for visual letter recognition when streams of tones are used is due to the efficient use of echoic memory, allowing for the postponement of auditory processing.
