Semantic processing of unattended speech in dichotic listening.

J Acoust Soc Am

Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London WC1E 7HX, United Kingdom.

Published: August 2015

This study investigated whether unattended speech is processed at a semantic level in dichotic listening using a semantic priming paradigm. A lexical decision task was administered in which target words were presented in the attended auditory channel, preceded by two prime words presented simultaneously in the attended and unattended channels, respectively. Both attended and unattended primes were either semantically related or unrelated to the attended targets. Attended prime-target pairs were presented in isolation, whereas unattended primes were presented in the context of a series of rapidly presented words. The fundamental frequency of the attended stimuli was increased by 40 Hz relative to the unattended stimuli, and the unattended stimuli were attenuated by 12 dB [+12 dB signal-to-noise ratio (SNR)] or presented at the same intensity level as the attended stimuli (0 dB SNR). The results revealed robust semantic priming of attended targets by attended primes at both the +12 and 0 dB SNRs. However, semantic priming by unattended primes emerged only at the 0 dB SNR. These findings suggest that the semantic processing of unattended speech in dichotic listening depends critically on the relative intensities of the attended and competing signals.
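As a rough illustration of the intensity manipulation described above (not code from the study): attenuating the unattended channel by 12 dB corresponds to scaling its amplitude by 10^(-12/20) ≈ 0.25, while 0 dB SNR leaves both channels at equal level. The sketch below builds a two-channel dichotic stimulus under these assumptions; the helper name, sampling rate, and placeholder signals are illustrative, not taken from the original experiment.

```python
import numpy as np

def make_dichotic(attended, unattended, snr_db):
    """Build a stereo (dichotic) stimulus: attended signal in one ear,
    unattended signal in the other, attenuated to give the target SNR.

    Assumes the two inputs are already equated in level. A +12 dB SNR
    scales the unattended amplitude by 10 ** (-12 / 20) ~= 0.25;
    0 dB SNR leaves both channels at equal intensity.
    """
    gain = 10.0 ** (-snr_db / 20.0)          # amplitude factor for the unattended channel
    n = min(len(attended), len(unattended))  # trim to a common length
    left = attended[:n]
    right = gain * unattended[:n]
    return np.column_stack([left, right])    # shape (n, 2): left = attended, right = unattended

# Example with placeholder signals (the study used recorded speech, not noise).
fs = 44100
attended = np.random.randn(fs)      # 1 s placeholder for the attended speech
unattended = np.random.randn(fs)    # 1 s placeholder for the unattended speech
stim_12dB = make_dichotic(attended, unattended, snr_db=12.0)
stim_0dB = make_dichotic(attended, unattended, snr_db=0.0)
```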

DOI: http://dx.doi.org/10.1121/1.4927410

Publication Analysis

Top Keywords (frequency): unattended speech (12), dichotic listening (12), semantic priming (12), unattended primes (12), attended (10), unattended (9), semantic processing (8), processing unattended (8), speech dichotic (8), attended unattended (8)

Similar Publications

Linguistic Processing of Unattended Speech Under a Cocktail Party Listening Scenario.

J Speech Lang Hear Res

January 2025

School of Psychological and Cognitive Sciences, Peking University, Beijing, China.

Article Synopsis
  • This study explores how different levels of linguistic structure (syllables, words, sentences) in background speech affect the ability to recognize target speech in noisy situations, like a cocktail party.
  • Thirty-six participants were tested on their recognition of target speech while it was masked by competing speech, with the complexity and spatial location of the background speech varied.
  • Results showed that linguistically more complex background speech (e.g., sentences) caused greater interference with target-speech recognition, confirming that both linguistic and spatial factors shape speech processing in challenging listening environments.

Article Synopsis
  • Cochlear implants help restore speech understanding in people with severe hearing loss, but how users perceive sounds compared to normal hearing is still unclear.
  • A study examined the brain's response to speech sounds (phoneme-related potentials) in both cochlear implant users and normal hearing individuals, focusing on attention effects.
  • Results showed similar early responses in both groups, but cochlear implant users had reduced activity for later responses, suggesting potential areas for improving speech assessment and tailored rehabilitation strategies.

Deep-learning models reveal how context and listener attention shape electrophysiological correlates of speech-to-language transformation.

PLoS Comput Biol

November 2024

Department of Neuroscience and Del Monte Institute for Neuroscience, University of Rochester, Rochester, New York, United States of America.

Article Synopsis
  • The human brain transforms continuous speech into words by interpreting various factors like intonation and accents, and this process can be modeled using EEG recordings.
  • Contemporary models tend to overlook how sounds are categorized in the brain, limiting our understanding of speech processing.
  • The study finds that deep-learning systems like Whisper improve EEG modeling of speech comprehension by incorporating context and demonstrating that linguistic structure is crucial for accurate brain function representation, especially in complex listening environments.
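The synopsis above describes modeling EEG responses with representations from a pretrained speech model such as Whisper. As a hypothetical, minimal sketch (not the authors' pipeline), the snippet below extracts contextual features from the Whisper encoder via the Hugging Face transformers library; the model size, variable names, and placeholder audio are assumptions. Features of this kind can then be regressed against EEG in a separate analysis step.

```python
# Minimal sketch: contextual speech representations from a pretrained Whisper encoder.
import numpy as np
import torch
from transformers import WhisperProcessor, WhisperModel

processor = WhisperProcessor.from_pretrained("openai/whisper-base")
model = WhisperModel.from_pretrained("openai/whisper-base").eval()

# 5 s placeholder audio; in practice this would be the speech stimulus resampled to 16 kHz.
audio = np.random.randn(16000 * 5).astype(np.float32)
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    # Encoder hidden states: one vector per ~20 ms frame of the padded 30 s window.
    hidden = model.encoder(inputs.input_features).last_hidden_state

print(hidden.shape)  # e.g. (1, 1500, 512) for whisper-base
```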

Native language advantage in electrical brain responses to speech sound changes in passive and active listening condition.

Neuropsychologia

August 2024

Centre for Interdisciplinary Brain Research, Department of Psychology, University of Jyväskylä, Jyväskylä, Finland.

Article Synopsis
  • Finnish and Chinese participants displayed different ERP amplitude responses, indicating that native language affects sensitivity to speech sounds, particularly in passive listening conditions.
  • Results suggest that while native speakers are more sensitive to changes in their native language, both native and non-native speakers can effectively detect changes when paying attention.
