For effective speech communication in rooms it is advisable not only to reach full word intelligibility but also to minimize the effort the listener must spend to recognize the speech material. This twofold requirement is not easily captured by current room-acoustic indicators, which rely mainly either on subjective ratings obtained from word-recognition scores or on listeners' reported impressions of listening difficulty. In this work the problem is tackled by introducing the concept of "listening efficiency," defined as a combination of intelligibility accuracy and of the effort spent to achieve it. The indicator is developed here, and an application of both intelligibility and listening efficiency is presented in the field of classroom acoustics. Listening tests with pupils and adults were performed, and the subsequent statistical analyses yielded several interesting findings. In particular, listening efficiency clearly discriminates between equal intelligibility scores obtained under different acoustical conditions, allowing room acoustics to be tailored to specific groups, such as children.
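The abstract does not spell out how listening efficiency is computed, but the underlying idea (crediting recognition accuracy while penalizing the effort needed to reach it) can be sketched. The Python snippet below is only an illustrative assumption, treating mean response time as a proxy for effort and taking the ratio of accuracy to effort; the `Trial` structure and the two example conditions are invented for demonstration and are not the paper's metric or data.

```python
# Illustrative sketch only: the abstract does not give the exact formula, so this
# assumes listening efficiency is operationalized as recognition accuracy scaled
# by the inverse of mean response time (a simple speed-accuracy composite).
from dataclasses import dataclass
from statistics import mean


@dataclass
class Trial:
    correct: bool           # was the word/sentence recognized correctly?
    response_time_s: float  # time taken to respond (proxy for listening effort)


def listening_efficiency(trials: list[Trial]) -> float:
    """Accuracy per unit of effort: mean(correct) / mean(response time).

    Higher values mean the same intelligibility was reached with less effort.
    """
    accuracy = mean(1.0 if t.correct else 0.0 for t in trials)
    effort = mean(t.response_time_s for t in trials)
    return accuracy / effort


# Invented example: two conditions with identical intelligibility (80% correct)
# but different effort, the situation the abstract says the indicator separates.
quiet = [Trial(True, 1.1), Trial(True, 1.0), Trial(False, 1.2), Trial(True, 0.9), Trial(True, 1.0)]
noisy = [Trial(True, 1.8), Trial(True, 1.7), Trial(False, 2.0), Trial(True, 1.6), Trial(True, 1.9)]

print(listening_efficiency(quiet))  # higher: same accuracy, less effort
print(listening_efficiency(noisy))  # lower: same accuracy, more effort
```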
DOI: http://dx.doi.org/10.1121/1.3436563
Psychol Aging
January 2025
Hearing Sciences-Scottish Section, School of Medicine, University of Nottingham.
While there is strong evidence that younger adults use contextual information to generate semantic predictions, findings from older adults are less clear. Age affects cognition in ways that may impact prediction mechanisms: while the efficiency of memory systems and processing speed decrease, life experience leads to complementary increases in vocabulary size, real-world knowledge, and even inhibitory control. Using the visual world paradigm, we tested prediction in younger adults (n = 30, between 18 and 35 years of age) and older adults (n = 30, between 53 and 78 years of age).
Trends Hear
January 2025
Faculty of Medicine and University Hospital Cologne, Department of Otorhinolaryngology, Head and Neck Surgery, University of Cologne, Cologne, Germany.
Speech-on-speech masking is a common and challenging situation in everyday verbal communication. The ability to segregate competing auditory streams is a necessary requirement for focusing attention on the target speech. The Visual World Paradigm (VWP) provides insight into speech processing by capturing gaze fixations on visually presented icons that reflect the speech signal.
J Exp Psychol Hum Percept Perform
January 2025
Department of Psychology, Saarland University.
Task-irrelevant sounds that are semantically congruent with the target can facilitate performance in visual search tasks, resulting in faster search times. In three experiments, we tested the underlying processes of this effect. Participants were presented with auditory primes that were semantically congruent, neutral, or incongruent to the visual search target, and importantly, we varied the set size of the search displays.
Commun Psychol
January 2025
Helmholtz Institute for Human-Centered AI, Munich, Germany.
Whether it is listening to a piece of music, learning a new language, or solving a mathematical equation, people often acquire abstract notions in the sense of motifs and variables, manifested in musical themes, grammatical categories, or mathematical symbols. How do we create abstract representations of sequences? Are these abstract representations useful for memory recall? In addition to learning transition probabilities, chunking, and tracking ordinal positions, we propose that humans also use abstractions to arrive at efficient representations of sequences. We propose and study two abstraction categories: projectional motifs and variable motifs.
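As a point of reference for one of the baseline mechanisms the abstract contrasts with its abstraction proposal, the short Python sketch below shows what learning first-order transition probabilities from a sequence amounts to. It is a generic textbook illustration, not the authors' model, and the toy sequence is invented.

```python
# Generic illustration of first-order transition-probability learning, one of the
# baseline sequence-learning mechanisms the abstract mentions (not the authors' model).
from collections import Counter, defaultdict


def transition_probabilities(sequence: str) -> dict[str, dict[str, float]]:
    """Estimate P(next symbol | current symbol) from observed adjacent pairs."""
    counts: dict[str, Counter] = defaultdict(Counter)
    for current, nxt in zip(sequence, sequence[1:]):
        counts[current][nxt] += 1
    return {
        sym: {nxt: n / sum(c.values()) for nxt, n in c.items()}
        for sym, c in counts.items()
    }


# Invented toy sequence: 'A' is always followed by 'B'; 'B' may be followed by 'C' or 'A'.
print(transition_probabilities("ABCABABCAB"))
```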
Sci Adv
January 2025
Université Paris Cité, Institut Pasteur, AP-HP, Inserm, Fondation Pour l'Audition, Institut de l'Audition, IHU reConnect, F-75012 Paris, France.
The temporal structure of sensory inputs contains essential information for their interpretation. Sensory cortex represents these temporal cues through two codes: the temporal sequences of neuronal activity and the spatial patterns of neuronal firing rate. However, it is unknown which of these coexisting codes causally drives sensory decisions.