In the primary school classroom, children are exposed to multiple factors that combine to create adverse conditions for listening to and understanding what the teacher is saying. Despite the ubiquity of these conditions, there is little knowledge concerning the way in which various factors combine to influence listening comprehension and the effortfulness of listening. The aim of the present study was to investigate the combined effects of background noise, voice quality, and visual cues on children's listening comprehension and effort. To achieve this aim, we performed a set of four well-controlled, yet ecologically valid, experiments with 245 eight-year-old participants. Classroom listening conditions were simulated using a digitally animated talker with a dysphonic (hoarse) voice and background babble noise composed of several children talking. Results show that even low levels of babble noise interfere with listening comprehension, and there was some evidence that this effect was reduced by seeing the talker's face. Dysphonia did not significantly reduce listening comprehension scores, but it was considered unpleasant and made listening seem difficult, probably by reducing motivation to listen. We found some evidence that listening comprehension performance under adverse conditions is positively associated with individual differences in executive function. Overall, these results suggest that multiple factors combine to influence listening comprehension and effort for child listeners in the primary school classroom. The constellation of these room, talker, modality, and listener factors should be taken into account in the planning and design of educational and learning activities.
Full text: PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6052349 | DOI: http://dx.doi.org/10.3389/fpsyg.2018.01193
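The babble-noise manipulation described in the abstract boils down to mixing target speech with multi-talker babble at a controlled signal-to-noise ratio (SNR). As a minimal sketch of that idea, not the study's actual stimulus pipeline, the babble can be rescaled so the mixture reaches a chosen SNR in dB; the signals, sample rate, and target SNR below are hypothetical placeholders:

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, babble: np.ndarray, snr_db: float) -> np.ndarray:
    """Mix speech with babble at a target SNR in dB.

    Illustrative sketch only: scales the babble so that
    10 * log10(P_speech / P_babble) equals snr_db, then sums the signals.
    """
    babble = babble[: len(speech)]   # trim babble to the speech duration
    p_speech = np.mean(speech ** 2)  # mean power of the target speech
    p_babble = np.mean(babble ** 2)  # mean power of the babble
    gain = np.sqrt(p_speech / (p_babble * 10 ** (snr_db / 10.0)))
    return speech + gain * babble

# Hypothetical usage: one second of audio at 16 kHz, mixed at +5 dB SNR
rng = np.random.default_rng(0)
speech = rng.standard_normal(16000)  # stand-in for a recorded sentence
babble = rng.standard_normal(16000)  # stand-in for multi-talker babble
mixture = mix_at_snr(speech, babble, snr_db=5.0)
```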
PLoS Biol, January 2025
Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands.
Studies of perception have long shown that the brain adds information to its sensory analysis of the physical environment. A touchstone example for humans is language use: to comprehend a physical signal like speech, the brain must add linguistic knowledge, including syntax. Yet, syntactic rules and representations are widely assumed to be atemporal (i.e., …).
eLife, January 2025
State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University & IDG/McGovern Institute for Brain Research, Beijing, China.
Speech comprehension involves the dynamic interplay of multiple cognitive processes, from basic sound perception to linguistic encoding and, finally, to complex semantic-conceptual interpretations. How the brain handles these diverse streams of information processing remains poorly understood. Applying Hidden Markov Modeling to fMRI data obtained during spoken narrative comprehension, we reveal that whole-brain networks predominantly oscillate within a tripartite latent state space.
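The Hidden Markov Modeling step named here treats the fMRI time series as being generated by a small set of hidden brain states with Markovian transitions between them. The sketch below is a generic, hypothetical illustration of that technique using the hmmlearn library, not the authors' pipeline; the data dimensions and the choice of three states (echoing the "tripartite" finding) are placeholders:

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # pip install hmmlearn

# Hypothetical data: 300 fMRI time points x 50 brain-network features
rng = np.random.default_rng(42)
X = rng.standard_normal((300, 50))

# Fit a 3-state Gaussian HMM; 3 components mirror the "tripartite" state
# space reported in the abstract, but the value would normally be selected
# by model comparison.
model = GaussianHMM(n_components=3, covariance_type="diag",
                    n_iter=100, random_state=0)
model.fit(X)

states = model.predict(X)   # most likely hidden state at each time point
print(model.transmat_)      # learned state-transition probability matrix
print(np.bincount(states))  # number of time points spent in each state
```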
Child Neuropsychol, January 2025
Luxembourg Centre for Educational Testing, University of Luxembourg, Esch-sur-Alzette, Luxembourg.
Previous research estimated a 3.4% prevalence of Cerebral Visual Impairment (CVI)-related visual problems among primary school children, potentially compromising students' performance. This study aimed to clarify how CVI-related visual difficulties relate to academic performance on standardized achievement tests.
Adv Med Educ Pract, January 2025
College of Health Sciences, University of Buraimi, Buraimi Governorate, Oman.
Introduction: Learning style denotes a learner's approach to acquiring, processing, interpreting, organizing, and contemplating information. VARK, formulated by Fleming and Mills (1992), assesses learning styles: Visual (V), Aural (A), Reading/Writing (R), and Kinesthetic (K). Visual learners prefer observing; Aural learners favor listening to lectures; Reading/Writing learners engage through texts and notes; Kinesthetic learners benefit from tactile activities.
Q J Exp Psychol (Hove), January 2025
Department of Otorhinolaryngology / Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands.
This study aims to provide a comprehensive picture of auditory emotion perception in cochlear implant (CI) users by (1) investigating emotion categorization in both vocal (pseudo-speech) and musical domains, and (2) examining how individual differences in residual acoustic hearing, sensitivity to voice cues (voice pitch, vocal tract length), and quality of life (QoL) might be associated with vocal emotion perception and, going a step further, with musical emotion perception as well. In 28 adult CI users, with or without self-reported acoustic hearing, we showed that sensitivity (d') scores for emotion categorization varied widely across participants, in line with previous research. However, within participants, the d' scores for vocal and musical emotion categorization were significantly correlated, indicating similar processing of auditory emotional cues across the pseudo-speech and music domains and robustness of the tests.
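The sensitivity measure d' mentioned here comes from signal detection theory: it is the difference between the z-transformed hit rate and false-alarm rate. The study's multi-alternative categorization task requires a more involved computation, but a minimal yes/no sketch conveys the idea; the counts below are hypothetical, not data from the study:

```python
from scipy.stats import norm

def d_prime(hits: int, misses: int,
            false_alarms: int, correct_rejections: int) -> float:
    """d' = z(hit rate) - z(false-alarm rate).

    A log-linear correction (adding 0.5 to each cell) keeps the rates
    away from 0 and 1, where the z-transform would be infinite.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical counts from a yes/no emotion-recognition block
print(d_prime(hits=40, misses=10, false_alarms=12, correct_rejections=38))  # ~1.51
```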