Little is known about how listeners represent another person's spatial perspective during language processing (e.g., two people looking at a map from different angles). Can listeners use contextual cues such as speaker identity to access a representation of the interlocutor's spatial perspective? In two eye-tracking experiments, participants received auditory instructions to move objects around a screen from two randomly alternating spatial perspectives (45° vs. 315° or 135° vs. 225° rotations from the participant's viewpoint). Instructions were spoken either by one voice, where the speaker's perspective switched at random, or by two voices, where each speaker maintained one perspective. Analysis of participant eye-gaze showed that interpretation of the instructions improved when each viewpoint was associated with a different voice. These findings demonstrate that listeners can learn mappings between individual talkers and viewpoints, and use these mappings to guide online language processing.
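For readers unfamiliar with viewpoint rotations, the minimal sketch below illustrates what a 45° or 315° rotation of perspective means geometrically: a point on the shared display is mapped from the participant's frame of reference into a speaker's rotated frame. The function name, the origin-at-screen-centre convention, and the counter-clockwise sign convention are illustrative assumptions for this sketch only, not materials from the study.

```python
import math

def rotate_point(x, y, angle_deg):
    """Map a screen coordinate (x, y) into a frame rotated by angle_deg
    about the display centre (taken here as the origin).

    Positive angles rotate counter-clockwise; this sign convention is an
    assumption made for illustration.
    """
    theta = math.radians(angle_deg)
    xr = x * math.cos(theta) - y * math.sin(theta)
    yr = x * math.sin(theta) + y * math.cos(theta)
    return xr, yr

# An object the participant sees at (1, 0) corresponds, under a speaker
# perspective rotated 45 degrees, to roughly (0.71, 0.71) in that frame,
# and under a 315-degree rotation to roughly (0.71, -0.71).
print(rotate_point(1.0, 0.0, 45))   # (0.7071..., 0.7071...)
print(rotate_point(1.0, 0.0, 315))  # (0.7071..., -0.7071...)
```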
DOI: http://dx.doi.org/10.1016/j.cognition.2015.11.011