This study assessed the extent to which second-language learners are sensitive to phonetic information contained in visual cues when identifying a non-native phonemic contrast. In experiment 1, Spanish and Japanese learners of English were tested on their perception of a labial/labiodental consonant contrast in audio (A), visual (V), and audio-visual (AV) modalities. Spanish students showed better performance overall, and much greater sensitivity to visual cues than Japanese students. Both learner groups achieved higher scores in the AV than in the A test condition, thus showing evidence of audio-visual benefit. Experiment 2 examined the perception of the less visually salient /l/-/r/ contrast in Japanese and Korean learners of English. Korean learners obtained much higher scores in the auditory and audio-visual conditions than in the visual condition, while Japanese learners generally performed poorly in both modalities. Neither group showed evidence of audio-visual benefit. These results show the impact of the learner's language background and the visual salience of the contrast on the use of visual cues for a non-native contrast. Significant correlations between scores in the auditory and visual conditions suggest that increasing auditory proficiency in identifying a non-native contrast is linked with increasing proficiency in using visual cues to the contrast.
DOI: http://dx.doi.org/10.1121/1.2166611
Neural Netw
January 2025
School of Automotive Studies, Tongji University, Shanghai 201804, China.
Integrating visual features has been proven effective for deep learning-based speech quality enhancement, particularly in highly noisy environments. However, these models may suffer from redundant information, resulting in performance deterioration when the signal-to-noise ratio (SNR) is relatively high. Real-world noisy scenarios typically exhibit widely varying noise levels.
Front Robot AI
January 2025
IDLab, Ghent University-imec, Ghent, Belgium.
Smart cities deploy various sensors such as microphones and RGB cameras to collect data to improve the safety and comfort of the citizens. As data annotation is expensive, self-supervised methods such as contrastive learning are used to learn audio-visual representations for downstream tasks. Focusing on surveillance data, we investigate two common limitations of audio-visual contrastive learning: false negatives and the minimal sufficient information bottleneck.
Brain Behav Immun
January 2025
Department of Biology, Neuroendocrinology and Human Biology Unit, Institute for Animal Cell- and Systems Biology, Faculty of Mathematics, Informatics and Natural Sciences, Universität Hamburg, D-22085 Hamburg, Germany. Electronic address:
This study investigated the neural correlates of perceiving visual contagion cues characteristic of respiratory infections through functional magnetic resonance imaging (fMRI). Sixty-two participants (32 female, 30 male; ∼25 years on average) watched short videos depicting either contagious or non-contagious everyday situations while their brain activation was continuously measured. We further measured the release of secretory immunoglobulin A (sIgA) in saliva to examine the first-line defensive response of the mucosal immune system.
Int Conf Indoor Position Indoor Navig
October 2024
Department of Computer Science & Engineering, University of California, Santa Cruz, Santa Cruz, USA.
Navigating unfamiliar environments can be challenging for visually impaired individuals due to difficulties in recognizing distant landmarks or visual cues. This work focuses on a particular form of wayfinding, specifically backtracking a previously taken path, which can be useful for blind pedestrians. We propose a hands-free indoor navigation solution using a smartphone without relying on pre-existing maps or external infrastructure.
iScience
January 2025
Friedrich Miescher Institute for Biomedical Research, Basel, Switzerland.
Recognizing conspecifics (animals of the same species) and keeping track of changes in the social environment are essential to all animals. While the molecules, circuits, and brain regions that control social behaviors across species are studied in depth, the neural mechanisms that enable the recognition of social cues remain largely obscure. Recent evidence suggests that social cues across sensory modalities converge in a thalamic area conserved across vertebrates.