Crossmodal benefits to vocal emotion perception in cochlear implant users.

iScience

Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University Jena, 07743 Jena, Germany.

Published: December 2022

Speech comprehension counts as a benchmark outcome of cochlear implants (CIs), disregarding the communicative importance of efficiently integrating audiovisual (AV) socio-emotional information. We investigated the effects of time-synchronized facial information on vocal emotion recognition (VER). In Experiment 1, 26 CI users and normal-hearing (NH) individuals classified emotions for auditory-only, AV congruent, or AV incongruent utterances. In Experiment 2, we compared crossmodal effects between groups with adaptive testing, calibrating auditory difficulty via voice morphs ranging from emotional caricatures to anti-caricatures. CI users performed below NH individuals, and their VER correlated with quality of life. Importantly, CI users showed larger VER benefits from congruent facial emotional information even at equal auditory-only performance levels, suggesting that their larger crossmodal benefits result from deafness-related compensation rather than from degraded acoustic representations. Crucially, vocal caricatures enhanced CI users' VER. The findings advocate AV stimuli during CI rehabilitation and suggest caricaturing as a promising approach for both perceptual training and sound-processor technology.
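As a rough illustration of the adaptive testing described above, the sketch below implements a generic 1-up/2-down staircase over a morph level between anti-caricature (0.0) and caricature (1.0). The rule, step size, and the respond callback are illustrative assumptions, not the study's actual procedure.

    # Minimal sketch of an adaptive staircase (assumed 1-up/2-down rule).
    # respond(level) is a hypothetical callback returning True when the
    # listener classifies the emotion correctly at the given morph level,
    # where 0.0 = anti-caricature (hard) and 1.0 = caricature (easy).
    def staircase(respond, start=0.5, step=0.05, n_trials=60):
        level, streak = start, 0
        for _ in range(n_trials):
            if respond(level):
                streak += 1
                if streak == 2:                   # two correct: make it harder
                    level, streak = max(0.0, level - step), 0
            else:                                 # one error: make it easier
                level, streak = min(1.0, level + step), 0
        return level    # threshold estimate (about 71% correct for this rule)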


Source
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9791346 (PMC)
http://dx.doi.org/10.1016/j.isci.2022.105711 (DOI)


Similar Publications

Visible-infrared person re-identification (VI-ReID) is a challenging cross-modality retrieval task that aims to match a person across camera views in different spectral bands. Most existing works focus on learning shared feature representations from the final embedding space of advanced networks to alleviate modality differences between visible and infrared images. However, relying exclusively on high-level semantic information from a network's final layers restricts the shared feature representations and overlooks the benefits of low-level details.
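A minimal PyTorch sketch of the motivating idea, combining low-level and final-layer features in one shared embedding; the architecture and all names below are illustrative assumptions, not the publication's model.

    # Sketch: fuse a low-level and a high-level descriptor into one
    # shared embedding used for both visible and infrared inputs.
    import torch
    import torch.nn as nn
    from torchvision.models import resnet50

    class MultiLevelEmbed(nn.Module):
        def __init__(self, dim=512):
            super().__init__()
            r = resnet50(weights=None)
            self.stem = nn.Sequential(r.conv1, r.bn1, r.relu, r.maxpool)
            self.layer1, self.layer2 = r.layer1, r.layer2   # low-level stages
            self.layer3, self.layer4 = r.layer3, r.layer4   # high-level stages
            self.pool = nn.AdaptiveAvgPool2d(1)
            self.fc = nn.Linear(512 + 2048, dim)   # fuse both descriptors

        def forward(self, x):            # x: (B, 3, H, W), visible or infrared
            x = self.stem(x)
            low = self.layer2(self.layer1(x))       # keeps texture/shape cues
            high = self.layer4(self.layer3(low))    # semantic cues
            f = torch.cat([self.pool(low).flatten(1),
                           self.pool(high).flatten(1)], dim=1)
            return self.fc(f)            # shared cross-modality embedding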


Optical coherence tomography (OCT) and confocal microscopy are pivotal in retinal imaging, each with distinct advantages and limitations. OCT offers rapid, noninvasive imaging but can suffer from clarity issues and motion artifacts, while confocal microscopy provides high-resolution color images with cellular detail but is invasive and raises ethical concerns. To combine the benefits of both modalities, we propose a novel framework based on an unsupervised 3D CycleGAN for translating unpaired OCT images to confocal microscopy images.
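For orientation, here is a minimal sketch of the standard CycleGAN generator objective for one translation direction, applied to unpaired OCT and confocal tensors; G_oc, G_co, and D_c are placeholder networks, and the authors' 3D architecture and loss weights may differ.

    # Sketch of one direction of the CycleGAN objective (OCT -> confocal).
    import torch
    import torch.nn.functional as F

    def generator_loss(G_oc, G_co, D_c, oct_vol, lam=10.0):
        # G_oc: OCT -> confocal generator; G_co: confocal -> OCT generator
        # D_c: discriminator on confocal images (placeholder 3D networks)
        fake_conf = G_oc(oct_vol)            # translate OCT to confocal
        rec_oct = G_co(fake_conf)            # back-translate to OCT
        logits = D_c(fake_conf)
        adv = F.binary_cross_entropy_with_logits(
            logits, torch.ones_like(logits))  # fool the discriminator
        cyc = F.l1_loss(rec_oct, oct_vol)     # cycle-consistency term
        return adv + lam * cyc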


Data-driven calibration methods have shown promising results for accurate proprioception in soft robotics. This process can benefit greatly from numerical simulation, which is computationally efficient. However, the gap between the simulated and real domains limits the accurate, generalized application of the approach.
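As a generic illustration of narrowing a sim-to-real gap in data-driven calibration (an assumed setup, not necessarily this paper's method), one can pretrain a proprioception regressor on plentiful simulated sensor-to-pose pairs and fine-tune it on a few real measurements:

    # Sketch: pretrain on simulation, fine-tune on scarce real data.
    import torch
    import torch.nn as nn

    net = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 3))
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    def fit(sensors, poses, epochs):
        for _ in range(epochs):
            opt.zero_grad()
            loss = loss_fn(net(sensors), poses)
            loss.backward()
            opt.step()

    # Placeholder tensors standing in for simulated / measured samples.
    sim_sensors, sim_poses = torch.randn(4096, 16), torch.randn(4096, 3)
    real_sensors, real_poses = torch.randn(64, 16), torch.randn(64, 3)

    fit(sim_sensors, sim_poses, epochs=200)   # large simulated corpus first
    fit(real_sensors, real_poses, epochs=20)  # then a small real calibration set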


A foundation model with weak experiential guidance in detecting muscle invasive bladder cancer on MRI.

Cancer Lett

January 2025

Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, Jiangsu Province, 210029, PR China; The Affiliated Suqian First People's Hospital of Nanjing Medical University, Suqian, Jiangsu Province, PR China.

Preoperative detection of muscle-invasive bladder cancer (MIBC) remains a great challenge in practice. We aimed to develop and validate a deep Vesical Imaging Network (ViNet) model for detecting MIBC on high-resolution T2-weighted MR imaging (hrT2WI) in a multicenter cohort. ViNet uses a modified 3D ResNet, in which the encoder layers were pretrained with a self-supervised foundation model on over 40,000 cross-modal imaging datasets for transfer learning, and the classification modules were weakly supervised by an experiential knowledge-domain mask produced by an nnU-Net segmentation model.
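Below is a sketch of one common way a segmentation mask can weakly supervise classification: encoder features are pooled inside the (here nnU-Net-style) mask before the classification head, so the mask guides, rather than labels, the prediction. All names and shapes are illustrative assumptions rather than ViNet's actual design.

    # Sketch: mask-weighted pooling of 3D encoder features for weak supervision.
    import torch
    import torch.nn as nn

    class MaskGuidedHead(nn.Module):
        def __init__(self, in_ch=512, n_classes=2):
            super().__init__()
            self.fc = nn.Linear(in_ch, n_classes)

        def forward(self, feats, mask):
            # feats: (B, C, D, H, W) encoder output; mask: (B, 1, D, H, W) in [0, 1]
            mask = nn.functional.interpolate(mask, size=feats.shape[2:])
            w = mask / (mask.sum(dim=(2, 3, 4), keepdim=True) + 1e-6)
            pooled = (feats * w).sum(dim=(2, 3, 4))   # mask-weighted average
            return self.fc(pooled)                    # MIBC vs. non-MIBC logits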


Purpose: This study investigated whether 3-dimensional dose predictions from a model trained on one modality can be applied in a cross-modality automated planning workflow. Additionally, we explored the impact of integrating a multicriteria optimizer (MCO) on adapting predictions to different clinical preferences.

Methods And Materials: Using a previously created 3-stage U-Net in-house model trained on the 2020 American Association of Physicists in Medicine OpenKBP challenge data set (340 head and neck plans, all planned using 9-field static intensity modulated radiation therapy [IMRT]), we retrospectively generated dose predictions for 20 patients.

