Speech comprehension is the benchmark outcome for cochlear implants (CIs), a focus that disregards the communicative importance of efficiently integrating audiovisual (AV) socio-emotional information. We investigated the effects of time-synchronized facial information on vocal emotion recognition (VER). In Experiment 1, 26 CI users and normal-hearing (NH) individuals classified emotions for auditory-only, AV congruent, or AV incongruent utterances. In Experiment 2, we compared crossmodal effects between groups using adaptive testing, calibrating auditory difficulty via voice morphs ranging from emotional caricatures to anti-caricatures. CI users performed worse than NH individuals, and their VER correlated with quality of life. Importantly, CI users showed larger VER benefits from congruent facial emotional information even at matched auditory-only performance levels, suggesting that their larger crossmodal benefits reflect deafness-related compensation rather than merely degraded acoustic representations. Crucially, vocal caricatures enhanced CI users' VER. These findings advocate the use of AV stimuli during CI rehabilitation and point to caricaturing as a promising approach for both perceptual training and sound processor technology.
Download full-text PDF | Source
---|---
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9791346 | PMC
http://dx.doi.org/10.1016/j.isci.2022.105711 | DOI Listing
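The caricature continuum used in Experiment 2 can be illustrated with a feature-space morph. Below is a minimal sketch, assuming (as in face-caricaturing work) that morphing amounts to linear interpolation or extrapolation of acoustic parameters relative to a neutral reference; the feature set and values are hypothetical, not the study's actual morphing procedure.

```python
import numpy as np

def morph_voice_features(neutral, emotional, alpha):
    """Morph acoustic features along an emotion continuum.

    alpha = 1      -> original emotional utterance
    0 < alpha < 1  -> anti-caricature (deviations attenuated toward neutral)
    alpha > 1      -> caricature (emotion-specific deviations exaggerated)
    """
    return neutral + alpha * (emotional - neutral)

# Hypothetical features per utterance: [mean F0 (Hz), F0 range (Hz), intensity (dB)]
neutral = np.array([120.0, 40.0, 60.0])
happy = np.array([180.0, 90.0, 68.0])

for alpha in (0.25, 0.5, 1.0, 1.5):
    print(f"alpha={alpha:4.2f} ->", morph_voice_features(neutral, happy, alpha))
```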
Sensors (Basel)
January 2025
Faculty of Applied Sciences, Macao Polytechnic University, Macao SAR 999078, China.
Visible-infrared person re-identification (VI-ReID) is a challenging cross-modality retrieval task that aims to match a person across camera views in different spectral bands. Most existing works focus on learning shared feature representations from the final embedding space of advanced networks to alleviate the modality differences between visible and infrared images. However, relying exclusively on high-level semantic information from a network's final layers restricts the shared feature representations and overlooks the benefits of low-level details.
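A generic way to avoid relying only on final-layer semantics is to pool and fuse features from several network stages into one shared embedding. The PyTorch sketch below illustrates the idea on a toy backbone; it is not the paper's architecture, and all layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class MultiLevelEmbed(nn.Module):
    """Toy backbone that fuses low-level and high-level features,
    rather than embedding from the final layer alone."""
    def __init__(self, dim=128):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 32, 3, 2, 1), nn.BatchNorm2d(32), nn.ReLU())
        self.stage2 = nn.Sequential(nn.Conv2d(32, 64, 3, 2, 1), nn.BatchNorm2d(64), nn.ReLU())
        self.stage3 = nn.Sequential(nn.Conv2d(64, 128, 3, 2, 1), nn.BatchNorm2d(128), nn.ReLU())
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Project pooled features from every stage into one shared space.
        self.proj = nn.Linear(32 + 64 + 128, dim)

    def forward(self, x):
        f1 = self.stage1(x)   # low-level: edges, textures
        f2 = self.stage2(f1)  # mid-level parts
        f3 = self.stage3(f2)  # high-level semantics
        pooled = [self.pool(f).flatten(1) for f in (f1, f2, f3)]
        return self.proj(torch.cat(pooled, dim=1))

# Visible and infrared images pass through the same shared embedder.
model = MultiLevelEmbed()
vis = torch.randn(4, 3, 128, 64)  # visible batch
ir = torch.randn(4, 3, 128, 64)   # infrared batch (single band replicated to 3 channels)
emb_vis, emb_ir = model(vis), model(ir)
print(emb_vis.shape, emb_ir.shape)  # torch.Size([4, 128]) each
```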
Biol Imaging
December 2024
Visual Information Laboratory, University of Bristol, Bristol, UK.
Optical coherence tomography (OCT) and confocal microscopy are pivotal in retinal imaging, each offering distinct advantages and limitations. OCT provides rapid, noninvasive imaging but can suffer from clarity issues and motion artifacts, while confocal microscopy provides high-resolution color images with cellular detail but is invasive and raises ethical concerns. To combine the benefits of both modalities, we propose a novel framework based on an unsupervised 3D CycleGAN for translating unpaired OCT images to confocal microscopy images.
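The core mechanism that makes unpaired OCT-to-confocal translation possible is CycleGAN's cycle-consistency loss. A minimal sketch follows, with toy 3D generators and single-channel volumes; adversarial losses, discriminators, and the paper's actual network design are omitted.

```python
import torch
import torch.nn as nn

def tiny_generator():
    # Minimal 3D conv translator; a real CycleGAN generator is far deeper.
    return nn.Sequential(
        nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
        nn.Conv3d(8, 1, 3, padding=1),
    )

G_oct2conf = tiny_generator()  # OCT -> confocal
G_conf2oct = tiny_generator()  # confocal -> OCT
l1 = nn.L1Loss()

oct_vol = torch.randn(2, 1, 16, 64, 64)   # unpaired OCT volumes
conf_vol = torch.randn(2, 1, 16, 64, 64)  # unpaired confocal volumes

# Cycle consistency: translating to the other domain and back should
# reconstruct the input; this is what permits training without paired data.
cycle_loss = l1(G_conf2oct(G_oct2conf(oct_vol)), oct_vol) \
           + l1(G_oct2conf(G_conf2oct(conf_vol)), conf_vol)
cycle_loss.backward()
print(float(cycle_loss))
```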
Soft Robot
January 2025
Department of Mechanical Engineering, Korea Advanced Institute of Science and Technology, Republic of Korea.
Data-driven calibration methods have shown promising results for accurate proprioception in soft robotics. This process can benefit greatly from numerical simulation, which makes data collection computationally efficient. However, the gap between the simulated and real domains limits the accurate, generalized application of the approach.
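One common way to narrow such a sim-to-real gap is to pretrain a calibration model on abundant simulated data and fine-tune it on a small set of real measurements. The sketch below illustrates this with synthetic data and scikit-learn's warm_start; the sensor dimensions and discrepancy model are hypothetical, not the paper's method.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 3))

# Plentiful simulated calibration data: sensor signals -> tip position.
X_sim = rng.normal(size=(5000, 8))
y_sim = X_sim @ W

# Scarce real data with a systematic sim-to-real discrepancy.
X_real = rng.normal(size=(200, 8))
y_real = X_real @ (W * 1.1) + 0.05

model = MLPRegressor(hidden_layer_sizes=(64,), max_iter=500,
                     warm_start=True, random_state=0)
model.fit(X_sim, y_sim)  # pretrain in simulation
print("real-domain R^2 before fine-tune:", model.score(X_real, y_real))
model.fit(X_real, y_real)  # fine-tune on the few real samples
print("real-domain R^2 after fine-tune: ", model.score(X_real, y_real))
```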
Cancer Lett
January 2025
Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, Jiangsu Province, 210029, PR China; The Affiliated Suqian First People's Hospital of Nanjing Medical University, Suqian, Jiangsu Province, PR China. Electronic address:
Preoperative detection of muscle-invasive bladder cancer (MIBC) remains a great challenge in practice. We aimed to develop and validate a deep Vesical Imaging Network (ViNet) model for the detection of MIBC using high-resolution T2-weighted MR imaging (hrT2WI) in a multicenter cohort. ViNet was designed using a modified 3D ResNet, in which the encoder layers were pretrained via a self-supervised foundation model on over 40,000 cross-modal imaging datasets for transfer learning, and the classification modules were weakly supervised by an experiential knowledge-domain mask derived from an nnUNet segmentation model.
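The transfer-learning pattern described, reusing a pretrained encoder and training only a task-specific head, can be sketched as follows in PyTorch. The encoder here is a small stand-in, not the modified 3D ResNet, and the checkpoint path is hypothetical.

```python
import torch
import torch.nn as nn

# Stand-in 3D encoder; in the paper this is a modified 3D ResNet whose
# weights come from a self-supervised foundation model.
encoder = nn.Sequential(
    nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1), nn.Flatten(),
)

# Transfer learning: load pretrained weights, freeze the encoder,
# and train only the classification head.
# encoder.load_state_dict(torch.load("pretrained_encoder.pt"))  # hypothetical checkpoint
for p in encoder.parameters():
    p.requires_grad = False

head = nn.Linear(32, 2)  # MIBC vs non-MIBC
model = nn.Sequential(encoder, head)

x = torch.randn(2, 1, 32, 96, 96)  # batch of hrT2WI volumes (toy size)
logits = model(x)
print(logits.shape)  # torch.Size([2, 2])
```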
Adv Radiat Oncol
December 2024
Department of Radiation Oncology, University of North Carolina, Chapel Hill, North Carolina.
Purpose: This study investigated the applicability of 3-dimensional dose predictions from a model trained on one modality to a cross-modality automated planning workflow. Additionally, we explored the impact of integrating a multicriteria optimizer (MCO) on adapting predictions to different clinical preferences.
Methods And Materials: Using a previously created in-house 3-stage U-Net model trained on the 2020 American Association of Physicists in Medicine OpenKBP challenge data set (340 head and neck plans, all planned using 9-field static intensity modulated radiation therapy [IMRT]), we retrospectively generated dose predictions for 20 patients.
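One plausible way a 3D dose prediction feeds an automated planning workflow is by converting it into per-structure dose-volume objectives that an optimizer (such as an MCO) can then trade off. The sketch below uses synthetic data; the masks, structures, and DVH goals are hypothetical, not the study's workflow.

```python
import numpy as np

rng = np.random.default_rng(1)
pred_dose = rng.gamma(2.0, 10.0, size=(64, 64, 64))  # toy predicted 3D dose (Gy)
ptv = rng.random((64, 64, 64)) < 0.05                # hypothetical target mask
oar = rng.random((64, 64, 64)) < 0.10                # hypothetical organ-at-risk mask

def dvh_point(dose, mask, q):
    """Dose received by at least q% of the structure volume (D_q%)."""
    return np.percentile(dose[mask], 100 - q)

# Turn the prediction into planning objectives a treatment planning
# system could consume; an MCO would then trade these off interactively.
objectives = {
    "PTV D95 >=": dvh_point(pred_dose, ptv, 95),
    "OAR D50 <=": dvh_point(pred_dose, oar, 50),
    "OAR Dmax <=": pred_dose[oar].max(),
}
for name, val in objectives.items():
    print(f"{name} {val:.1f} Gy")
```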