Thanks to large-scale labeled training data, deep neural networks (DNNs) have achieved remarkable success in many vision and multimedia tasks. However, because of domain shift, the knowledge learned by well-trained DNNs does not generalize well to new domains or datasets with few labels. Unsupervised domain adaptation (UDA) studies the problem of transferring models trained on a labeled source domain to an unlabeled target domain. In this article, we focus on UDA for visual emotion analysis, covering both emotion distribution learning and dominant emotion classification. Specifically, we design a novel end-to-end cycle-consistent adversarial model, called CycleEmotionGAN++. First, we generate an adapted domain to align the source and target domains at the pixel level by improving CycleGAN with a multiscale structured cycle-consistency loss; during image translation, we propose a dynamic emotional semantic consistency loss to preserve the emotion labels of the source images. Second, we train a transferable task classifier on the adapted domain while aligning the adapted and target domains at the feature level. We conduct extensive UDA experiments on the Flickr-LDL and Twitter-LDL datasets for emotion distribution learning, and on the ArtPhoto and Flickr and Instagram (FI) datasets for dominant emotion classification. The results demonstrate significant improvements of the proposed CycleEmotionGAN++ over state-of-the-art UDA approaches.
DOI: http://dx.doi.org/10.1109/TCYB.2021.3062750
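To make the two key loss terms in the abstract concrete, below is a minimal sketch in PyTorch of how they might be implemented. This is not the authors' released code: the names (emotion_classifier, generator, x_src, x_cycled), the choice of KL divergence for the semantic consistency term, and the average-pooling pyramid with scales (1, 2, 4) are all illustrative assumptions; the paper's multiscale structured term may instead rely on a structural-similarity metric.

import torch
import torch.nn.functional as F

def emotional_semantic_consistency_loss(emotion_classifier, generator, x_src):
    # Emotion distribution predicted on the original source image; treated
    # as a (dynamically updated) target, so no gradient flows through it.
    with torch.no_grad():
        p_src = F.softmax(emotion_classifier(x_src), dim=1)
    # Translate the source image into the adapted (target-styled) domain.
    x_adapted = generator(x_src)
    log_p_adapted = F.log_softmax(emotion_classifier(x_adapted), dim=1)
    # KL(p_src || p_adapted): translation should not change the emotion labels.
    return F.kl_div(log_p_adapted, p_src, reduction='batchmean')

def multiscale_cycle_consistency_loss(x, x_cycled, scales=(1, 2, 4)):
    # L1 reconstruction error measured on an average-pooling pyramid --
    # one plausible reading of a "multiscale structured" cycle loss.
    loss = x.new_zeros(())
    for s in scales:
        loss = loss + F.l1_loss(F.avg_pool2d(x_cycled, s), F.avg_pool2d(x, s))
    return loss / len(scales)

In training, the second term would be evaluated on an image and its round trip through both generators (source -> target -> source, and symmetrically for the target domain), exactly as in standard CycleGAN, with the pyramid replacing the usual single-scale L1 penalty.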