Predicting image memorability from evoked feelings.

Behav Res Methods

Department of Psychology, Columbia University, New York, NY, USA.

Published: January 2025

While viewing a visual stimulus, we often cannot tell whether it is inherently memorable or forgettable. However, the memorability of a stimulus can be quantified and partially predicted by a collection of conceptual and perceptual factors. Higher-level properties that represent the "meaningfulness" of a visual stimulus to viewers best predict whether it will be remembered or forgotten across a population. Here, we hypothesize that the feelings evoked by an image, operationalized as the valence and arousal dimensions of affect, significantly contribute to the memorability of scene images. We ran two complementary experiments to investigate the influence of affect on scene memorability, in the process creating a new image set (VAMOS) of hundreds of natural scene images for which we obtained valence, arousal, and memorability scores. From our first experiment, we found memorability to be highly reliable for scene images that span a wide range of evoked arousal and valence. From our second experiment, we found that both valence and arousal are significant but weak predictors of image memorability. Scene images were most memorable if they were slightly negatively valenced and highly arousing. Images that were extremely positive or unarousing were most forgettable. Valence and arousal together accounted for less than 8% of the variance in image memorability. These findings suggest that evoked affect contributes to the overall memorability of a scene image but, like other singular predictors, does not fully explain it. Instead, memorability is best explained by an assemblage of visual features that combine, in perhaps unintuitive ways, to predict what is likely to stick in our memory.
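The abstract's central quantitative claim is that valence and arousal jointly explain under 8% of the variance in memorability. A minimal sketch of that kind of analysis is below: an ordinary least-squares regression of memorability scores on valence and arousal, reporting R². The data here are synthetic stand-ins (the real analysis uses ratings from the VAMOS image set), and the coefficients are illustrative, not the paper's.

```python
import random

# Illustrative only: synthetic ratings in place of the VAMOS data.
random.seed(0)
n = 300
valence = [random.uniform(1, 9) for _ in range(n)]
arousal = [random.uniform(1, 9) for _ in range(n)]
# Weak affective contribution plus noise, mirroring the finding that
# affect explains only a small share of memorability variance.
memorability = [0.5 + 0.02 * a - 0.01 * v + random.gauss(0, 0.15)
                for v, a in zip(valence, arousal)]

def solve3(A, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(3):
            if r != i:
                f = M[r][i] / M[i][i]
                M[r] = [x - f * y for x, y in zip(M[r], M[i])]
    return [M[i][3] / M[i][i] for i in range(3)]

# Ordinary least squares with columns [intercept, valence, arousal].
X = [[1.0, v, a] for v, a in zip(valence, arousal)]
XtX = [[sum(x[j] * x[k] for x in X) for k in range(3)] for j in range(3)]
Xty = [sum(x[j] * y for x, y in zip(X, memorability)) for j in range(3)]
beta = solve3(XtX, Xty)

# R^2 = 1 - residual sum of squares / total sum of squares.
pred = [sum(b * xj for b, xj in zip(beta, x)) for x in X]
mean_y = sum(memorability) / n
ss_res = sum((y - p) ** 2 for y, p in zip(memorability, pred))
ss_tot = sum((y - mean_y) ** 2 for y in memorability)
r2 = 1.0 - ss_res / ss_tot
print(f"R^2 = {r2:.3f}")
```

Note that a purely linear model like this one cannot capture the reported nonlinearity (slightly negative valence and high arousal being most memorable); testing that would require quadratic terms.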

DOI: http://dx.doi.org/10.3758/s13428-024-02510-4

Publication Analysis

Top Keywords: valence arousal (16), scene images (16), image memorability (12), memorability scene (12), memorability (10), visual stimulus (8), scene (6), image (5), valence (5), arousal (5)

Similar Publications

Emotional experiences involve dynamic multisensory perception, yet most EEG research uses unimodal stimuli such as naturalistic scene photographs. Recent research suggests that realistic emotional videos reliably reduce the amplitude of a steady-state visual evoked potential (ssVEP) elicited by a flickering border. Here, we examine the extent to which this video-ssVEP measure compares with the well-established Late Positive Potential (LPP) that is reliably larger for emotional relative to neutral scenes.


Physiological Responses to Aversive and Non-aversive Audiovisual, Audio, and Visual Stimuli.

Biol Psychol

January 2025

Department of Psychology, Institute for Mind and Brain, University of South Carolina, Columbia, SC 29201, USA.

We examined differences in physiological responses to aversive and non-aversive naturalistic audiovisual stimuli and their auditory and visual components within the same experiment. We recorded five physiological measures that have been shown to be sensitive to affect: electrocardiogram, electromyography (EMG) for zygomaticus major and corrugator supercilii muscles, electrodermal activity (EDA), and skin temperature. Valence and arousal ratings confirmed that aversive stimuli were more negative in valence and higher in arousal than non-aversive stimuli.


In short-term ordered recall tasks, phonological similarity impedes item and order recall, while semantic similarity benefits item recall with a weak or null effect on order recall. Ishiguro and Saito recently suggested that these contradictory findings were due to an inadequate assessment of semantic similarity. They proposed a novel measure of semantic similarity based on the distance between items in a three-dimensional space composed of the semantic dimensions of valence, arousal, and dominance.
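The distance-based measure described above can be sketched simply: each item is a point in a three-dimensional valence-arousal-dominance (VAD) space, and semantic similarity is inversely related to Euclidean distance. The ratings below are invented for illustration, not drawn from any published norms.

```python
import math

def vad_distance(w1, w2):
    """Euclidean distance between two (valence, arousal, dominance) points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(w1, w2)))

# Hypothetical VAD ratings on 1-9 scales, for illustration only.
vad = {
    "joy":    (8.2, 5.9, 6.7),
    "terror": (1.9, 7.3, 3.2),
    "calm":   (7.0, 2.5, 6.0),
}

# Smaller distance -> greater semantic similarity on this measure.
d_joy_calm = vad_distance(vad["joy"], vad["calm"])
d_joy_terror = vad_distance(vad["joy"], vad["terror"])
print(d_joy_calm < d_joy_terror)  # prints True: joy is closer to calm
```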


Emotion recognition is an advanced technology for understanding human behavior and psychological states, with extensive applications in mental health monitoring, human-computer interaction, and affective computing. Based on electroencephalography (EEG), the biomedical signal naturally generated by the brain, this work proposes a resource-efficient multi-entropy fusion method for classifying emotional states. First, the Discrete Wavelet Transform (DWT) is applied to extract five brain rhythms (delta, theta, alpha, beta, and gamma).
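As a rough sketch of the DWT-plus-entropy pipeline: a multilevel DWT splits a signal into detail subbands (high to low frequency) plus a final approximation, which at suitable sampling rates roughly align with the gamma, beta, alpha, theta, and delta rhythms; an entropy measure per band then yields a compact feature vector. The Haar wavelet and Shannon entropy below are stand-in choices, not necessarily the ones used in the paper.

```python
import math

def haar_step(signal):
    """One Haar DWT level: pairwise averages (approx) and differences (detail)."""
    approx = [(signal[i] + signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def dwt_bands(signal, levels=5):
    """Detail bands ordered high to low frequency, plus final approximation."""
    bands, approx = [], signal
    for _ in range(levels):
        approx, detail = haar_step(approx)
        bands.append(detail)
    bands.append(approx)
    return bands

def shannon_entropy(band):
    """Shannon entropy of a band's normalized coefficient energies."""
    energy = [c * c for c in band]
    total = sum(energy) or 1.0
    p = [e / total for e in energy]
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

# Toy "EEG": a 10 Hz (alpha-like) sine sampled at 128 Hz for 2 seconds.
fs = 128
sig = [math.sin(2 * math.pi * 10 * t / fs) for t in range(2 * fs)]
features = [shannon_entropy(b) for b in dwt_bands(sig)]
print([round(f, 2) for f in features])  # one entropy value per subband
```

A real pipeline would fuse several entropy measures (e.g., sample or permutation entropy) per band before classification; this shows only the band-splitting and one entropy feature.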


Looking at the world often involves not just seeing things, but feeling things. Modern feedforward machine vision systems learn to perceive the world in the absence of active physiology, deliberative thought, or any form of feedback that resembles human affective experience. They therefore offer tools to demystify the relationship between seeing and feeling, and to assess how much of visually evoked affective experience may be a straightforward function of representation learning over natural image statistics. In this work, we deploy a diverse sample of 180 state-of-the-art deep neural network models trained only on canonical computer vision tasks to predict human ratings of arousal, valence, and beauty for images from multiple categories (objects, faces, landscapes, art) across two datasets.

