We only remember a fraction of what we see, including images that are highly memorable and those that we encounter during highly attentive states. However, most models of human memory disregard both an image's memorability and an individual's fluctuating attentional states. Here, we build the first model of memory that synthesizes these two disparate factors to predict subsequent image recognition. We combine memorability scores of 1100 images (Experiment 1, n = 706) with attentional state indexed by response time on a continuous performance task (Experiments 2 and 3, n = 57 total). Image memorability and sustained attentional state each explained significant variance in image memory, and a joint model of memory including both factors outperformed models including either factor alone. Furthermore, models including both factors successfully predicted memory in an out-of-sample group. Thus, building models based on individual- and image-specific factors allows for directed forecasting of our memories.

SIGNIFICANCE STATEMENT: Although memory is a fundamental cognitive process, memory failures often cannot be predicted until it is too late. In this study, however, we show that much of memory is surprisingly predetermined ahead of time, by factors shared across the population and by factors highly specific to each individual. Specifically, we build a new multidimensional model that predicts memory based solely on the images a person sees and when they see them. This research synthesizes findings from disparate domains, including computer vision, attention, and memory, into a single predictive model. These findings have broad implications for fields such as education, business, and marketing, where predicting (and even shaping) what information people will remember is a top priority.
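To make the abstract's model-comparison logic concrete, here is a minimal sketch on synthetic data: a logistic regression predicting recognition from a per-image memorability score and a per-trial attentional-state index. The generative model, coefficients, and variable names below are illustrative assumptions, not the authors' fitted model; the point is only that a joint model can fit better than either single-factor model when both factors carry signal.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Hypothetical stand-ins for the paper's measures:
# memorability: population-level score per image (0-1)
# attention: z-scored attentional state derived from response times
memorability = rng.uniform(0, 1, n)
attention = rng.normal(0, 1, n)

# Assumed generative model: both factors additively raise recognition odds
logit = -1.0 + 2.0 * memorability + 0.8 * attention
remembered = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

def fit_logistic(features, y, lr=0.1, steps=3000):
    """Plain gradient-ascent logistic regression (no regularization)."""
    X = np.column_stack([np.ones(len(y)), features])  # add intercept
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))
        w += lr * X.T @ (y - p) / len(y)  # gradient of the log-likelihood
    return w, X

def log_likelihood(w, X, y):
    p = 1 / (1 + np.exp(-X @ w))
    return float(np.sum(y * np.log(p) + (1 - y) * np.log(1 - p)))

results = {}
for name, feats in [("memorability only", memorability[:, None]),
                    ("attention only", attention[:, None]),
                    ("joint", np.column_stack([memorability, attention]))]:
    w, X = fit_logistic(feats, remembered)
    results[name] = log_likelihood(w, X, remembered)
    print(f"{name}: log-likelihood = {results[name]:.1f}")
```

On this synthetic data the joint model attains the highest log-likelihood, mirroring the abstract's finding that the two-factor model outperforms either single-factor model.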
DOI: http://dx.doi.org/10.1016/j.cognition.2022.105201