AI Article Synopsis

  • Visual working memory (VWM) lets us hold a limited amount of visual information in mind, but how it is organized is debated: object-based theories claim that features are integrated into objects, whereas feature-based theories hold that features are represented independently.
  • Recent studies show that the features of an object can be forgotten independently, challenging the idea of perfect integration, yet features may still be organized in a related, object-based manner while they are held in memory.
  • Using a new task that assesses two features of the same object at the same time, the researchers found that feature representations are dependent in both report procedures, but more so when the two features are reported simultaneously rather than sequentially, indicating a flexible, object-based organization of features in VWM.

Article Abstract

Visual working memory (VWM) allows us to actively represent a limited amount of visual information in mind. Although its severe capacity limit is widely accepted, researchers disagree on the nature of its representational unit. Object-based theories argue that VWM organizes feature representations into integrated representations, whereas feature-based theories argue that VWM represents visual features independently. Supporting a feature-based account of VWM, recent studies have demonstrated that features comprising an object can be forgotten independently. Although evidence of feature-based forgetting invalidates a pure object-based account of VWM that assumes perfect integration of feature representations, it is possible that feature representations may be organized in a dependent manner on the basis of objects when they exist in memory. Furthermore, many previous studies prompted participants to recall object features independently by sequentially displaying a response probe for each feature (i.e., sequential estimation procedure), and this task demand might have promoted the independence of feature representations in VWM. To test these possibilities, we created a novel task to simultaneously capture the representational quality of two features of the same object (i.e., simultaneous estimation procedure) and tested their dependence across the entire spectrum of representational quality. Here, we found that the quality of feature representations within the same object covaried reliably in both sequential and simultaneous estimation procedures, but this representational dependence was statistically stronger in the simultaneous estimation procedure than in the sequential estimation procedure. Furthermore, we confirmed that neither the shared spatial location nor simultaneous estimation of two features was sufficient to induce representational dependence in VWM. Thus, our results demonstrate that feature representations in VWM are organized in a dependent manner on the basis of objects, but the degree of dependence can vary based on the current task demand.
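The key quantity in the abstract is whether the representational quality of two features drawn from the same object covaries across trials. As a rough illustration only (not the authors' analysis), the sketch below simulates per-trial report errors for two features of one object and correlates their unsigned magnitudes; the variable names and simulated data are hypothetical.

```python
import numpy as np
from scipy import stats

# Illustrative (simulated) per-trial recall errors, in degrees, for two
# features of the same object (e.g., color and orientation reported on
# continuous response wheels). Hypothetical values, not the published data.
rng = np.random.default_rng(0)
n_trials = 200
color_error = rng.normal(0, 15, size=n_trials)
orientation_error = rng.normal(0, 15, size=n_trials)

# One simple index of representational dependence: do trials on which one
# feature is reported precisely also tend to yield a precise report of the
# other feature? Correlate the unsigned errors across trials.
rho, p = stats.spearmanr(np.abs(color_error), np.abs(orientation_error))
print(f"Spearman rho = {rho:.3f}, p = {p:.3f}")
# Under full independence rho is near zero; a reliably positive rho means
# the quality of the two feature representations covaries within objects.
```

In this framing, a reliably positive correlation that is larger when both features are reported simultaneously than when they are reported sequentially would mirror the pattern the abstract describes.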

Source
http://dx.doi.org/10.1016/j.cognition.2020.104579

Publication Analysis

Top Keywords

  • feature representations (24)
  • simultaneous estimation (20)
  • estimation procedure (20)
  • visual working (8)
  • working memory (8)
  • representations (8)
  • vwm (8)
  • theories argue (8)
  • argue vwm (8)
  • features independently (8)

Similar Publications

It has long been hypothesized that episodic memory supports adaptive decision making by enabling mental simulation of future events. Yet, attempts to characterize this process are surprisingly rare. On one hand, memory research is often carried out in settings that are far removed from ecological contexts of decision making.

The human visual system possesses a remarkable ability to detect and process faces across diverse contexts, including the phenomenon of face pareidolia, the perception of faces in inanimate objects. Despite extensive research, it remains unclear why the visual system employs such broadly tuned face detection capabilities. We hypothesized that face pareidolia results from the visual system's optimization for recognizing both faces and objects.

Goal-directed behavior requires the effective suppression of distractions to focus on the task at hand. Although experimental evidence suggests that brain areas in the prefrontal and parietal lobe contribute to the selection of task-relevant and the suppression of task-irrelevant stimuli, how conspicuous distractors are encoded and effectively ignored remains poorly understood. We recorded neuronal responses from 2 regions in the prefrontal and parietal cortex of macaques, the frontal eye fields (FEFs) and the lateral intraparietal (LIP) area, during a visual search task, in the presence and absence of a salient distractor.

As the number of patients with Alzheimer's disease (AD) increases, the demand for early diagnosis and intervention is becoming ever more urgent. Traditional detection methods for AD rely mainly on clinical symptoms, biomarkers, and imaging examinations. However, these methods have limitations for early detection, such as highly subjective diagnostic criteria, high detection costs, and high misdiagnosis rates.

Given the same external input, one's understanding of that input can differ based on internal contextual knowledge. Where and how does the brain represent latent belief frameworks that interact with incoming sensory information to shape subjective interpretations? In this study, participants listened to the same auditory narrative twice, with a plot twist in the middle that dramatically shifted their interpretations of the story. Using a robust within-subject whole-brain approach, we leveraged shifts in neural activity between the two listens to identify where latent interpretations are represented in the brain.
