We propose a novel modeling framework for characterizing the time course of decision-making in change detection based on information held in visual short-term memory (VSTM). Specifically, we ask whether change detection is better captured by a first-order integration model, in which information is pooled from each location, or a second-order integration model, in which each location is processed independently. We diagnose whether change detection across locations proceeds in serial or in parallel, how processing is affected by the stopping rule (i.e., detecting any change vs. detecting all changes; Experiment 1), and how the efficiency of detection is affected by the number of changes in the display (Experiment 2). We find that, although capacity is generally limited in both tasks, the architecture varies from parallel self-terminating in the OR task to serial self-terminating in the AND task. Our framework allows model comparisons across a large set of candidate models, ruling out several competing explanations of change detection. (PsycInfo Database Record (c) 2022 APA, all rights reserved).
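
For illustration only, the sketch below simulates how a parallel self-terminating and a serial self-terminating architecture yield different predicted detection times under an OR (detect-any-change) stopping rule. The exponential finishing times, the processing rate, the display size, and the all-locations-change (redundant-target) display are hypothetical assumptions made for this sketch, not the paper's actual model or parameters.

# A minimal illustrative sketch, not the paper's model: predicted detection
# times for a display in which every location changes, under an OR
# (detect-any-change) stopping rule. Per-location finishing times are assumed
# exponential; the rate and display size are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_locations = 4        # hypothetical display size
rate = 2.0             # hypothetical per-location processing rate
n_trials = 100_000

# One finishing time per location per simulated trial.
finish = rng.exponential(1.0 / rate, size=(n_trials, n_locations))

# Parallel self-terminating: all locations are processed at once; the first
# channel to detect its change ends the trial, so the predicted time is the
# minimum finishing time across locations.
parallel_rt = finish.min(axis=1)

# Serial self-terminating: locations are inspected one at a time; because
# every location changed, the first inspected location already triggers a
# response, so the predicted time is that single finishing time.
serial_rt = finish[:, 0]

print(f"parallel self-terminating mean RT: {parallel_rt.mean():.3f}")
print(f"serial self-terminating mean RT:   {serial_rt.mean():.3f}")

With these assumptions the parallel self-terminating architecture predicts faster mean detection than the serial one, which is the kind of qualitative contrast the framework exploits when comparing architectures.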

Source: http://dx.doi.org/10.1037/rev0000306

Publication Analysis

Top Keywords

change detection (20)
characterizing time (8)
time course (8)
integration model (8)
self-terminating task (8)
change (6)
detection (6)
course decision-making (4)
decision-making change (4)
detection propose (4)
