AI Article Synopsis

  • People can remember many real-world objects in visual long-term memory, but it's unclear if they store them as whole entities or as separate features.
  • Experiments showed that participants recognized which exemplars and which states they had seen, but often misremembered which state (e.g., open vs. closed) went with which exemplar, indicating these features are not tightly bound in memory.
  • Further studies confirmed this independence: participants recognized exemplars equally well whether the objects reappeared in their original state or a new one, suggesting objects are stored as bundles of separable features rather than as unitary wholes.

Article Abstract

People can store thousands of real-world objects in visual long-term memory with high precision. But are these objects stored as unitary, bound entities, as often assumed, or as bundles of separable features? We tested this in several experiments. In the first series of studies, participants were instructed to remember specific exemplars of real-world objects presented in a particular state (e.g., open/closed, full/empty, etc.), and then were asked to recognize either which exemplars they had seen (e.g., I saw this coffee mug), or which exemplar-state conjunctions they had seen (e.g., I saw this coffee mug and it was full). Participants had a large number of within-category confusions, for example misremembering which states went with which exemplars, while simultaneously showing strong memory for the features themselves (e.g., which states they had seen, which exemplars they had seen). In a second series of studies, we found further evidence of independence: participants were very good at remembering which exemplars they had seen independently of whether these items were presented in a new or old state, but the same did not occur for features known to be truly holistically represented. Thus, we find through 2 lines of evidence that the features of real-world objects that support exemplar discrimination and state discrimination are not bound, suggesting visual objects are not inherently unitary entities in memory. (PsycINFO Database Record (c) 2020 APA, all rights reserved).
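To make the independence logic concrete, here is a minimal simulation sketch (Python/NumPy, with hypothetical retrieval rates; this is not the authors' analysis or stimuli). If exemplar identity and state are retrieved independently, swap errors, i.e., remembering the correct exemplar paired with the wrong state, should occur at roughly P(exemplar correct) x P(state wrong), whereas a fully bound object trace predicts that a retrieved exemplar brings its state along, so swaps should be rare.

```python
# Illustrative sketch only: hypothetical retrieval rates, not data or code
# from the article. It contrasts the swap-error rates predicted by
# independent feature storage versus a single bound object trace.
import numpy as np

rng = np.random.default_rng(0)
n_items = 100_000
p_exemplar, p_state = 0.85, 0.70   # hypothetical marginal memory accuracies

# Independent-features model: exemplar identity and state are retrieved
# separately, so one can succeed while the other fails.
exemplar_ok = rng.random(n_items) < p_exemplar
state_ok = rng.random(n_items) < p_state
swap_rate_independent = np.mean(exemplar_ok & ~state_ok)

# Bound-object model: one all-or-none trace carries both features, so a
# remembered exemplar is never paired with the wrong state.
swap_rate_bound = 0.0

print(f"predicted swaps (independent): {p_exemplar * (1 - p_state):.3f}")
print(f"simulated swaps (independent): {swap_rate_independent:.3f}")
print(f"swaps under bound storage:     {swap_rate_bound:.3f}")
```

Under the independent model the simulated swap rate matches the product of the marginals, which is the kind of within-category confusion pattern the abstract reports.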

Source
http://dx.doi.org/10.1037/xge0000664

Publication Analysis

Top Keywords

real-world objects (16)
features real-world (8)
long-term memory (8)
series studies (8)
presented state (8)
states exemplars (8)
objects (6)
exemplars (5)
independent storage (4)
features (4)

Similar Publications

MEVDT: Multi-modal event-based vehicle detection and tracking dataset.

Data Brief

February 2025

Department of Electrical and Computer Engineering, University of Michigan-Dearborn, 4901 Evergreen Rd, Dearborn, 48128 MI, USA.

In this data article, we introduce the Multi-Modal Event-based Vehicle Detection and Tracking (MEVDT) dataset. This dataset provides a synchronized stream of event data and grayscale images of traffic scenes, captured using the Dynamic and Active-Pixel Vision Sensor (DAVIS) 240c hybrid event-based camera. MEVDT comprises 63 multi-modal sequences with approximately 13k images, 5M events, 10k object labels, and 85 unique object tracking trajectories.

GFA-Net: Geometry-Focused Attention Network for Six Degrees of Freedom Object Pose Estimation.

Sensors (Basel)

December 2024

Key Laboratory of Optoelectronic Technology and Systems of the Education Ministry of China, Chongqing University, Chongqing 400044, China.

Six degrees of freedom (6-DoF) object pose estimation is essential for robotic grasping and autonomous driving. While estimating pose from a single RGB image is highly desirable for real-world applications, it presents significant challenges. Many approaches incorporate supplementary information, such as depth data, to derive valuable geometric characteristics.

Apricot trees are a critical agricultural resource. Conventional methods for detecting pests and diseases in these trees are notably labor-intensive, yet many conditions affecting apricot trees manifest distinct visual symptoms that are well suited to precise identification and classification via deep learning techniques.

When rendering the visual scene for near-eye head-mounted displays, accurate knowledge of the geometry of the displays, scene objects, and eyes is required to generate correct binocular images. Despite design and calibration efforts, these quantities are subject to positional and measurement errors, resulting in some misalignment of the images projected to each eye. Previous research has investigated these effects in virtual reality (VR) setups, where they can trigger symptoms such as eye strain and nausea.

Point cloud registration is pivotal across various applications, yet traditional methods rely on unordered point clouds, leading to significant challenges in terms of computational complexity and feature richness. These methods often use k-nearest neighbors (KNN) or neighborhood ball queries to access local neighborhood information, which is not only computationally intensive but also confines the analysis within the object's boundary, making it difficult to determine if points are precisely on the boundary using local features alone. This indicates a lack of sufficient local feature richness.
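
As a side note on the neighborhood queries mentioned above, the sketch below (Python with NumPy and SciPy; the point cloud, k, and radius are arbitrary placeholders, not values from that article) shows the two standard lookups, a k-nearest-neighbor query and a radius ball query, on an unordered point cloud indexed with a KD-tree.

```python
# Generic illustration of KNN and ball queries on an unordered point cloud;
# the data, k, and radius here are arbitrary and not taken from the article.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
points = rng.random((5000, 3))            # synthetic unordered 3-D point cloud
tree = cKDTree(points)                    # spatial index over the points

# k-nearest-neighbor query: the 16 closest points to every point.
dists, knn_idx = tree.query(points, k=16)

# Neighborhood ball query: all points within radius 0.05 of every point.
ball_idx = tree.query_ball_point(points, r=0.05)

print(knn_idx.shape)        # (5000, 16): fixed-size neighborhoods
print(len(ball_idx[0]))     # variable-size neighborhood for the first point
```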
