What type of conceptual information about an object do we get at a brief glance? In two experiments, we investigated the nature of conceptual tokening, the moment at which conceptual information about an object is accessed. In a masked picture-word congruency task with dichoptic presentations at "brief" (50-60 ms) and "long" (190-200 ms) durations, participants judged the relation between a picture (e.g., a banana) and a word representing one of four property types for the object: a superordinate label (fruit), a basic-level label (banana), a high-salient feature (yellow), or a low-salient feature (peel). In Experiment 1, stimuli were presented in black and white; in Experiment 2, they were presented in red and blue, with participants wearing red-blue anaglyph glasses. This manipulation allowed stimuli to be projected independently to left- and right-hemisphere visual areas, probing the early effects of these projections on conceptual tokening. Results showed that superordinate and basic-level properties elicited faster and more accurate responses than high- and low-salient features at both presentation times. This advantage persisted even when the objects were divided into categories (e.g., animals, vegetables, vehicles, tools) and when the objects contained high-salient visual features. However, contrasts between categories showed that animals, fruits, and vegetables tended to be categorized at the superordinate level, whereas vehicles tended to be categorized at the basic level. Moreover, for a restricted class of objects, high-salient features representing diagnostic color information (yellow for the picture of a banana) facilitated congruency judgments to the same extent as superordinate and basic-level labels. We suggest that early access to object concepts yields superordinate and basic-level information, with features yielding effects only at a later stage of processing, unless they represent diagnostic color information. We discuss these results as advancing a unified theory of conceptual representation that integrates key postulates of atomism and feature-based theories.
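The task described in the abstract amounts to a factorial crossing of presentation duration, word property type, and picture-word congruency. The sketch below is only an illustration of that design, not the authors' materials or code: the single example item (banana), the exact durations, the mask placeholder, and the yes/no response format are assumptions drawn loosely from the abstract.

```python
# Illustrative sketch (not the authors' implementation) of the trial structure
# implied by the abstract: duration (brief vs. long) x word property type x congruency.

import itertools
import random

DURATIONS_MS = {"brief": 50, "long": 190}   # abstract reports 50-60 ms and 190-200 ms
PROPERTY_TYPES = ["superordinate", "basic", "high_salient", "low_salient"]
CONGRUENCY = ["congruent", "incongruent"]

# Hypothetical example item, following the banana example in the abstract.
EXAMPLE_WORDS = {
    "superordinate": "fruit",
    "basic": "banana",
    "high_salient": "yellow",
    "low_salient": "peel",
}

def build_trials(n_repeats: int = 1) -> list[dict]:
    """Fully cross duration x property type x congruency, then shuffle."""
    cells = itertools.product(DURATIONS_MS, PROPERTY_TYPES, CONGRUENCY)
    trials = [
        {"duration": d, "property_type": p, "congruency": c}
        for d, p, c in cells
        for _ in range(n_repeats)
    ]
    random.shuffle(trials)
    return trials

def describe_trial(trial: dict) -> str:
    """One trial timeline: briefly presented picture -> mask -> probe word -> judgment."""
    if trial["congruency"] == "congruent":
        word = EXAMPLE_WORDS[trial["property_type"]]
    else:
        # Incongruent probes would come from a different object; not modeled here.
        word = "<word of the same property type, from another object>"
    return (
        f"picture 'banana' for ~{DURATIONS_MS[trial['duration']]} ms -> mask -> "
        f"word {word!r} ({trial['property_type']}, {trial['congruency']}) -> yes/no response"
    )

if __name__ == "__main__":
    for t in build_trials():
        print(describe_trial(t))
```

Running the script prints one line per trial, which makes the 2 x 4 x 2 design easy to inspect; reaction time and accuracy would be the dependent measures collected at the response step.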
DOI: http://dx.doi.org/10.1111/cogs.70002
Related articles:

Sci Rep, January 2025. Bates College Program in Neuroscience, Bates College, Lewiston, ME, USA.
Cogn Sci, December 2024. Department of Psychology, Maynooth University. People are generally more accurate at categorizing objects at the basic level (e.g., dog) than at more general, superordinate categories.
Psychol Rev, December 2024. Department of Psychology, University of California, Berkeley. Object individuation studies have been a valuable tool in understanding the development of kind concepts. In this article, we review evidence from object individuation paradigms to argue that by their first birthday, infants represent at least three superordinate-level sortal kinds: OBJECT, ANIMATE, and AGENT (possibly also ARTIFACT). These superordinate sortal-kind concepts share key characteristics of adult kind concepts, such as prioritizing causal properties and having inductive potential.
Sci Rep, November 2024. Laboratory of Behavioral and Cognitive Neuroscience, Stanford University, Stanford, CA, USA. In this study, we examined the relatively unexplored realm of face perception, investigating activity within face-selective regions of the human brain during the observation of faces at both subordinate and superordinate levels. We recorded intracranial EEG signals from the ventral temporal cortex in neurosurgical patients implanted with subdural electrodes during viewing of face subcategories (human, mammal, bird, and marine faces) as well as various non-face control stimuli. The results revealed a noteworthy correlation in response patterns across all face-selective areas in the ventral temporal cortex, not only within the same face category but also extending to different face categories.
Cogn Sci, October 2024. Department of Psychology, Concordia University.