The formation of categories is known to distort perceptual space: representations are pushed away from category boundaries and pulled toward categorical prototypes. This phenomenon has been studied with artificially constructed objects, whose feature dimensions are easily defined and manipulated. How such category-induced perceptual distortions arise for complex, real-world scenes, however, remains largely unknown due to the technical challenge of measuring and controlling scene features. We address this question by generating realistic scene images from a high-dimensional continuous space with generative adversarial networks and presenting the images as stimuli in a novel learning task. Participants learned to categorize the scene images along arbitrary category boundaries and later reconstructed the same scenes from memory. Systematic biases in reconstruction errors closely tracked each participant's subjective category boundaries. These findings suggest that the perception of global scene properties is warped to align with a newly learned category structure after only a brief learning experience.
DOI: http://dx.doi.org/10.3758/s13423-024-02484-6