Variability in training unlocks generalization in visual perceptual learning through invariant representations.

Curr Biol

Neural Circuits and Cognition Lab, European Neuroscience Institute Göttingen, A Joint Initiative of the University Medical Center Göttingen and the Max Planck Society, Grisebachstraße 5, 37077 Göttingen, Germany; Perception and Plasticity Group, German Primate Center, Leibniz Institute for Primate Research, Kellnerweg 4, 37077 Göttingen, Germany.

Published: March 2023

Stimulus and location specificity have long been considered hallmarks of visual perceptual learning. This renders visual perceptual learning distinct from other forms of learning, where generalization can be more easily attained, and therefore unsuitable for practical applications, where generalization is key. Based on hypotheses derived from the structure of the visual system, we test here whether stimulus variability can unlock generalization in perceptual learning. We train subjects in orientation discrimination while varying the amount of variability in a task-irrelevant feature, spatial frequency. We find that, independently of task difficulty, this manipulation enables generalization of learning to new stimuli and locations, without negatively affecting the overall amount of learning on the task. We then use deep neural networks to investigate how variability unlocks generalization. We find that networks develop invariance to the task-irrelevant feature when trained with variable inputs. The degree of learned invariance strongly predicts generalization. A reliance on invariant representations can explain variability-induced generalization in visual perceptual learning. This suggests new targets for understanding the neural basis of perceptual learning in higher-order visual cortex and presents an easy-to-implement modification of common training paradigms that may benefit practical applications.
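The network simulations can be illustrated with a minimal sketch. The PyTorch toy model below is an assumption for illustration only, not the authors' architecture, stimuli, or analysis: it trains a small CNN on fine orientation discrimination of Gabor patches with the task-irrelevant spatial frequency either fixed or varied from trial to trial, and then quantifies how invariant the learned features are to spatial frequency with a normalized feature-distance score.

```python
# Minimal sketch (not the authors' code): train a tiny CNN on orientation
# discrimination of Gabor patches while varying the task-irrelevant feature
# (spatial frequency), then probe how invariant the learned features are to it.
import numpy as np
import torch
import torch.nn as nn

def gabor(theta, sf, size=32, sigma=6.0):
    """Oriented Gabor patch; theta in radians, sf in cycles per image."""
    ax = np.arange(size) - size / 2
    x, y = np.meshgrid(ax, ax)
    xr = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return (env * np.cos(2 * np.pi * sf * xr / size)).astype(np.float32)

def make_batch(n, variable_sf, ref=np.pi / 4, delta=np.deg2rad(5)):
    """Labels 0/1 = clockwise/counterclockwise of the reference orientation."""
    xs, ys = [], []
    for _ in range(n):
        label = np.random.randint(2)
        theta = ref + (delta if label else -delta)
        sf = np.random.uniform(2, 10) if variable_sf else 5.0  # task-irrelevant
        xs.append(gabor(theta, sf))
        ys.append(label)
    return torch.tensor(np.stack(xs)).unsqueeze(1), torch.tensor(ys)

class TinyCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(32, 2)

    def forward(self, x):
        return self.head(self.features(x))

def train(variable_sf, steps=300):
    net = TinyCNN()
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(steps):
        x, y = make_batch(64, variable_sf)
        loss = nn.functional.cross_entropy(net(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return net

def sf_invariance(net):
    """1 - normalized feature distance between the same orientation shown
    at two spatial frequencies; higher = more invariant representation."""
    with torch.no_grad():
        a = net.features(torch.tensor(gabor(np.pi / 4, 3.0))[None, None])
        b = net.features(torch.tensor(gabor(np.pi / 4, 9.0))[None, None])
    return float(1 - torch.norm(a - b) / (torch.norm(a) + torch.norm(b) + 1e-8))

if __name__ == "__main__":
    for variable in (False, True):
        net = train(variable)
        print(f"variable SF={variable}: invariance={sf_invariance(net):.3f}")
```

In this toy setting, the network trained with variable spatial frequencies would be expected to yield the higher invariance score, mirroring the relationship between learned invariance and generalization described in the abstract; the specific invariance measure used here is an illustrative choice.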


Source
http://dx.doi.org/10.1016/j.cub.2023.01.011

Publication Analysis

Top Keywords

perceptual learning: 24
visual perceptual: 16
learning: 9
generalization: 8
unlocks generalization: 8
generalization visual: 8
invariant representations: 8
practical applications: 8
task-irrelevant feature: 8
visual: 6

Similar Publications

Abstract visual reasoning based on algebraic methods.

Sci Rep

January 2025

School of Computer Science and Technology, Donghua University, Shanghai, 201620, China.

Extracting high-order abstract patterns from complex high-dimensional data forms the foundation of human cognitive abilities. Abstract visual reasoning involves identifying abstract patterns embedded within composite images and is considered a core competency of machine intelligence. Traditional neuro-symbolic methods often infer unknown objects through data fitting, without fully exploring the abstract patterns within composite images or the sequential sensitivity of visual sequences.


Adapting a style-based generative adversarial network to create images depicting cleft lip deformity.

Sci Rep

January 2025

Division of Plastic, Craniofacial and Hand Surgery, Sidra Medicine, and Weill Cornell Medical College, C1-121, Al Gharrafa St, Ar Rayyan, Doha, Qatar.

Training a machine learning system to evaluate any type of facial deformity is impeded by the scarcity of large datasets of high-quality, ethics board-approved patient images. We have built a deep learning-based cleft lip generator called CleftGAN designed to produce an almost unlimited number of high-fidelity facsimiles of cleft lip facial images with wide variation. A transfer learning protocol testing different versions of StyleGAN as the base model was undertaken.


There is an open debate on the role of artificial networks in understanding the visual brain. Internal representations of images in artificial networks develop human-like properties. In particular, evaluating distortions using differences between internal features correlates with human perception of distortion.
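As a toy illustration of that idea (an assumption for exposition, not the article's models or metrics), the sketch below compares a plain pixel-space distance with a distance computed between the internal features of a small convolutional network. A real perceptual metric would use features from a trained network rather than the random-weight stand-in used here.

```python
# Toy illustration (not the article's method): score image distortions by the
# distance between a network's internal features, alongside a pixel-space
# distance, for two different perturbations of the same reference image.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Small random-weight CNN standing in for a real (trained) feature extractor.
features = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AvgPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(4), nn.Flatten())

def pixel_distance(a, b):
    return float(torch.norm(a - b))

def feature_distance(a, b):
    with torch.no_grad():
        return float(torch.norm(features(a) - features(b)))

# Reference image plus two perturbations: additive noise and a small shift.
img = torch.rand(1, 3, 64, 64)
perturbed = {
    "noise": img + 0.1 * torch.randn_like(img),
    "shift": torch.roll(img, shifts=2, dims=-1),
}

for name, distorted in perturbed.items():
    print(f"{name}: pixel={pixel_distance(img, distorted):.2f}, "
          f"feature={feature_distance(img, distorted):.2f}")
```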


Can non-human primates extract the linear trend from a noisy scatterplot?

iScience

January 2025

Cognitive Neuroimaging Unit, CEA, INSERM, Université Paris-Saclay, NeuroSpin Center, 91191 Gif/Yvette, France.

Recent studies have shown that humans, regardless of age, education, and culture, can extract the linear trend of a noisy scatterplot. Although this capacity looks sophisticated, it may simply reflect extraction of the principal trend of the graph, as if the cloud of dots were processed as an oriented object. To test this idea, we trained Guinea baboons to associate arbitrary shapes with the increasing or decreasing trends of noiseless and noisy scatterplots, while varying the number of points, the noise level, and the regression slope.
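Computationally, the judgment described here reduces to reading the sign of a fitted slope. The sketch below (an illustration under assumed parameters, not the study's actual stimulus generation or analysis) builds noisy scatterplots with known slopes and classifies their trend with an ideal least-squares observer, showing how accuracy degrades as noise increases.

```python
# Minimal sketch (not the study's stimuli or analysis): generate a noisy
# scatterplot with a known slope, fit a least-squares line, and classify
# the trend as increasing or decreasing.
import numpy as np

rng = np.random.default_rng(0)

def scatterplot(slope, n_points=30, noise=0.5):
    """x, y coordinates of a noisy linear scatterplot."""
    x = rng.uniform(0, 1, n_points)
    y = slope * x + rng.normal(0, noise, n_points)
    return x, y

def judged_trend(x, y):
    """'increasing' or 'decreasing' from the sign of the fitted slope."""
    slope_hat = np.polyfit(x, y, deg=1)[0]
    return "increasing" if slope_hat > 0 else "decreasing"

# Accuracy of the ideal least-squares observer across noise levels.
for noise in (0.2, 0.5, 1.0):
    correct = 0
    for _ in range(200):
        true_slope = rng.choice([-1, 1]) * rng.uniform(0.2, 1.0)
        x, y = scatterplot(true_slope, noise=noise)
        truth = "increasing" if true_slope > 0 else "decreasing"
        correct += judged_trend(x, y) == truth
    print(f"noise={noise}: accuracy={correct / 200:.2f}")
```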


Digital health (DH) and artificial intelligence (AI) in healthcare are rapidly evolving but are often treated as synonymous by many healthcare authorities and practitioners. A deep understanding and clarification of these concepts are fundamental and a prerequisite for developing robust frameworks and practical guidelines to ensure the safety, efficacy, and effectiveness of DH solutions and AI-embedded technologies. Categorizing DH into technologies (DHTs) and services (DHSs) enables regulatory, health technology assessment (HTA), and reimbursement bodies to develop category-specific frameworks and guidelines for evaluating these solutions effectively.

