Visual attention is widely considered a vital factor in the perception and analysis of a visual scene. Several studies have explored the effects and mechanisms of top-down attention, but the mechanisms that determine the attentional signal itself are less well explored. By developing a neuro-computational model of visual attention that includes the visual cortex-basal ganglia loop, we demonstrate how attentional alignment can evolve based on dopaminergic reward during a visual search task. Unlike most previous modeling studies of feature-based attention, we do not implement a manually predefined attention template. Dopamine-modulated covariance learning enables the basal ganglia to learn rewarded associations between the visual input and the attentional gain represented in the PFC of the model. Hence, the model shows human-like performance on a visual search task by optimally tuning the attention signal. In particular, as in humans, this reward-based tuning leads to an attentional template that is not centered on the target feature but on a relevant feature shifted away from the target, owing to the presence of highly similar distractors. Further analyses of the model show that attention is mainly guided by the signal-to-noise ratio between target and distractors.
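
The reward-gated, covariance-based plasticity named in the abstract can be pictured with a small toy simulation. The sketch below is an illustrative assumption, not the paper's actual basal ganglia equations: the specific rule (a three-factor covariance update gated by a binary reward signal), the layer sizes, the learning rate, and the target/distractor channel indices are all hypothetical choices, made only to show how rewarded trials can shape a mapping from visual input to an attentional gain template.

    # Toy illustration (not the paper's implementation) of a three-factor,
    # dopamine-gated covariance rule learning an attentional gain template.
    import numpy as np

    rng = np.random.default_rng(0)

    n_feat = 20                       # hypothetical number of feature channels
    W = np.zeros((n_feat, n_feat))    # plastic weights: visual input -> attentional gain
    alpha = 0.01                      # learning rate (illustrative value)
    decay = 0.001                     # mild weight decay keeps the toy bounded
    pre_mean = np.zeros(n_feat)       # running means used by the covariance terms
    post_mean = np.zeros(n_feat)

    target, distractor = 10, 11       # target channel and a highly similar distractor

    for t in range(3000):
        # visual input: target plus a similar distractor plus background noise
        visual = 0.1 * rng.random(n_feat)
        visual[target] += 1.0
        visual[distractor] += 0.8

        # attentional gain: current learned template plus exploration noise
        gain = W @ visual + 0.3 * rng.random(n_feat)

        # phasic "dopamine": reward only if the most boosted channel is the target
        reward = 1.0 if np.argmax(gain) == target else 0.0

        # update running means so the rule is covariance-based, not plain Hebbian
        pre_mean += 0.01 * (visual - pre_mean)
        post_mean += 0.01 * (gain - post_mean)

        # three-factor update: reward gates the pre/post covariance
        W += alpha * reward * np.outer(gain - post_mean, visual - pre_mean)
        W -= decay * W

    # learned gain template around the target/distractor channels
    print(np.round((W @ visual)[8:14], 2))

In such a toy setting, rewarded trials strengthen the association between the presented visual pattern and whichever gain units happened to be boosted when the target was selected, which is the essence of the mechanism described above; the paper's analysis of how the learned template shifts away from the target under distractor similarity goes beyond this sketch.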

Source: http://dx.doi.org/10.1016/j.neunet.2021.07.008

