Biological vision relies on representations of the physical world at different levels of complexity. Relevant features span from simple low-level properties, such as contrast and spatial frequencies, to object-based attributes, such as shape and category. However, how these features are integrated into coherent percepts is still debated. Moreover, these dimensions often share common biases: for instance, stimuli from the same category (e.g., tools) may have similar shapes. Here, using magnetoencephalography, we revealed the temporal dynamics of feature processing in human subjects attending to objects from six semantic categories. By employing Relative Weights Analysis, we mitigated collinearity between model-based descriptions of stimuli and showed that low-level properties (contrast and spatial frequencies), shape (medial-axis) and category are represented within the same spatial locations early in time: 100-150 ms after stimulus onset. This fast and overlapping processing may result from independent parallel computations, with categorical representation emerging later than the onset of low-level feature processing, yet before shape coding. Categorical information is represented both before and after shape, suggesting a role for this feature in the refinement of categorical matching.
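The abstract names Relative Weights Analysis as the tool used to mitigate collinearity between the model-based stimulus descriptions, without giving implementation details. The sketch below is not the paper's pipeline; it is a minimal NumPy illustration of the generic technique (Johnson's relative weights), with hypothetical names (relative_weights, X, y), showing how a model's R-squared can be apportioned among correlated predictors by regressing on an orthogonal counterpart of the design matrix and mapping the squared weights back.

    import numpy as np

    def relative_weights(X, y):
        """Minimal sketch of Relative Weights Analysis (Johnson, 2000).

        Decorrelates the predictors through the symmetric square root of
        their correlation matrix, regresses the outcome on the orthogonal
        variables, and maps the squared weights back to the original
        predictors. The returned raw weights sum to the model R^2.
        """
        n, p = X.shape
        # Predictor correlation matrix and predictor-outcome correlations
        R = np.corrcoef(X, rowvar=False)
        r_xy = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(p)])

        # Symmetric square root of R via eigendecomposition
        evals, evecs = np.linalg.eigh(R)
        R_half = evecs @ np.diag(np.sqrt(np.clip(evals, 0, None))) @ evecs.T

        # Standardized weights of the orthogonal variables predicting y
        beta = np.linalg.solve(R_half, r_xy)

        # Apportion R^2: eps_j = sum_k Lambda[j, k]^2 * beta[k]^2
        eps = (R_half ** 2) @ (beta ** 2)
        return eps  # eps.sum() equals the model R^2

    # Toy usage: apportion R^2 among three deliberately collinear predictors
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    X[:, 1] += 0.8 * X[:, 0]                      # induce collinearity
    y = X @ np.array([0.5, 0.3, 0.1]) + rng.normal(size=200)
    w = relative_weights(X, y)
    print(w, w.sum())

In the MEG setting described above, the columns of X would correspond to the competing model-based predictors (low-level properties, medial-axis shape, category) and y to the neural signal being explained; the exact mapping used in the study may differ from this generic sketch.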

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC6527710
DOI: http://dx.doi.org/10.1038/s41598-019-43956-3
