Publications by authors named "George A Alvarez"

The rapid release of high-performing computer vision models offers new potential to study the impact of different inductive biases on the emergent brain alignment of learned representations. Here, we perform controlled comparisons among a curated set of 224 diverse models to test the impact of specific model properties on visual brain predictivity, a process requiring over 1.8 billion regressions and 50…
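
The abstract does not describe the regression pipeline; a common approach for scoring "brain predictivity" in this literature, sketched here purely as an illustration (the function name, fold count, and ridge penalty are assumptions, not details from the paper), is cross-validated ridge regression from model activations to voxel responses:

```python
import numpy as np

def brain_predictivity(features, voxels, alpha=1.0, n_folds=5):
    """Cross-validated ridge regression from model features to voxel
    responses; returns the mean Pearson r per voxel across folds."""
    n = features.shape[0]
    folds = np.array_split(np.arange(n), n_folds)
    scores = []
    for test_idx in folds:
        train_idx = np.setdiff1d(np.arange(n), test_idx)
        X, Y = features[train_idx], voxels[train_idx]
        # Closed-form ridge solution: W = (X'X + alpha*I)^(-1) X'Y
        W = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ Y)
        pred, true = features[test_idx] @ W, voxels[test_idx]
        # Pearson r per voxel between predicted and held-out responses
        pz = (pred - pred.mean(0)) / (pred.std(0) + 1e-9)
        tz = (true - true.mean(0)) / (true.std(0) + 1e-9)
        scores.append((pz * tz).mean(0))
    return np.mean(scores, axis=0)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))          # 100 stimuli x 10 model features
Y = X @ rng.normal(size=(10, 3)) + 0.1 * rng.normal(size=(100, 3))
r = brain_predictivity(X, Y)            # one score per simulated voxel
```

One regression per voxel (or per fold and voxel) is what makes the count balloon into the billions when repeated over hundreds of models and many brain regions.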

Modular and distributed coding theories of category selectivity along the human ventral visual stream have long existed in tension. Here, we present a reconciling framework, contrastive coding, based on a series of analyses relating category selectivity within biological and artificial neural networks. We discover that, in models trained with contrastive self-supervised objectives over a rich natural image diet, category-selective tuning naturally emerges for faces, bodies, scenes, and words.

Over the last few decades, psychologists have developed precise quantitative models of human recall performance in visual working memory (VWM) tasks. However, these models are tailored to a particular class of artificial stimulus displays and simple feature reports from participants (e.g., …).

Anterior regions of the ventral visual stream encode substantial information about object categories. Are top-down category-level forces critical for arriving at this representation, or can this representation be formed purely through domain-general learning of natural image structure? Here we present a fully self-supervised model which learns to represent individual images, rather than categories, such that views of the same image are embedded nearby in a low-dimensional feature space, distinctly from other recently encountered views. We find that category information implicitly emerges in the local similarity structure of this feature space.
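
The objective described, embedding views of the same image nearby and away from other recently encountered views, is a form of instance discrimination. A minimal sketch of an InfoNCE-style loss of this general kind, assuming nothing about the paper's actual architecture or hyperparameters:

```python
import numpy as np

def infonce_loss(anchor, positive, negatives, temperature=0.1):
    """Instance-discrimination loss: pull an embedding toward another
    view of the same image (positive) and away from embeddings of
    other recently seen images (negatives)."""
    def unit(v):
        return v / np.linalg.norm(v, axis=-1, keepdims=True)
    a, p, n = unit(anchor), unit(positive), unit(negatives)
    sim_pos = a @ p / temperature        # similarity to the matching view
    sim_neg = n @ a / temperature        # one similarity per negative
    logits = np.concatenate([[sim_pos], sim_neg])
    # Cross-entropy with the positive as the correct "class"
    return -sim_pos + np.log(np.sum(np.exp(logits)))

rng = np.random.default_rng(1)
z = rng.normal(size=8)
loss_match = infonce_loss(z, z + 0.01 * rng.normal(size=8),
                          rng.normal(size=(16, 8)))
loss_mismatch = infonce_loss(z, -z, rng.normal(size=(16, 8)))
print(loss_match < loss_mismatch)  # prints True
```

Minimizing such a loss never references category labels, which is why any category structure that emerges in the feature space does so implicitly.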

Although most visual aesthetic preferences are likely driven by a mix of personal, historical, and cultural factors, there are exceptions: some may be driven by adaptive mechanisms of visual processing, and so may be relatively consistent across people, contexts, and time. An especially powerful example is the "inward bias": when a framed image contains a figure (e.g., …).

Attentional tracking and working memory tasks are often performed better when targets are divided evenly between the left and right visual hemifields, rather than contained within a single hemifield (Alvarez & Cavanagh, 2005; Delvenne, 2005). However, this bilateral field advantage does not provide conclusive evidence of hemifield-specific control of attention and working memory, because it can be explained solely by hemifield-limited spatial interference at early stages of visual processing. If control of attention and working memory is specific to each hemifield, maintaining target information should become more difficult as targets move between the two hemifields.

The alarm has been raised on so-called driverless dilemmas, in which autonomous vehicles will need to make high-stakes ethical decisions on the road. We argue that these arguments are too contrived to be of practical use, are an inappropriate method for making decisions on issues of safety, and should not be used to inform engineering or policy.

Feature-based attention is known to enhance visual processing globally across the visual field, even at task-irrelevant locations. Here, we asked whether attention to object categories, in particular faces, shows similar location-independent tuning. Using EEG, we measured the face-selective N170 component to examine neural responses to faces at task-irrelevant locations while participants attended to faces at another, task-relevant location.

How people process images is known to affect memory for those images, but these effects have typically been studied using explicit task instructions to vary encoding. Here, we investigate the effects of intrinsic variation in processing on subsequent memory, testing whether recognizing an ambiguous stimulus as meaningful (as a face vs. as shape blobs) predicts subsequent visual memory, even when matching the perceptual features and the encoding strategy between subsequently remembered and subsequently forgotten items. We show in adult humans of either sex that single-trial EEG activity can predict whether participants will subsequently remember an ambiguous Mooney face image (e.g., …).

To what extent are people's moral judgments susceptible to subtle factors of which they are unaware? Here we show that we can change people's moral judgments outside of their awareness by subtly biasing perceived causality. Specifically, we used subtle visual manipulations to create visual illusions of causality in morally relevant scenarios, and these manipulations systematically changed people's moral judgments. After demonstrating the basic effect using simple displays involving an ambiguous car collision that injures a person (E1), we show that the effect is sensitive, on the millisecond timescale, to task-irrelevant contextual factors known to affect perceived causality, including their duration (E2a) and asynchrony (E2b).

While substantial work has focused on how the visual system achieves basic-level recognition, less work has asked about how it supports large-scale distinctions between objects, such as animacy and real-world size. Previous work has shown that these dimensions are reflected in our neural object representations (Konkle & Caramazza, 2013), and that objects of different real-world sizes have different mid-level perceptual features (Long, Konkle, Cohen, & Alvarez, 2016). Here, we test the hypothesis that animates and manmade objects also differ in mid-level perceptual features.

Cognitive training has become a billion-dollar industry with the promise that exercising a cognitive faculty (e.g., attention) on simple "brain games" will lead to improvements on any task relying on the same faculty.

Traditionally, recognizing the objects within a scene has been treated as a prerequisite to recognizing the scene itself. However, research now suggests that the ability to rapidly recognize visual scenes could be supported by global properties of the scene itself rather than the objects within the scene. Here, we argue for a particular instantiation of this view: That scenes are recognized by treating them as a global texture and processing the pattern of orientations and spatial frequencies across different areas of the scene without recognizing any objects.
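
The "global texture" account appeals to the pattern of orientations and spatial frequencies across the image. As a toy stand-in for such statistics (a deliberately crude sketch, not the authors' model), one can summarize an image by a magnitude-weighted histogram of gradient orientations:

```python
import numpy as np

def orientation_energy(image, n_bins=8):
    """Global texture summary: histogram of gradient orientations,
    weighted by gradient magnitude, pooled over the whole image."""
    gy, gx = np.gradient(image.astype(float))     # vertical, horizontal
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)       # orientation in [0, pi)
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    return hist / (hist.sum() + 1e-9)             # normalized distribution

# Vertical stripes concentrate energy in the horizontal-gradient bin,
# without any object in the image ever being segmented or recognized.
stripes = np.tile([0.0, 0.0, 1.0, 1.0] * 4, (16, 1))   # 16x16 stripe image
h = orientation_energy(stripes)
print(h.argmax())  # 0
```

A real instantiation would also pool statistics over spatial-frequency bands and image regions, but the key property is the same: the summary is computed without segmenting or recognizing any object.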

Visual search is a ubiquitous visual behavior, and efficient search is essential for survival. Different cognitive models have explained the speed and accuracy of search based either on the dynamics of attention or on similarity of item representations. Here, we examined the extent to which performance on a visual search task can be predicted from the stable representational architecture of the visual system, independent of attentional dynamics.

Confidence in our memories is influenced by many factors, including beliefs about the perceptibility or memorability of certain kinds of objects and events, as well as knowledge about our skill sets, habits, and experiences. Notoriously, our knowledge and beliefs about memory can lead us astray, causing us to be overly confident in eyewitness testimony or to overestimate the frequency of recent experiences. Here, using visual working memory as a case study, we stripped away all these potentially misleading cues, requiring observers to make confidence judgments by directly assessing the quality of their memory representations.

Visual working memory is the cognitive system that holds visual information active to make it resistant to interference from new perceptual input. Information about simple stimuli (colors and orientations) is encoded into working memory rapidly: in under 100 ms, working memory "fills up," revealing a stark capacity limit. However, for real-world objects, the same behavioral limits do not hold: with increasing encoding time, people store more real-world objects and do so with more detail.

Can attention alter the impression of a face? Previous studies showed that attention modulates the appearance of lower-level visual features. For instance, attention can make a simple stimulus appear to have higher contrast than it actually does. We tested whether attention can also alter the perception of a higher-order property-namely, facial attractiveness.

Is working memory capacity determined by an immutable limit (for example, 4 memory storage slots)? The fact that performance is typically unaffected by task instructions has been taken as support for such structural models of memory. Here, we modified a standard working memory task to incentivize participants to remember more items. Participants were asked to remember a set of colors over a short retention interval.

Understanding how perceptual and conceptual representations are connected is a fundamental goal of cognitive science. Here, we focus on a broad conceptual distinction that constrains how we interact with objects: real-world size. Although there appear to be clear perceptual correlates for basic-level categories (apples look like other apples, oranges look like other oranges), the perceptual correlates of broader categorical distinctions are largely unexplored, i.e., …

Influential slot and resource models of visual working memory make the assumption that items are stored in memory as independent units, and that there are no interactions between them. Consequently, these models predict that the number of items to be remembered (the set size) is the primary determinant of working memory performance, and therefore these models quantify memory capacity in terms of the number and quality of individual items that can be stored. Here we demonstrate that there is substantial variance in display difficulty within a single set size, suggesting that limits based on the number of individual items alone cannot explain working memory storage.

Human cognition has a limited capacity that is often attributed to the brain having finite cognitive resources, but the nature of these resources is usually not specified. Here, we show evidence that perceptual interference between items can be predicted by known receptive field properties of the visual cortex, suggesting that competition within representational maps is an important source of the capacity limitations of visual processing. Across the visual hierarchy, receptive fields get larger and represent more complex, high-level features.
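
One way to make the receptive-field argument concrete, as a toy model with made-up parameters rather than measured values: receptive-field size grows roughly linearly with eccentricity, with a steeper slope higher in the hierarchy, so two items at a fixed separation can fall within a single high-level receptive field while remaining resolved in early areas:

```python
def rf_size(eccentricity_deg, level):
    """Toy receptive-field (RF) diameter in degrees of visual angle:
    grows linearly with eccentricity, with a steeper slope higher in
    the hierarchy.  Slopes and intercept are illustrative, not fitted."""
    slopes = {"V1": 0.1, "V4": 0.3, "IT": 0.8}
    return 0.5 + slopes[level] * eccentricity_deg

def compete(pos_a_deg, pos_b_deg, level):
    """Crude interference proxy: two items compete when their
    separation is smaller than the RF diameter at their mean
    eccentricity."""
    ecc = (abs(pos_a_deg) + abs(pos_b_deg)) / 2
    return abs(pos_a_deg - pos_b_deg) < rf_size(ecc, level)

# Items 3 deg apart at ~6 deg eccentricity: inside one large IT-like RF,
# but resolved by small V1-like RFs.
print(compete(4.5, 7.5, "V1"), compete(4.5, 7.5, "IT"))  # False True
```

On this view, capacity limits for complex, high-level features bite sooner because the larger receptive fields that encode them force more items into the same neural channel.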

Much is known about visual search for single targets, but relatively little about how participants "forage" for multiple targets. One important question is how long participants will search before moving to a new display. Evidence suggests that participants should leave when intake drops below the average rate ("optimal foraging," Charnov, 1976).
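
The cited "optimal foraging" rule (Charnov, 1976, the marginal value theorem) says to leave a patch when its instantaneous intake rate drops to the environment's overall average rate. A numeric sketch under an assumed diminishing-returns gain function, g(t) = G(1 - exp(-rt)), with invented parameter values:

```python
import math

def optimal_leave_time(G=10.0, r=1.0, travel=2.0):
    """Marginal value theorem: leave the patch at the time t* where the
    marginal gain g'(t) equals the overall average rate g(t)/(t + travel).
    The gain function g(t) = G*(1 - exp(-r*t)) is an illustrative choice."""
    g = lambda t: G * (1 - math.exp(-r * t))
    g_prime = lambda t: G * r * math.exp(-r * t)
    # Bisection on f(t) = g'(t) - g(t)/(t + travel): positive at t ~ 0
    # (gaining faster than average), negative for large t.
    lo, hi = 1e-6, 100.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if g_prime(mid) > g(mid) / (mid + travel):
            lo = mid          # still beating the average rate: stay longer
        else:
            hi = mid          # below the average rate: should have left
    return (lo + hi) / 2

t_star = optimal_leave_time()   # ~1.5 for these assumed parameters
```

Longer travel times between displays push t* later, which is the qualitative prediction foraging experiments test against human leaving times.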

Visual perception and awareness have strict limitations. We suggest that one source of these limitations is the representational architecture of the visual system. Under this view, the extent to which items activate the same neural channels constrains the amount of information that can be processed by the visual system and ultimately reach awareness.

Ensemble perception, including the ability to "see the average" from a group of items, operates in numerous feature domains (size, orientation, speed, facial expression, etc.). Although the ubiquity of ensemble representations is well established, the large-scale cognitive architecture of this process remains poorly defined.

A central question for models of visual working memory is whether the number of objects people can remember depends on object complexity. Some influential "slot" models of working memory capacity suggest that people always represent 3-4 objects and that only the fidelity with which these objects are represented is affected by object complexity. The primary evidence supporting this claim is the finding that people can detect large changes to complex objects (consistent with remembering at least 4 individual objects), but that small changes cannot be detected (consistent with low-resolution representations).
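
Capacity in change-detection tasks of this kind is commonly estimated with Cowan's K = N × (hit rate − false-alarm rate), where N is the set size; the formula is standard, but the rates below are invented for illustration:

```python
def cowans_k(set_size, hit_rate, false_alarm_rate):
    """Cowan's K: estimated number of items held in working memory,
    from change-detection hit and false-alarm rates at a given set size."""
    return set_size * (hit_rate - false_alarm_rate)

# A fixed-slot model (K = 4) predicts the estimate saturates once set
# size exceeds capacity: hypothetical rates illustrating that pattern.
print(round(cowans_k(4, 0.95, 0.05), 2))   # 3.6  (near ceiling)
print(round(cowans_k(8, 0.55, 0.05), 2))   # 4.0  (half the changes caught)
```

The slot-model argument in the abstract rests on this kind of estimate staying near 4 for large changes while small changes go undetected, consistent with many low-resolution representations.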
