Publications by authors named "Julie D Golomb"

Article Synopsis
  • The study investigates how visual objects influence each other, specifically focusing on the representation of individual objects when surrounded by a group of similar items.
  • It found that an individual object appears smaller or larger depending on the size of the surrounding objects, with reports biased away from the group's mean size, demonstrating a "repulsive ensemble bias" influenced by both perceptual encoding and memory retention.
  • Results indicated that this bias was strongest shortly after presentation (0-50 ms) and diminished over time, showing that both immediate perception and memory maintenance shape how we perceive an individual object's size in a group context.

Despite advances in artificial intelligence, object recognition models still lag behind the human brain in how they process visual information. Recent studies have highlighted the potential of using neural data to make models more brain-like; however, these studies often rely on invasive neural recordings from non-human subjects, leaving a critical gap in understanding human visual perception. Addressing this gap, we present, for the first time, 'Re(presentational)Al(ignment)net', a vision model aligned with human brain activity based on non-invasive EEG, demonstrating significantly higher similarity to human brain representations.
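
The core idea of this kind of alignment can be sketched compactly: train the vision model with its usual task loss plus a term that pulls its internal representational geometry toward the geometry measured from human EEG. Below is a minimal sketch of such a representational-alignment loss; it is not the published ReAlnet implementation, and all function names, shapes, and the weighting scheme are illustrative assumptions.

```python
# Minimal sketch of representational alignment: nudge a vision model's
# features toward human EEG-derived representational geometry.
# Hypothetical shapes and names; not the published ReAlnet code.
import torch
import torch.nn.functional as F

def rdm(features):
    """Representational dissimilarity matrix: 1 - pairwise cosine similarity."""
    f = F.normalize(features.flatten(1), dim=1)   # (n_items, d)
    return 1.0 - f @ f.T                          # (n_items, n_items)

def alignment_loss(model_feats, eeg_feats):
    """Encourage the model's RDM to match the EEG RDM (upper triangle only)."""
    m, e = rdm(model_feats), rdm(eeg_feats)
    iu = torch.triu_indices(m.shape[0], m.shape[0], offset=1)
    mv, ev = m[iu[0], iu[1]], e[iu[0], iu[1]]
    # 1 - Pearson correlation between the two dissimilarity vectors
    mv = (mv - mv.mean()) / mv.std()
    ev = (ev - ev.mean()) / ev.std()
    return 1.0 - (mv * ev).mean()

# Training step (sketch): total loss = task loss + weighted alignment term
# loss = F.cross_entropy(logits, labels) + lam * alignment_loss(feats, eeg_feats)
```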

In adults, spatial location plays a special role in visual object processing. People are more likely to judge two sequentially presented objects as identical when they appear in the same location than when they appear in different locations (a phenomenon referred to as the Spatial Congruency Bias [SCB]). However, no comparable Identity Congruency Bias (ICB) is found, suggesting an asymmetric location-identity relationship in object binding.

Our visual systems rapidly perceive and integrate information about object identities and locations. There is long-standing debate about whether and how we achieve world-centered (spatiotopic) object representations across eye movements, with many studies reporting persistent retinotopic (eye-centered) effects even for higher-level object-location binding. But these studies have generally been conducted in fairly static experimental contexts.

Remarkably, the human brain accurately perceives and processes the real-world size of objects despite vast differences in viewing distance and perspective. While previous studies have examined this phenomenon, distinguishing this ability from related visual properties, such as depth, has been challenging. Using the THINGS EEG2 dataset, with high-time-resolution human brain recordings and more ecologically valid naturalistic stimuli, our study takes an innovative approach to disentangle neural representations of object real-world size from retinal size and perceived real-world depth in a way that was not previously possible.
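
A common way to achieve this kind of disentangling is representational similarity analysis (RSA) with partial correlations: at each EEG time point, correlate the neural dissimilarity structure with a real-world-size model while statistically controlling for retinal-size and depth models. The sketch below illustrates that general logic only; it is not the study's actual pipeline, and the input shapes and names are assumptions.

```python
# Sketch: time-resolved partial-correlation RSA to isolate real-world size.
# Assumed inputs (hypothetical): neural_rdms of shape (n_times, n_pairs), and
# model RDMs flattened to vectors of pairwise dissimilarities (n_pairs,).
import numpy as np
from scipy.stats import rankdata

def partial_spearman(x, y, covariates):
    """Partial Spearman correlation: rank-transform, regress out the
    covariates by least squares, then correlate the residuals."""
    def residualize(v):
        Z = np.column_stack([np.ones(len(v))] + [rankdata(c) for c in covariates])
        r = rankdata(v)
        beta, *_ = np.linalg.lstsq(Z, r, rcond=None)
        return r - Z @ beta
    return np.corrcoef(residualize(x), residualize(y))[0, 1]

def size_timecourse(neural_rdms, size_rdm, retinal_rdm, depth_rdm):
    """Real-world-size information over time, controlling for retinal size and depth."""
    return np.array([
        partial_spearman(neural, size_rdm, [retinal_rdm, depth_rdm])
        for neural in neural_rdms
    ])
```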

We are often bombarded by salient stimuli that capture our attention and distract us from our current goals. Decades of research have shown that salient distractors robustly impair search performance and, more recently, that they can alter feature perception. These feature errors can be quite extreme and are thus undesirable.

Attention allows us to select relevant and ignore irrelevant information from our complex environments. What happens when attention shifts from one item to another? To answer this question, it is critical to have tools that accurately recover neural representations of both feature and location information with high temporal resolution. In the present study, we used human electroencephalography (EEG) and machine learning to explore how neural representations of object features and locations update across dynamic shifts of attention.
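
A standard recipe for this kind of analysis is time-resolved decoding: train and cross-validate a separate classifier at every time point of the epoched EEG, producing a timecourse of, say, location information. Below is a minimal scikit-learn sketch under assumed array shapes; it is not the study's exact pipeline.

```python
# Sketch: time-resolved decoding of attended location from epoched EEG.
# Assumed input: X of shape (n_trials, n_channels, n_times), y = location labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def decode_timecourse(X, y, cv=5):
    """Cross-validated decoding accuracy at each time point."""
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    return np.array([
        cross_val_score(clf, X[:, :, t], y, cv=cv).mean()
        for t in range(X.shape[2])
    ])

# accuracy = decode_timecourse(X, y); above-chance time points index when
# location information is present, e.g., before vs. after an attention shift.
```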

Previous studies have posited that spatial location plays a special role in object recognition. Notably, the "spatial congruency bias" (SCB) is a tendency to report two objects as having the same identity when they are presented at the same location rather than at different locations. Here we found that even when statistical regularities were manipulated in the opposite direction (objects in the same location were three times more likely to have different identities), subjects still exhibited a robust SCB, remaining more likely to report same-location objects as the same identity.

Learning to ignore distractors is critical for navigating the visual world. Research has suggested that a location frequently containing a salient distractor can be suppressed. How does such suppression work? Previous studies provided evidence for proactive suppression, but methodological limitations preclude firm conclusions.

Spatial attention affects not only where we look, but also what we perceive and remember in attended and unattended locations. Previous work has shown that manipulating attention via top-down cues or bottom-up capture leads to characteristic patterns of feature errors. Here we investigated whether experience-driven attentional guidance (and probabilistic attentional guidance more generally) leads to similar feature errors.

Most models in cognitive and computational neuroscience that are trained on one subject do not generalize to other subjects due to individual differences. An ideal individual-to-individual neural converter would generate realistic neural signals for one subject from those of another, overcoming the problem of individual differences for cognitive and computational models. In this study, we propose a novel individual-to-individual EEG converter, called EEG2EEG, inspired by generative models in computer vision.
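
In spirit, such a converter is a network trained on paired recordings, mapping subject A's response to a stimulus onto subject B's response to the same stimulus. The sketch below is a deliberately simplified stand-in (a plain fully connected encoder-decoder with an MSE objective, hypothetical shapes); the published EEG2EEG model is more elaborate.

```python
# Sketch: map subject A's EEG epochs to subject B's EEG for shared stimuli.
# A hypothetical simplification of the individual-to-individual converter idea.
import torch
import torch.nn as nn

class EEGConverter(nn.Module):
    def __init__(self, n_channels, n_times, hidden=512):
        super().__init__()
        d = n_channels * n_times
        self.shape = (n_channels, n_times)
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(d, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, d),
        )

    def forward(self, x):                      # x: (batch, channels, times)
        return self.net(x).view(-1, *self.shape)

# Training step (sketch), using paired epochs for the same stimuli:
# model = EEGConverter(n_channels=64, n_times=100)
# opt = torch.optim.Adam(model.parameters(), lr=1e-3)
# loss = nn.functional.mse_loss(model(eeg_subject_a), eeg_subject_b)
```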

This opinion piece is part of a collection on the topic "What is attention?" Despite the word's place in the common vernacular, a satisfying definition of "attention" remains elusive. Part of the challenge is that there exist many different types of attention, which may or may not share common mechanisms. Here we review this literature and offer an intuitive definition that draws on aspects of prior theories and models of attention but is broad enough to encompass the various types of attention and the modalities it acts upon: attention as a multi-level system of weights and balances.

Our behavioral goals shape how we process information via attentional filters that prioritize goal-relevant information, dictating both where we attend and what we attend to. When something unexpected or salient appears in the environment, it captures our spatial attention. Extensive research has focused on the spatiotemporal aspects of attentional capture, but what happens to concurrent nonspatial filters during visual distraction? Here, we demonstrate a novel, broader consequence of distraction: widespread disruption to filters that regulate category-specific object processing.

Attention is dynamic, constantly shifting between locations, sometimes imperfectly. How do goal-driven expectations impact dynamic spatial attention? A previous study (Dowd & Golomb, Psychological Science, 30(3), 343-361, 2019) explored object-feature binding when covert attention needed to be either maintained at a single location or shifted from one location to another. In addition to revealing feature-binding errors during dynamic shifts of attention, that study unexpectedly found that participants sometimes made correlated errors on trials where they did not have to shift attention, mistakenly reporting the features and location of an object at a different location.

Visual Remapping. Annu Rev Vis Sci, September 2021.

Our visual system is fundamentally retinotopic. When viewing a stable scene, each eye movement shifts object features and locations on the retina. Thus, sensory representations must be updated, or remapped, across saccades to align presaccadic and postsaccadic inputs.

Given the complexity of our visual environments, a number of mechanisms help us prioritize goal-consistent visual information. When searching for a friend in a crowd, for instance, visual working memory (VWM) maintains a representation of your target (i.e., your friend's appearance).

How are humans capable of maintaining detailed representations of visual items in memory? When required to make fine discriminations, we sometimes implicitly differentiate memory representations away from each other to reduce interitem confusion. However, this separation of representations can inadvertently lead memories to be recalled as biased away from other memory items, a phenomenon termed repulsion bias. Using a nonretinotopically specific working memory paradigm, we found stronger repulsion bias with longer working memory delays, but only when items were actively maintained.

We can focus visuospatial attention by covertly attending to relevant locations, moving our eyes, or both simultaneously. How does shifting versus holding covert attention during fixation compare with maintaining covert attention across saccades? We acquired human fMRI data during a combined saccade and covert attention task. On Eyes-fixed trials, participants either held attention at the same initial location ("hold attention") or shifted attention to another location midway through the trial ("shift attention").

Humans use regularities in the environment to facilitate learning, often without awareness or intent. How might such regularities distort long-term memory? Here, participants studied and reported the colors of objects in a long-term memory paradigm, uninformed that certain colors were sampled more frequently overall. When participants misreported an object's color, these errors were often centered around the average studied color (i.e., the mean of the sampled color distribution).

The "spatial congruency bias" is a behavioral phenomenon where 2 objects presented sequentially are more likely to be judged as being the same object if they are presented in the same location (Golomb, Kupitz, & Thiemann, 2014), suggesting that irrelevant spatial location information may be bound to object representations. Here, we examine whether the spatial congruency bias extends to higher-level object judgments of facial identity and expression. On each trial, 2 real-world faces were sequentially presented in variable screen locations, and subjects were asked to make same-different judgments on the facial expression (Experiments 1-2) or facial identity (Experiment 3) of the stimuli.

We live in a dynamic, distracting world. When distracting information captures attention, what are the consequences for perception? Previous literature has focused on effects such as reaction time (RT) slowing, accuracy decrements, and oculomotor capture by distractors. In the current study, we asked whether attentional capture by distractors can also more fundamentally alter target feature representations, and if so, whether participants are aware of such errors.

Spatial attention is thought to be the "glue" that binds features together (e.g., Treisman & Gelade, 1980, Cognitive Psychology, 12(1), 97-136), but attention is dynamic, constantly moving across multiple goals and locations.

How do we maintain visual stability across eye movements? Much work has focused on how visual information is rapidly updated to maintain spatiotopic representations. However, predictive spatial remapping is only part of the story. Here I review key findings, recent debates, and open questions regarding remapping and its implications for visual attention and perception.
