Publications by authors named "Nicole Rust"

An impactful understanding of the brain will require entirely new approaches and unprecedented collaborative efforts. As next steps, brain researchers will need to develop theoretical frameworks that allow them to tease apart dependencies and causality in complex dynamical systems, as well as the ability to maintain awe without getting lost in the effort. The outstanding question is: How do we go about it?

Neuroscience has a long history of investigating the neural correlates of brain functions. One example is fear, which has been studied intensely in a variety of species. In parallel, unease about definitions of brain functions has existed for over 100 years.

In neuroscience, the term 'causality' is used to refer to different concepts, leading to confusion. Here we illustrate some of those variations, and we suggest names for them. We then introduce four ways to enhance clarity around causality in neuroscience.

Causal perturbations provide the strongest tests of the relationships between brain mechanism and brain function. In cognitive neuroscience, persuasive causal perturbations are difficult to achieve. In a recent paper, Ni et al.

Although we are continuously bombarded with visual input, only a fraction of incoming visual events is perceived, remembered or acted on. The neural underpinnings of various forms of visual priority coding, including perceptual expertise, goal-directed attention, visual salience, image memorability and preferential looking, have been studied. Here, we synthesize information from these different examples to review recent developments in our understanding of visual priority coding and its neural correlates, with a focus on the role of behaviour in evaluating candidate correlates.

In addition to the role that our visual system plays in determining what we are seeing right now, visual computations contribute in important ways to predicting what we will see next. While the role of memory in creating future predictions is often overlooked, efficient predictive computation requires the use of information about the past to estimate future events. In this article, we introduce a framework for understanding the relationship between memory and visual prediction and review the two classes of mechanisms that the visual system relies on to create future predictions.

Memories of the images that we have seen are thought to be reflected in the reduction of neural responses in high-level visual areas such as inferotemporal (IT) cortex, a phenomenon known as repetition suppression (RS). We challenged this hypothesis with a task that required rhesus monkeys to report whether images were novel or repeated while ignoring variations in contrast, a stimulus attribute that is also known to modulate the overall IT response. The monkeys' behavior was largely contrast invariant, contrary to the predictions of an RS-inspired decoder, which could not distinguish the responses evoked by repeated images from those evoked by lower-contrast images.

Appending a Citation Diversity Statement to a paper is a simple and effective way to increase awareness about citation bias and help mitigate it. Here, we describe why reducing citation bias is important and how to include a Citation Diversity Statement in your next publication.

Why are some images easier to remember than others? Here, we review recent developments in our understanding of 'image memorability', including its behavioral characteristics, its neural correlates, and the optimization principles from which it originates. We highlight work that has leveraged large behavioral data sets to compute memorability scores for individual images. These studies demonstrate that the mapping of image content to image memorability is not only predictable, but also non-intuitive and multifaceted.

Searching for a specific visual object requires our brain to compare the items in view with a remembered representation of the sought target to determine whether a target match is present. This comparison is thought to be implemented, in part, via the combination of top-down modulations reflecting target identity with feed-forward visual representations. However, it remains unclear whether top-down signals are integrated at a single locus within the ventral visual pathway (e.

A strong preference for novelty emerges in infancy and is prevalent across the animal kingdom. When incorporated into reinforcement-based machine learning algorithms, visual novelty can act as an intrinsic reward signal that vastly increases the efficiency of exploration and expedites learning, particularly in situations where external rewards are difficult to obtain. Here we review parallels between recent developments in novelty-driven machine learning algorithms and our understanding of how visual novelty is computed and signaled in the primate brain.
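To make the idea of novelty as an intrinsic reward concrete, here is a minimal count-based sketch in Python (our own illustration, not any specific algorithm from the reviewed work; the function names are hypothetical): states that have rarely been visited receive a large bonus that is simply added to the external reward.

    import numpy as np
    from collections import defaultdict

    visit_counts = defaultdict(int)  # how often each state has been encountered

    def novelty_bonus(state, scale=1.0):
        # Count-based stand-in for a visual novelty signal: rarely seen
        # states yield a large bonus, familiar states yield almost none.
        visit_counts[state] += 1
        return scale / np.sqrt(visit_counts[state])

    def augmented_reward(state, external_reward, scale=1.0):
        # The intrinsic (novelty) term keeps exploration going even when
        # external rewards are sparse or absent.
        return external_reward + novelty_bonus(state, scale)

In a standard reinforcement-learning update, augmented_reward would simply replace the external reward, which is what makes exploration more efficient when external rewards are difficult to obtain.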

Most accounts of image and object encoding in inferotemporal cortex (IT) focus on the distinct patterns of spikes that different images evoke across the IT population. By analyzing data collected from IT as monkeys performed a visual memory task, we demonstrate that variation in a complementary coding scheme, the magnitude of the population response, can largely account for how well images will be remembered. To investigate the origin of IT image memorability modulation, we probed convolutional neural network models trained to categorize objects.
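As a sketch of the 'magnitude of the population response' coding scheme described here (simulated data and our own variable names, not the authors' analyses), memorability can be read out from the length of the population response vector evoked by each image, independent of which particular neurons fire:

    import numpy as np

    rng = np.random.default_rng(0)
    # responses: n_images x n_neurons matrix of firing rates (simulated;
    # in the study these were recorded IT responses)
    responses = rng.poisson(lam=5.0, size=(100, 200)).astype(float)

    # Pattern-based view: the direction of the population vector,
    # i.e. which neurons fire relative to one another.
    patterns = responses / np.linalg.norm(responses, axis=1, keepdims=True)

    # Magnitude-based view: how strongly the population fires overall.
    magnitudes = np.linalg.norm(responses, axis=1)

    # Under the magnitude account, images evoking larger population
    # responses are predicted to be the better remembered ones.
    predicted_order = np.argsort(-magnitudes)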

Task performance is determined not only by the amount of task-relevant signal present in our brains but also by the presence of noise, which can arise from multiple sources. Internal noise, or "trial variability," manifests as trial-by-trial variations in neural responses under seemingly identical conditions. External factors can also translate into noise, particularly when a task requires extraction of a particular type of information from our environment amid changes in other task-irrelevant "nuisance" parameters.
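A toy simulation (purely illustrative, not from the paper) makes the two noise sources concrete: internal trial variability jitters repeated presentations of the same stimulus, while an external nuisance parameter shifts responses in a way a decoder must learn to ignore.

    import numpy as np

    rng = np.random.default_rng(1)
    n_trials = 500
    signal = rng.choice([0.0, 1.0], size=n_trials)        # task-relevant variable
    nuisance = rng.normal(0.0, 1.0, size=n_trials)        # task-irrelevant parameter
    trial_noise = rng.normal(0.0, 0.5, size=n_trials)     # internal trial variability

    # A single simulated response mixing signal, nuisance, and internal noise.
    response = 2.0 * signal + 1.0 * nuisance + trial_noise

    # A threshold decoder that cannot discount the nuisance parameter performs
    # worse than the same decoder applied after the nuisance is removed.
    acc_raw = np.mean((response > 1.0) == signal.astype(bool))
    acc_nuisance_free = np.mean(((response - nuisance) > 1.0) == signal.astype(bool))
    print(acc_raw, acc_nuisance_free)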

Finding a sought visual target object requires combining visual information about a scene with a remembered representation of the target to create a "target match" signal that indicates when a target is in view. Target match signals have been reported to exist within high-level visual brain areas including inferotemporal cortex (IT), where they are mixed with representations of image and object identity. However, these signals are not well understood, particularly in the context of the real-world challenge that the objects we search for typically appear at different positions, sizes, and within different background contexts.

Our visual memory percepts of whether we have encountered specific objects or scenes before are hypothesized to manifest as decrements in neural responses in inferotemporal cortex (IT) with stimulus repetition. To evaluate this proposal, we recorded IT neural responses as two monkeys performed a single-exposure visual memory task designed to measure the rates of forgetting with time. We found that a weighted linear read-out of IT was a better predictor of the monkeys' forgetting rates and reaction time patterns than a strict instantiation of the repetition suppression hypothesis, expressed as a total spike count scheme.
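The contrast drawn here between a weighted linear read-out and a total spike count scheme can be sketched with two decoders applied to the same simulated population data (scikit-learn and all variable names are our assumptions, not the authors' analysis pipeline; no train/test split, purely illustrative):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)
    n_trials, n_neurons = 400, 50
    is_repeat = rng.integers(0, 2, size=n_trials)       # 0 = novel, 1 = repeated

    # Simulated IT responses: repetition suppresses some neurons more than others.
    suppression = rng.uniform(0.0, 2.0, size=n_neurons)
    spikes = rng.poisson(10.0 - np.outer(is_repeat, suppression))

    # Strict repetition-suppression read-out: one number, the total spike count.
    total_count = spikes.sum(axis=1, keepdims=True)
    acc_total = LogisticRegression().fit(total_count, is_repeat).score(total_count, is_repeat)

    # Weighted linear read-out: each neuron receives its own learned weight.
    acc_weighted = LogisticRegression(max_iter=1000).fit(spikes, is_repeat).score(spikes, is_repeat)
    print(acc_total, acc_weighted)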

As in primates, the rat brain areas thought to be involved in visual object recognition are arranged in a hierarchy.

Linear-nonlinear (LN) models and their extensions have proven successful in describing transformations from stimuli to spiking responses of neurons in early stages of sensory hierarchies. Neural responses at later stages are highly nonlinear and have generally been better characterized in terms of their decoding performance on prespecified tasks. Here we develop a biologically plausible decoding model for classification tasks, which we refer to as neural quadratic discriminant analysis (nQDA).
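For readers unfamiliar with the LN framework, here is a minimal sketch (ours, not the paper's nQDA implementation): a linear filter applied to the stimulus followed by a static nonlinearity, alongside a quadratic extension in which the drive also depends on second-order combinations of the stimulus, the ingredient that QDA-style decoders exploit.

    import numpy as np

    rng = np.random.default_rng(3)

    def ln_response(stimulus, filt):
        # Linear-nonlinear model: project the stimulus onto a filter, then
        # pass the result through a static (here rectifying) nonlinearity.
        drive = stimulus @ filt
        return np.maximum(drive, 0.0)

    def quadratic_response(stimulus, filt, quad):
        # Quadratic extension: the drive also includes a term that is
        # quadratic in the stimulus.
        drive = stimulus @ filt + np.einsum('ti,ij,tj->t', stimulus, quad, stimulus)
        return np.maximum(drive, 0.0)

    stim = rng.normal(size=(100, 20))    # 100 stimuli, 20 pixels each
    w = rng.normal(size=20)              # linear filter
    Q = 0.1 * rng.normal(size=(20, 20))  # quadratic kernel
    r_ln = ln_response(stim, w)
    r_quad = quadratic_response(stim, w, Q)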

Finding sought objects requires the brain to combine visual and target signals to determine when a target is in view. To investigate how the brain implements these computations, we recorded neural responses in inferotemporal cortex (IT) and perirhinal cortex (PRH) as macaque monkeys performed a delayed-match-to-sample target search task. Our data suggest that visual and target signals were combined within or before IT in the ventral visual pathway and then passed on to PRH, where they were reformatted into a more explicit target match signal over ∼10-15 ms.

The responses of high-level neurons tend to be mixtures of many different types of signals. While this diversity is thought to allow for flexible neural processing, it presents a challenge for understanding how neural responses relate to task performance and to neural computation. To address these challenges, we have developed a new method to parse the responses of individual neurons into weighted sums of intuitive signal components.
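The decomposition described here can be illustrated with an ordinary least-squares sketch (a simplification using hypothetical variable names, not the method as implemented in the paper): a single neuron's responses across conditions are modeled as a weighted sum of intuitive signal components, such as visual identity, target identity, and their interaction.

    import numpy as np

    rng = np.random.default_rng(4)
    n_conditions = 16
    visual = rng.normal(size=n_conditions)   # which image is shown
    target = rng.normal(size=n_conditions)   # which target is sought
    match = visual * target                  # interaction ("target match") component

    # Simulated single-neuron responses mixing the three components plus noise.
    responses = 1.5 * visual + 0.5 * target + 2.0 * match + rng.normal(0, 0.2, n_conditions)

    # Design matrix of intuitive signal components; least squares recovers the weights.
    X = np.column_stack([visual, target, match, np.ones(n_conditions)])
    weights, *_ = np.linalg.lstsq(X, responses, rcond=None)
    print(weights)   # approximately [1.5, 0.5, 2.0, 0.0]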

Finding sought visual targets requires our brains to flexibly combine working memory information about what we are looking for with visual information about what we are looking at. To investigate the neural computations involved in finding visual targets, we recorded neural responses in inferotemporal cortex (IT) and perirhinal cortex (PRH) as macaque monkeys performed a task that required them to find targets in sequences of distractors. We found similar amounts of total task-specific information in both areas; however, information about whether a target was in view was more accessible using a linear read-out or, equivalently, was more untangled in PRH.
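One way to make the distinction between total information and linearly accessible ('untangled') information concrete is to compare a linear and a nonlinear decoder on the same population responses (an illustrative sketch with simulated data and scikit-learn, not the analyses used in the paper):

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(5)
    n_trials, n_neurons = 300, 30

    # Construct responses in which the class is carried by a curved (radial)
    # boundary: the information is present but entangled.
    latent = rng.normal(size=(n_trials, 2))
    labels = (np.linalg.norm(latent, axis=1) > 1.2).astype(int)
    responses = latent @ rng.normal(size=(2, n_neurons))

    acc_linear = SVC(kernel='linear').fit(responses, labels).score(responses, labels)
    acc_nonlinear = SVC(kernel='rbf').fit(responses, labels).score(responses, labels)
    # A large gap (nonlinear >> linear) indicates information that is present
    # but not untangled; a small gap indicates a linearly readable code.
    print(acc_linear, acc_nonlinear)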

Although popular accounts suggest that neurons along the ventral visual processing stream become increasingly selective for particular objects, this appears at odds with the fact that inferior temporal cortical (IT) neurons are broadly tuned. To explore this apparent contradiction, we compared processing in two ventral stream stages (visual cortical areas V4 and IT) in the rhesus macaque monkey. We confirmed that IT neurons are indeed more selective for conjunctions of visual features than V4 neurons and that this increase in feature conjunction selectivity is accompanied by an increase in tolerance ("invariance") to identity-preserving transformations (e.

Mounting evidence suggests that 'core object recognition,' the ability to rapidly recognize objects despite substantial appearance variation, is solved in the brain via a cascade of reflexive, largely feedforward computations that culminate in a powerful neuronal representation in the inferior temporal cortex. However, the algorithm that produces this solution remains poorly understood. Here we review evidence ranging from individual neurons and neuronal populations to behavior and computational models.

Most neurons in cortical area MT (V5) are strongly direction selective, and their activity is closely associated with the perception of visual motion. These neurons have large receptive fields built by combining inputs with smaller receptive fields that respond to local motion. Humans integrate motion over large areas and can perceive what has been referred to as global motion.

Our ability to recognize objects despite large changes in position, size, and context is achieved through computations that are thought to increase both the shape selectivity and the tolerance ("invariance") of the visual representation at successive stages of the ventral pathway [visual cortical areas V1, V2, and V4 and inferior temporal cortex (IT)]. However, these ideas have proven difficult to test. Here, we consider how well population activity patterns at two stages of the ventral stream (V4 and IT) discriminate between, and generalize across, different images.
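The two properties measured here, discrimination and generalization, can be operationalized with a simple cross-condition decoding sketch (simulated data, scikit-learn, and our own variable names assumed): train a classifier on population responses to objects under one transformation, then test it on the same condition (discrimination) and on a different transformation (generalization).

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(6)
    n_trials, n_neurons = 100, 40
    labels = np.tile([0, 1], n_trials // 2)     # two objects

    tuning = rng.normal(size=(2, n_neurons))    # object tuning
    shift = rng.normal(size=n_neurons)          # effect of a position change
    resp_pos1 = tuning[labels] + rng.normal(0, 0.5, (n_trials, n_neurons))
    resp_pos2 = tuning[labels] + shift + rng.normal(0, 0.5, (n_trials, n_neurons))

    clf = LogisticRegression(max_iter=1000).fit(resp_pos1, labels)
    acc_discriminate = clf.score(resp_pos1, labels)   # same position: discrimination
    acc_generalize = clf.score(resp_pos2, labels)     # new position: generalization
    print(acc_discriminate, acc_generalize)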

The visual system is tasked with extracting stimulus content (e.g. the identity of an object) from the spatiotemporal light pattern falling on the retina.
