Publications by authors named "Gregory C. DeAngelis"

Motion provides a powerful sensory cue for segmenting a visual scene into objects and inferring the causal relationships between objects. Fundamental mechanisms involved in this process are the integration and segmentation of local motion signals. However, the computations that govern whether local motion signals are perceptually integrated or segmented remain unclear.

For the brain to compute object motion in the world during self-motion, it must discount the global patterns of image motion (optic flow) caused by self-motion. Optic flow parsing is a proposed visual mechanism for computing object motion in the world, and studies in both humans and monkeys have demonstrated perceptual biases consistent with the operation of a flow-parsing mechanism. However, the neural basis of flow parsing remains unknown.

Elucidating the neural basis of perceptual biases, such as those produced by visual illusions, can provide powerful insights into the neural mechanisms of perceptual inference. However, studying the subjective percepts of animals poses a fundamental challenge: unlike human participants, animals cannot be verbally instructed to report what they see, hear, or feel. Instead, they must be trained to perform a task for reward, and researchers must infer from their responses what the animal perceived.

Neurons throughout the brain modulate their firing rate lawfully in response to sensory input. Theories of neural computation posit that these modulations reflect the outcome of a constrained optimization in which neurons aim to robustly and efficiently represent sensory information. Our understanding of how this optimization varies across different areas in the brain, however, is still in its infancy.
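
One classic, concrete instance of this kind of constrained optimization may help fix ideas: under a bounded firing-rate range, the information-maximizing monotone tuning curve for a scalar stimulus is the stimulus's cumulative distribution function (Laughlin's histogram-equalization result). The sketch below illustrates that textbook case only; it is not the model developed in this work, and the function name is ours.

```python
import numpy as np

def cdf_tuning_curve(stimuli, r_max=100.0):
    """Map stimulus values to firing rates through the empirical CDF of
    the stimulus ensemble (histogram equalization), so that all response
    levels between 0 and r_max are used equally often."""
    s = np.sort(np.asarray(stimuli))
    def rate(x):
        return r_max * np.searchsorted(s, x, side="right") / s.size
    return rate

samples = np.random.default_rng(0).normal(size=10_000)  # stimulus ensemble
rate = cdf_tuning_curve(samples)
print(rate(-1.0), rate(0.0), rate(1.0))  # roughly 16, 50, 84 spikes/s
```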

Article Synopsis
  • The text discusses the challenge of defining motion based on different reference frames (like eye position or external surroundings) and how existing studies have produced mixed results on this topic.
  • A new hierarchical Bayesian model is introduced that translates retinal velocities into perceived velocities, aligning with the natural structure of how visual elements move together in related reference frames.
  • The model not only segments visual inputs but also makes experimentally testable predictions, helping to characterize how individual observers perceive motion and providing a foundation for building Gestalt principles into models of visual processing (a generic Bayesian sketch follows below).
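
The synopsis does not spell out the model's form. For orientation only, the sketch below shows the simplest one-dimensional version of Bayesian velocity estimation: a Gaussian likelihood centered on the retinal velocity combined with a zero-mean "slow speed" prior. This is a generic textbook construction, not the paper's hierarchical model, and the names are ours.

```python
def perceived_velocity(v_retinal, sigma_like, sigma_prior):
    """Posterior-mean velocity for a Gaussian likelihood centered on the
    retinal measurement and a zero-mean Gaussian slow-speed prior; the
    estimate shrinks toward zero as sensory noise grows."""
    gain = sigma_prior**2 / (sigma_prior**2 + sigma_like**2)
    return gain * v_retinal

# Noisier (e.g., low-contrast) measurements are biased more toward slow speeds.
print(perceived_velocity(8.0, sigma_like=1.0, sigma_prior=4.0))  # ~7.5 deg/s
print(perceived_velocity(8.0, sigma_like=4.0, sigma_prior=4.0))  # 4.0 deg/s
```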

A key computation in building adaptive internal models of the external world is to ascribe sensory signals to their likely cause(s), a process of causal inference (CI). CI is well studied within the framework of two-alternative forced-choice tasks, but less well understood in the context of naturalistic action-perception loops. Here, we examine the process of disambiguating retinal motion caused by self- and/or object-motion during closed-loop navigation.

Smooth eye movements are common during natural viewing; we frequently rotate our eyes to track moving objects or to maintain fixation on an object during self-movement. Reliable information about smooth eye movements is crucial to various neural computations, such as estimating heading from optic flow or judging depth from motion parallax. While it is well established that extraretinal signals (e.g., …

An important function of the visual system is to represent 3D scene structure from a sequence of 2D images projected onto the retinae. During observer translation, the relative image motion of stationary objects at different distances (motion parallax) provides potent depth information. However, if an object moves relative to the scene, this complicates the computation of depth from motion parallax since there will be an additional component of image motion related to scene-relative object motion.
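
As background for why motion parallax is informative about depth, and why object motion corrupts it: for lateral observer translation at speed T with the eye counter-rotating to hold fixation at distance F, the retinal speed of a stationary point at distance Z near the fovea is approximately T(1/Z - 1/F). The sketch below is standard small-angle geometry, not this paper's model; names are ours.

```python
def parallax_velocity(T, Z, F):
    """Approximate retinal velocity (rad/s) of a stationary point at
    distance Z (m) for lateral observer translation at T (m/s) with the
    eye rotating to hold fixation at distance F (m): v ~ T*(1/Z - 1/F)."""
    return T * (1.0 / Z - 1.0 / F)

T, F = 0.1, 1.0                       # 10 cm/s translation, fixation at 1 m
for Z in (0.5, 1.0, 2.0):             # near, fixation distance, far
    print(Z, parallax_velocity(T, Z, F))
# Near points move opposite to far points, and a point at the fixation
# distance is stationary on the retina. Any scene-relative object motion
# adds directly to this velocity, confounding the depth computation.
```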

Detection of objects that move in a scene is a fundamental computation performed by the visual system. This computation is greatly complicated by observer motion, which causes most objects to move across the retinal image. How the visual system detects scene-relative object motion during self-motion is poorly understood.

There are two distinct sources of retinal image motion: objects moving in the world and observer movement. When the eyes move to track a target of interest, the retinal velocity of some object in the scene will depend on both eye velocity and that object's motion in the world. Thus, to compute the object's velocity relative to the head, a coordinate transformation must be performed by vectorially adding eye velocity and retinal velocity.
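
The transformation described here is simple to state explicitly; the sketch below is a direct transcription of that vector addition (2-D velocities in deg/s; names are ours).

```python
import numpy as np

def head_centered_velocity(v_retinal, v_eye):
    """Object velocity relative to the head as the vector sum of the
    object's retinal velocity and the eye's rotational velocity."""
    return np.asarray(v_retinal) + np.asarray(v_eye)

# A target tracked perfectly by the eye has zero retinal velocity, yet
# still moves relative to the head at the eye's velocity.
print(head_centered_velocity([0.0, 0.0], [10.0, 0.0]))   # [10.  0.]
print(head_centered_velocity([3.0, -2.0], [10.0, 0.0]))  # [13. -2.]
```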

Multisensory plasticity enables our senses to dynamically adapt to each other and the external environment, a fundamental operation that our brain performs continuously. We searched for neural correlates of adult multisensory plasticity in the dorsal medial superior temporal area (MSTd) and the ventral intraparietal area (VIP) in two male rhesus macaques using a paradigm of supervised calibration. We report little plasticity in neural responses in the relatively low-level multisensory cortical area MSTd.

Perceptual decision-making is increasingly being understood to involve an interaction between bottom-up sensory-driven signals and top-down choice-driven signals, but how these signals interact to mediate perception is not well understood. The parieto-insular vestibular cortex (PIVC) is an area with prominent vestibular responsiveness, and previous work has shown that inactivating PIVC impairs vestibular heading judgments. To investigate the nature of PIVC's contribution to heading perception, we recorded extracellularly from PIVC neurons in two male rhesus macaques during a heading discrimination task, and compared findings with data from previous studies of dorsal medial superior temporal (MSTd) and ventral intraparietal (VIP) areas using identical stimuli.

When the eyes rotate during translational self-motion, the focus of expansion (FOE) in optic flow no longer indicates heading, yet heading judgements are largely unbiased. Much emphasis has been placed on the role of extraretinal signals in compensating for the visual consequences of eye rotation. However, recent studies also support a purely visual mechanism of rotation compensation in heading-selective neurons.
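
The geometric basis for a purely visual compensation mechanism is that, in the standard pinhole motion-field equations (Longuet-Higgins & Prazdny), the rotational component of image velocity is independent of scene depth, whereas the translational component scales with 1/Z; the rotational field is therefore recoverable, in principle, from the flow field alone. The sketch below states one common sign convention and is not code from this study.

```python
def motion_field(x, y, Z, T, omega):
    """Image velocity at image point (x, y) (focal length 1) for a scene
    point at depth Z, observer translation T = (Tx, Ty, Tz), and eye
    rotation omega = (wx, wy, wz). The translational term depends on Z;
    the rotational term does not."""
    Tx, Ty, Tz = T
    wx, wy, wz = omega
    u = (-Tx + x * Tz) / Z + x * y * wx - (1 + x**2) * wy + y * wz
    v = (-Ty + y * Tz) / Z + (1 + y**2) * wx - x * y * wy - x * wz
    return u, v

# Forward translation plus a rightward eye rotation at one image location:
print(motion_field(0.2, 0.1, Z=2.0, T=(0, 0, 1.0), omega=(0, 0.1, 0)))
```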

During self-motion, an independently moving object generates retinal motion that is the vector sum of its world-relative motion and the optic flow caused by the observer's self-motion. A hypothesized mechanism for the computation of an object's world-relative motion is flow parsing, in which the optic flow field due to self-motion is globally subtracted from the retinal flow field. This subtraction generates a bias in perceived object direction (in retinal coordinates) away from the optic flow vector at the object's location.
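
The subtraction described here can be written in one line; the sketch below also shows a partial-subtraction (gain < 1) variant, which is one common way residual biases are modeled, though not necessarily this paper's formulation. Names are ours.

```python
import numpy as np

def flow_parse(v_retinal, v_self_flow, gain=1.0):
    """Estimate scene-relative object motion by subtracting (a fraction
    gain of) the optic flow expected from self-motion at the object's
    location from the object's retinal velocity (deg/s)."""
    return np.asarray(v_retinal) - gain * np.asarray(v_self_flow)

v_retinal = np.array([4.0, 0.0])
v_flow = np.array([0.0, -3.0])          # local flow due to self-motion
print(flow_parse(v_retinal, v_flow))             # full parsing: [4. 3.]
print(flow_parse(v_retinal, v_flow, gain=0.5))   # partial parsing: [4. 1.5]
```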

Neurophysiological studies of multisensory processing have largely focused on how the brain integrates information from different sensory modalities to form a coherent percept. However, in the natural environment, an important extra step is needed: the brain faces the problem of causal inference, which involves determining whether different sources of sensory information arise from the same environmental cause, such that integrating them is advantageous. Behavioral and computational studies have provided a strong foundation for studying causal inference, but studies of its neural basis have only recently been undertaken. This review focuses on recent advances regarding how the brain infers the causes of sensory inputs and uses this information to make robust perceptual estimates.

Neurons represent spatial information in diverse reference frames, but it remains unclear whether neural reference frames change with task demands and whether these changes can account for behavior. In this study, we examined how neurons represent the direction of a moving object during self-motion, while monkeys switched, from trial to trial, between reporting object direction in head- and world-centered reference frames. Self-motion information is needed to compute object motion in world coordinates but should be ignored when judging object motion in head coordinates.

To take the best actions, we often need to maintain and update beliefs about variables that cannot be directly observed. To understand the principles underlying such belief updates, we need tools to uncover subjects' belief dynamics from natural behavior. We tested whether eye movements could be used to infer subjects' beliefs about latent variables using a naturalistic navigation task.

Visual motion processing is a well-established model system for studying neural population codes in primates. The common marmoset, a small New World primate, offers unparalleled opportunities to probe these population codes in key motion processing areas, such as cortical areas MT and MST, because these areas are accessible for imaging and recording at the cortical surface. However, little is currently known about the perceptual abilities of the marmoset.

Identifying the features of population responses that are relevant to the amount of information encoded by neuronal populations is a crucial step toward understanding population coding. Statistical features, such as tuning properties, individual and shared response variability, and global activity modulations, could all affect the amount of information encoded and modulate behavioral performance. We show that two features in particular affect information: the modulation of population responses across conditions (population signal) and the inverse population covariability along the modulation axis (projected precision).
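
The two features named here match the factorization of linear Fisher information, I = f'(s)^T Sigma^{-1} f'(s), into a signal term (the modulation f'(s)) and a precision term (inverse covariance along that axis). The toy sketch below is our construction, consistent with that formula rather than taken from the paper.

```python
import numpy as np

def linear_fisher_information(dmu, Sigma):
    """I = dmu^T Sigma^{-1} dmu, where dmu is the change in mean
    population response across conditions (population signal) and Sigma
    is the response covariance (its inverse along dmu gives the
    projected precision)."""
    dmu = np.asarray(dmu, dtype=float)
    return float(dmu @ np.linalg.solve(np.asarray(Sigma, dtype=float), dmu))

# Two-neuron toy case: correlations aligned with the signal axis limit
# information, whereas correlations orthogonal to it do not.
dmu = np.array([1.0, 1.0])
aligned = [[1.0, 0.8], [0.8, 1.0]]
orthogonal = [[1.0, -0.8], [-0.8, 1.0]]
print(linear_fisher_information(dmu, aligned))     # ~1.11
print(linear_fisher_information(dmu, orthogonal))  # 10.0
```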

The brain infers our spatial orientation and properties of the world from ambiguous and noisy sensory cues. Judging self-motion (heading) in the presence of independently moving objects poses a challenging inference problem because the image motion of an object could be attributed to movement of the object, self-motion, or some combination of the two. We test whether perception of heading and object motion follows predictions of a normative causal inference framework.
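
For orientation, the sketch below implements the standard two-cue Bayesian causal-inference computation (in the style of Kording et al., 2007): the posterior probability that two noisy one-dimensional measurements share a common cause. It illustrates the general framework being tested, not this paper's specific heading/object-motion model; names are ours.

```python
import numpy as np

def p_common(x1, x2, s1, s2, sp=10.0, prior_c=0.5):
    """Posterior probability of a common cause for measurements x1, x2
    with noise SDs s1, s2 and a zero-mean Gaussian source prior (SD sp)."""
    var = s1**2 * s2**2 + s1**2 * sp**2 + s2**2 * sp**2
    like_c1 = np.exp(-0.5 * ((x1 - x2)**2 * sp**2 + x1**2 * s2**2
                             + x2**2 * s1**2) / var) / (2 * np.pi * np.sqrt(var))
    like_c2 = (np.exp(-0.5 * x1**2 / (s1**2 + sp**2))
               / np.sqrt(2 * np.pi * (s1**2 + sp**2))
               * np.exp(-0.5 * x2**2 / (s2**2 + sp**2))
               / np.sqrt(2 * np.pi * (s2**2 + sp**2)))
    return like_c1 * prior_c / (like_c1 * prior_c + like_c2 * (1 - prior_c))

# Similar measurements favor one cause; discrepant ones favor two.
print(p_common(1.0, 1.2, 1.0, 1.0))   # ~0.88
print(p_common(1.0, 6.0, 1.0, 1.0))   # ~0.02
```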

Creating three-dimensional (3D) representations of the world from two-dimensional retinal images is fundamental to visually guided behaviors including reaching and grasping. A critical component of this process is determining the 3D orientation of objects. Previous studies have shown that neurons in the caudal intraparietal area (CIP) of the macaque monkey represent 3D planar surface orientation (i.e., …

Multiple areas of macaque cortex are involved in visual motion processing, but their relative functional roles remain unclear. The medial superior temporal (MST) area is typically divided into lateral (MSTl) and dorsal (MSTd) subdivisions that are thought to be involved in processing object motion and self-motion, respectively. Whereas MSTd has been studied extensively with regard to processing visual and nonvisual self-motion cues, little is known about self-motion signals in MSTl, especially nonvisual signals.

We examined the responses of neurons in posterior parietal area 7a to passive rotational and translational self-motion stimuli, while systematically varying the speed of visually simulated (optic flow cues) or actual (vestibular cues) self-motion. Contrary to a general belief that responses in area 7a are predominantly visual, we found evidence for a vestibular dominance in self-motion processing. Only a small fraction of neurons showed multisensory convergence of visual/vestibular and linear/angular self-motion cues.
