Philos Trans R Soc Lond B Biol Sci
June 2016
A large body of research has established that, under relatively simple task conditions, human observers integrate uncertain sensory information with learned prior knowledge in an approximately Bayes-optimal manner. However, in many natural tasks, observers must perform this sensory-plus-prior integration when the underlying generative model of the environment consists of multiple causes. Here we ask whether the Bayes-optimal integration seen in simple tasks also applies to natural tasks with more complex generative models, or whether observers instead rely on a less efficient set of heuristics that approximate ideal performance.
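A minimal sketch of this kind of sensory-plus-prior integration, for the one-dimensional Gaussian case (all numbers below are illustrative, not from the study): the Bayes-optimal estimate is a reliability-weighted average of the measurement and the prior, where reliability is inverse variance.

```python
def integrate(sensory_mean, sensory_var, prior_mean, prior_var):
    """Posterior of a Gaussian likelihood combined with a Gaussian prior.

    The posterior mean is a reliability-weighted average of the sensory
    measurement and the prior; reliability is inverse variance.
    """
    w_sense = (1.0 / sensory_var) / (1.0 / sensory_var + 1.0 / prior_var)
    post_mean = w_sense * sensory_mean + (1.0 - w_sense) * prior_mean
    post_var = 1.0 / (1.0 / sensory_var + 1.0 / prior_var)
    return post_mean, post_var

# Example: a noisy measurement of 10.0 is pulled toward a tighter prior at 8.0.
print(integrate(10.0, 4.0, 8.0, 1.0))  # posterior mean 8.4, variance 0.8
```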
Despite growing evidence for perceptual interactions between motion and position, no unifying framework exists to account for these two key features of our visual experience. We show that percepts of both object position and motion derive from a common object-tracking system: a system that optimally integrates sensory signals with a realistic model of motion dynamics, effectively inferring their generative causes. The object-tracking model provides an excellent fit to both position and motion judgments in simple stimuli.
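In the linear-Gaussian case, optimally integrating noisy sensory signals with a model of motion dynamics reduces to Kalman filtering. The sketch below is a generic one-dimensional illustration of that idea, not the paper's full object-tracking model; the dynamics and noise parameters are assumed for illustration.

```python
import numpy as np

# Minimal 1D Kalman filter: track position and velocity from noisy position
# measurements, assuming linear-Gaussian constant-velocity dynamics.
dt = 1.0
A = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity dynamics
H = np.array([[1.0, 0.0]])              # only position is observed
Q = 0.01 * np.eye(2)                    # process (dynamics) noise
R = np.array([[1.0]])                   # measurement noise

x = np.zeros((2, 1))                    # state estimate: [position, velocity]
P = np.eye(2)                           # state uncertainty

def kalman_step(x, P, z):
    # Predict forward under the motion model...
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # ...then correct with the measurement, weighted by relative reliability.
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

for z in [0.9, 2.1, 2.8, 4.2]:          # noisy positions of a moving object
    x, P = kalman_step(x, P, np.array([[z]]))
print(x.ravel())                         # smoothed position and velocity estimates
```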
Amblyopia is a neuro-developmental disorder of the visual cortex that arises from abnormal visual experience early in life. Amblyopia is clinically important because it is a major cause of vision loss in infants and young children. Amblyopia is also of basic interest because it reflects the neural impairment that occurs when normal visual development is disrupted.
Self-generated body movements have reliable visual consequences. This predictive association between vision and action likely underlies modulatory effects of action on visual processing. However, it is unknown whether actions can have generative effects on visual perception.
Proc Natl Acad Sci U S A
March 2013
Because of uncertainty and noise, the brain should use accurate internal models of the statistics of objects in scenes to interpret sensory signals. Moreover, the brain should adapt its internal models to the statistics within local stimulus contexts. Consider the problem of hitting a baseball.
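One simple way to model this kind of context adaptation, offered here only as a hypothetical stand-in for the internal-model updating the abstract describes: track the local stimulus statistics with exponentially weighted running estimates of a Gaussian prior's mean and variance (the learning rate and numbers are assumptions for illustration).

```python
# Hypothetical sketch: adapt a Gaussian prior to local context statistics
# with exponentially weighted running estimates of mean and variance.
def adapt_prior(samples, rate=0.1, mean=0.0, var=1.0):
    for s in samples:
        err = s - mean
        mean += rate * err                 # drift the prior mean toward recent stimuli
        var += rate * (err * err - var)    # track the spread of recent stimuli
    return mean, var

# After exposure to a "fast pitch" context, the prior shifts accordingly.
print(adapt_prior([28.0, 31.0, 30.0, 29.5], mean=20.0, var=4.0))
```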
Limits in visual working memory (VWM) strongly constrain human performance across many tasks. However, the nature of these limits is not well understood. In this article we develop an ideal observer analysis of human VWM by deriving the expected behavior of an optimally performing but limited-capacity memory system.
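To illustrate the flavor of an ideal observer analysis under a capacity limit, here is a deliberately simplified fixed-capacity change-detection model (a common textbook stand-in, not necessarily the model derived in the article): the observer stores K of N items perfectly and guesses on the rest.

```python
# Illustrative fixed-capacity observer for change detection: the observer
# stores K of N items perfectly and guesses about unstored items.
def hit_rate(N, K, guess_rate=0.5):
    p_stored = min(K, N) / N               # probability the changed item is in memory
    return p_stored + (1.0 - p_stored) * guess_rate

for N in (2, 4, 8):
    print(N, round(hit_rate(N, K=3), 3))   # predicted performance falls with set size
```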
When reaching for objects, humans make saccades to fixate the object at or near the time the hand begins to move. In order to address whether the CNS relies on a common representation of target positions to plan both saccades and hand movements, we quantified the contributions of visual short-term memory (VSTM) to hand and eye movements executed during the same coordinated actions. Subjects performed a sequential movement task in which they picked up one of two objects on the right side of a virtual display (the "weapon"), moved it to the left side of the display (to a "reloading station") and then moved it back to the right side to hit the other object (the target).
It is well-established that some aspects of perception and action can be understood as probabilistic inferences over underlying probability distributions. In some situations, it would be advantageous for the nervous system to sample interpretations from a probability distribution rather than commit to a particular interpretation. In this study, we asked whether visual percepts correspond to samples from the probability distribution over image interpretations, a form of sampling that we refer to as Bayesian sampling.
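A sketch of what Bayesian sampling predicts, with an assumed two-interpretation posterior (the distribution and labels are hypothetical): sampling predicts that report proportions match posterior probabilities, whereas always committing to the maximum a posteriori (MAP) interpretation predicts a single percept every time.

```python
import random

# Percepts as draws from the posterior over image interpretations,
# rather than always the single most probable (MAP) interpretation.
posterior = {"interpretation_A": 0.7, "interpretation_B": 0.3}

def sample_percept(posterior):
    r, cum = random.random(), 0.0
    for interp, p in posterior.items():
        cum += p
        if r < cum:
            return interp
    return interp  # guard against floating-point rounding

percepts = [sample_percept(posterior) for _ in range(10_000)]
# Sampling predicts ~70/30 report proportions; MAP would predict 100/0.
print(percepts.count("interpretation_A") / len(percepts))
```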
Previous work has shown that humans continuously use visual feedback of the hand to control goal-directed movements online. In most studies, visual error signals were predominantly in the image plane and, thus, were available in an observer's retinal image. We investigate how humans use visual feedback about finger depth provided by binocular and monocular depth cues to control pointing movements.
Previous cue integration studies have examined continuous perceptual dimensions (e.g., size) and have shown that human cue integration is well described by a normative model in which cues are weighted in proportion to their sensory reliability, as estimated from single-cue performance.
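The normative model referred to here weights each cue estimate by its reliability, defined as inverse variance. A minimal sketch with illustrative numbers:

```python
# Reliability-weighted cue combination: each cue estimate s_i is weighted by
# its reliability r_i = 1 / sigma_i**2, as estimated from single-cue performance.
def combine_cues(estimates, sigmas):
    reliabilities = [1.0 / s**2 for s in sigmas]
    total = sum(reliabilities)
    mean = sum(r * e for r, e in zip(reliabilities, estimates)) / total
    sigma_combined = (1.0 / total) ** 0.5   # never worse than the best single cue
    return mean, sigma_combined

# Example: a reliable cue (sigma=1) dominates an unreliable one (sigma=3).
print(combine_cues([10.0, 16.0], [1.0, 3.0]))  # mean 10.6, sigma ~0.949
```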
We tested whether changing accuracy demands for simple pointing movements leads humans to adjust the feedback control laws that map sensory signals from the moving hand to motor commands. Subjects made repeated pointing movements in a virtual environment to touch a button whose shape varied randomly from trial to trial among squares, rectangles oriented perpendicular to the movement path, and rectangles oriented parallel to the movement path. Subjects performed the task on a horizontal table but saw the target configuration and a virtual rendering of their pointing finger through a mirror mounted between a monitor and the table.
Human behavior in natural tasks consists of an intricately coordinated dance of cognitive, perceptual, and motor activities. Although much progress has been made in understanding cognitive, perceptual, and motor processing in isolation or in highly constrained settings, few studies have examined how these systems are coordinated in the context of executing complex behavior. Previous research has suggested that, in the course of visually guided reaching movements, the eye and hand are yoked, or linked in a nonadaptive manner.
Numerous studies have shown that extra-retinal signals can disambiguate motion information created by movements of the eye or head. We report a new form of cross-modal sensory integration in which the kinesthetic information generated by active hand movements essentially captures ambiguous visual motion information. Several previous studies have shown that active movement can bias observers' percepts of bi-stable stimuli; however, these effects seem to be best explained by attentional mechanisms.
The informativeness of sensory cues depends critically on statistical regularities in the environment. However, statistical regularities vary between different object categories and environments. We asked whether and how the brain changes the prior assumptions about scene statistics used to interpret visual depth cues when stimulus statistics change.
Orientation disparity, the difference in orientation that results when a texture element on a slanted surface is projected to the two eyes, has been proposed as a binocular cue for 3D orientation. Since orientation disparity is confounded with position disparity, neither behavioral nor neurophysiological experiments have successfully isolated its contribution to slant estimates or established whether the visual system uses it. Using a modified disparity energy model, we simulated a population of binocular visual cortical neurons tuned to orientation disparity and measured the amount of Fisher information contained in the activity patterns.
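For independent Poisson neurons, the Fisher information a population carries about a stimulus parameter theta is the sum over neurons of f_i'(theta)^2 / f_i(theta), where f_i is each neuron's tuning curve. The sketch below computes this quantity for an assumed bank of Gaussian tuning curves; the tuning parameters are hypothetical, not those of the paper's modified disparity energy model.

```python
import numpy as np

# Fisher information of an independent-Poisson population about theta:
# I(theta) = sum_i f_i'(theta)**2 / f_i(theta), with f_i the tuning curves.
prefs = np.linspace(-40.0, 40.0, 33)        # preferred orientation disparities (deg)
width, gain, base = 10.0, 20.0, 0.5         # assumed tuning width, gain, baseline

def tuning(theta):
    return base + gain * np.exp(-0.5 * ((theta - prefs) / width) ** 2)

def tuning_deriv(theta):
    return gain * np.exp(-0.5 * ((theta - prefs) / width) ** 2) * (prefs - theta) / width**2

def fisher_info(theta):
    f = tuning(theta)
    return np.sum(tuning_deriv(theta) ** 2 / f)

print(fisher_info(0.0))   # larger values imply finer disparity discrimination
```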
We assessed the usefulness of stereopsis across the visual field by quantifying how retinal eccentricity and distance from the horopter affect humans' relative dependence on monocular and binocular cues about 3D orientation. The reliabilities of monocular and binocular cues both decline with eccentricity, but the reliability of binocular information decreases more rapidly. Binocular cue reliability also declines with increasing distance from the horopter, whereas the reliability of monocular cues is virtually unaffected.
We investigated whether humans use a target's remembered location to plan reaching movements to targets according to the relative reliabilities of visual and remembered information. Using their index finger, subjects moved a virtual object from one side of a table to the other, and then went back to a target. In some trials, the target shifted unnoticed while the finger made the first movement.
Visual cue integration strategies are known to depend on cue reliability and how rapidly the visual system processes incoming information. We investigated whether these strategies also depend on differences in the information demands for different natural tasks. Using two common goal-oriented tasks, prehension and object placement, we determined whether monocular and binocular information influence estimates of three-dimensional (3D) orientation differently depending on task demands.
Recent studies have shown that humans effectively take into account task variance caused by intrinsic motor noise when planning fast hand movements. However, previous evidence suggests that humans have greater difficulty accounting for arbitrary forms of stochasticity in their environment, both in economic decision making and sensorimotor tasks. We hypothesized that humans can learn to optimize movement strategies when environmental randomness can be experienced and thus implicitly learned over several trials, especially if it mimics the kinds of randomness for which subjects might have generative models.
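Planning that accounts for motor noise is often formalized as expected-gain maximization: choose the aim point that maximizes average payoff when endpoints scatter around the aim with Gaussian noise. The sketch below uses a hypothetical one-dimensional payoff layout (reward and penalty regions, noise level, and payoffs are all assumptions for illustration, not the study's design).

```python
import numpy as np

# Choose the aim point that maximizes expected gain under Gaussian motor noise.
# Reward region [0, 1] pays +1; overlapping penalty region [-1, 0.2] costs -2.
rng = np.random.default_rng(0)
motor_sd = 0.4

def expected_gain(aim, n=100_000):
    endpoints = rng.normal(aim, motor_sd, n)
    gain = np.where((endpoints >= 0.0) & (endpoints <= 1.0), 1.0, 0.0)
    gain -= np.where((endpoints >= -1.0) & (endpoints <= 0.2), 2.0, 0.0)
    return gain.mean()

aims = np.linspace(-0.5, 1.5, 41)
best = max(aims, key=expected_gain)
print(best)   # the optimal aim shifts away from the penalty, not to the reward center
```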
People use remembered location information to move their hand to a target location when no visual information is available. However, for several reasons, memorized information may be imprecise and inaccurate. Here, we study whether and to what extent humans use the remembered location of an object to plan reaching movements when the target is visible.
How the visual system learns the statistical regularities (e.g., symmetry) needed to interpret pictorial cues to depth is one of the outstanding questions in perceptual science.
Most research on depth cue integration has focused on stimulus regimes in which stimuli contain the small cue conflicts that would normally arise from sensory noise. In these regimes, linear models of cue integration provide a good approximation to system performance. This article focuses on situations in which large cue conflicts can naturally occur in stimuli.
Vision provides a number of cues about the three-dimensional (3D) layout of objects in a scene that could be used for planning and controlling goal-directed behaviors such as pointing, grasping, and placing objects. An emerging consensus from the perceptual work is that the visual brain is a near-optimal Bayesian estimator of object properties, for example, by integrating cues in a way that accounts for differences in their reliability. We measured how the visuomotor system integrates binocular and monocular cues to 3D surface orientation to guide the placement of objects on a slanted surface.