Weber's law states that estimation noise is proportional to stimulus intensity. Although this holds in perception, it appears absent in visually guided actions where response variability does not scale with object size. This discrepancy is often attributed to dissociated visual processing for perception and action.
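The proportional scaling described above can be sketched numerically. This is a minimal illustration, not the authors' model: the Weber fraction `k` and the stimulus intensities are made-up values chosen only to show that the standard deviation of the estimates grows linearly with intensity while their ratio stays constant.

```python
import numpy as np

rng = np.random.default_rng(0)
k = 0.1  # assumed Weber fraction (illustrative value)

def perceptual_estimate(intensity, n=10000):
    """Simulate noisy estimates whose s.d. scales with stimulus intensity."""
    return intensity + rng.normal(0.0, k * intensity, size=n)

for intensity in (10.0, 20.0, 40.0):
    est = perceptual_estimate(intensity)
    # sd grows with intensity, but sd/intensity stays near k
    print(f"I={intensity:5.1f}  sd~{est.std():.2f}  sd/I~{est.std() / intensity:.3f}")
```

Under this sketch, visually guided actions would instead show a flat (intensity-independent) noise term, which is the discrepancy the abstract refers to.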
Modern virtual reality (VR) devices record six-degree-of-freedom kinematic data with high spatial and temporal resolution and display high-resolution stereoscopic three-dimensional graphics. These capabilities make VR a powerful tool for many types of behavioural research, including studies of sensorimotor, perceptual and cognitive functions. Here we introduce Ouvrai, an open-source solution that facilitates the design and execution of remote VR studies, capitalizing on the surge in VR headset ownership.
Bayesian inference theories have been extensively used to model how the brain derives three-dimensional (3D) information from ambiguous visual input. In particular, the maximum likelihood estimation (MLE) model combines estimates from multiple depth cues according to their relative reliability to produce the most probable 3D interpretation. Here, we tested an alternative theory of cue integration, termed the intrinsic constraint (IC) theory, which postulates that the visual system derives the most stable, not most probable, interpretation of the visual input amid variations in viewing conditions.
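The MLE model mentioned above has a standard closed form: each cue is weighted by its reliability (inverse variance), and the fused estimate has lower variance than either cue alone. The sketch below shows this textbook computation with hypothetical stereo and texture depth estimates; the numbers are assumptions, not data from the study.

```python
import numpy as np

def mle_combine(estimates, sigmas):
    """Reliability-weighted (MLE) cue combination.

    estimates: per-cue depth estimates (e.g. stereo, texture)
    sigmas: per-cue noise standard deviations
    """
    r = 1.0 / np.square(sigmas)              # reliability of each cue
    w = r / r.sum()                          # normalized weights
    combined = np.dot(w, estimates)          # reliability-weighted mean
    combined_sigma = np.sqrt(1.0 / r.sum())  # fused s.d. < either single cue
    return combined, combined_sigma

# Hypothetical cues: stereo says 10 cm (s.d. 1), texture says 12 cm (s.d. 2)
depth, sd = mle_combine(np.array([10.0, 12.0]), np.array([1.0, 2.0]))
```

The more reliable stereo cue dominates the fused estimate; the IC theory tested in the abstract departs from exactly this weighting scheme.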
Nearly all tasks of daily life involve skilled object manipulation, and successful manipulation requires knowledge of object dynamics. We recently developed a motor learning paradigm that reveals the categorical organization of motor memories of object dynamics. When participants repeatedly lift a constant-density "family" of cylindrical objects that vary in size, and then an outlier object with a greater density is interleaved into the sequence of lifts, they often fail to learn the weight of the outlier, persistently treating it as a family member despite repeated errors.
Weight prediction is critical for dexterous object manipulation. Previous work has focused on lifting objects presented in isolation and has examined how the visual appearance of an object is used to predict its weight. Here we tested the novel hypothesis that when interacting with multiple objects, as is common in everyday tasks, people exploit the locations of objects to directly predict their weights, bypassing slower and more demanding processing of visual properties to predict weight.
The ability to predict the dynamics of objects, linking applied force to motion, underlies our capacity to perform many of the tasks we carry out on a daily basis. Thus, a fundamental question is how the dynamics of the myriad objects we interact with are organized in memory. Using a custom-built three-dimensional robotic interface that allowed us to simulate objects of varying appearance and weight, we examined how participants learned the weights of sets of objects that they repeatedly lifted.
Because the motions of everyday objects obey Newtonian mechanics, perhaps these laws or approximations thereof are internalized by the brain to facilitate motion perception. Shepard's seminal investigations of this hypothesis demonstrated that the visual system fills in missing information in a manner consistent with kinematic constraints. Here, we show that perception relies on internalized regularities not only when filling in missing information but also when available motion information is inconsistent with the expected outcome of a physical event.
When a grasped object is larger or smaller than expected, haptic feedback automatically recalibrates motor planning. Intriguingly, haptic feedback can also affect 3D shape perception through a process called depth cue reweighting. Although signatures of cue reweighting also appear in motor behavior, it is unclear whether this motor reweighting is the result of upstream perceptual reweighting, or a separate process.
Visually guided movements can show surprising accuracy even when the perceived three-dimensional (3D) shape of the target is distorted. One explanation of this paradox is that an evolutionarily specialized "vision-for-action" system provides accurate shape estimates by relying selectively on stereo information and ignoring less reliable sources of shape information like texture and shading. However, the key support for this hypothesis has come from studies that analyze average behavior across many visuomotor interactions where available sensory feedback reinforces stereo information.
Depth cue reweighting is a feedback-driven learning process that modifies the relative influences of different sources of three-dimensional shape information in perceptual judgments and/or motor planning. In this study, we investigated the mechanism supporting reweighting of stereo and texture information by manipulating the haptic feedback obtained during a series of grasping movements. At the end of each grasp, the fingers closed down on a physical object that was consistent with one of the two cues, depending on the condition.
Neuropsychologia
August 2018
An influential idea in cognitive neuroscience is that perception and action are highly separable brain functions, implemented in distinct neural systems. In particular, this theory predicts that the functional distinction between grasping, a skilled action, and manual estimation, a type of perceptual report, should be mirrored by a split between their respective control systems. This idea has received support from a variety of dissociations, yet many of these findings have been criticized for failing to pinpoint the source of the dissociation.
The visual processes that support grasp planning are often studied by analyzing averaged kinematics of repeated movements, as in the literature on grasping and visual illusions. However, by recalibrating visuomotor mappings, the sensorimotor system can adjust motor outputs without changing visual processing, which complicates the interpretation of averaged behavior. We developed a dynamic model of grasp planning and adaptation that can explain why some studies find decrements in illusion effects on grasping while others do not.
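The recalibration idea can be illustrated with a single-state, error-driven adaptation model. This is a hypothetical sketch, not the authors' exact implementation: an illusory size perturbation is treated as a constant error that the sensorimotor system partially corrects on each trial, so the measured illusion effect decays over repeated movements (and stays constant when the learning rate is zero).

```python
def simulate_illusion_decay(perturbation=5.0, learning_rate=0.2, n_trials=20):
    """Trial-by-trial adaptation to a constant size perturbation.

    All parameter values are illustrative assumptions.
    Returns the residual illusion effect on each trial.
    """
    adaptation = 0.0
    effects = []
    for _ in range(n_trials):
        error = perturbation - adaptation    # residual illusion effect this trial
        effects.append(error)
        adaptation += learning_rate * error  # correct a fraction of the error
    return effects

effects = simulate_illusion_decay()
```

With a nonzero learning rate the effect shrinks geometrically across trials, which is one way averaged kinematics can understate an illusion's influence on grasp planning.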
Do illusory distortions of perceived object size influence how wide the hand is opened during a grasping movement? Many studies on this question have reported illusion-resistant grasping, but this finding has been contradicted by other studies showing that grasping movements and perceptual judgments are equally susceptible. One largely unexplored explanation for these contradictions is that illusion effects on grasping can be reduced with repeated movements. Using a visuomotor adaptation paradigm, we investigated whether an adaptation model could predict the time course of Ponzo illusion effects on grasping.
Recent results have shown that effects of pictorial illusions in grasping may decrease over the course of an experiment. This can be explained as an effect of sensorimotor learning if we consider a pictorial size illusion as simply a perturbation of visually perceived size. However, some studies have reported very constant illusion effects over trials.