Publications by authors named "Jing S Pan"

The inversion effect in biological motion suggests that presenting a point-light display (PLD) in an inverted orientation impairs the observer's ability to perceive the movement, likely because observers are unfamiliar with the dynamic characteristics of inverted motion. Vertical dancers (VDs), who are accustomed to performing, and to perceiving others perform, dance movements in an inverted orientation while suspended in the air, offer a unique perspective on this phenomenon. A previous study showed that, given sufficient dynamic information, VDs were more sensitive than typical dancers and non-dancers to the artificial inversion of PLDs depicting dance movements.

This study investigates the optical information for visual event perception. Events are objects in motion; properties such as shape, weight, and surface material influence the dynamics that shape both the movements and the resulting optics. The progressive transformation of visible textures, known as visual kinaesthetic information, specifies movements and objects.

When observers perceive 3D relations, they represent depth and spatial locations with the ground as a reference. This frame of reference could be egocentric, that is, moving with the observer, or allocentric, that is, remaining stationary and independent of the moving observer. We tested whether the representation of relative depth and of spatial location took an egocentric or allocentric frame of reference in three experiments, using a blind walking task.

It is a familiar but challenging task to manually transfer a liquid-filled container without spilling. The action requires stringent control because the dynamics of interacting with the non-rigid aqueous content are complex. In this work, we sought to discover which properties of a liquid-filled container predicted spill-free transfer performance.

The debate surrounding the advantages of binocular versus monocular vision has persisted for decades. This study aimed to investigate whether individuals with monocular vision loss could accurately and precisely perceive large egocentric distances in real-world environments, under natural viewing conditions, comparable to those with normal vision. A total of 49 participants took part in the study, divided into three groups based on their viewing conditions.

Monocular blindness impairs visual depth perception, yet patients seldom report difficulties in targeted actions like reaching, walking, or driving. We hypothesized that, by utilizing monocular depth information and calibrating actions with haptic feedback, monocular patients can perceive egocentric distance and perform targeted actions. We compared targeted reaching in monocular patients and in monocular-viewing and binocular-viewing normal controls.

Traditional laboratory visual search tasks typically involve looking for targets in 2D displays showing exemplar views of objects. In real life, visual search commonly entails 3D objects in 3D spaces, with nonperpendicular viewing and relative motion between observers and search-array items, both of which transform objects' projected images in lawful but unpredictable ways. Furthermore, observers often do not have to memorize a target before searching, but may refer to it while searching, for example, holding a picture of someone while looking for them in a crowd.

Significance: Using static depth information, normal observers monocularly perceived equidistance with high accuracy. With dynamic depth information and/or monocular viewing experience, they perceived it with high precision. Therefore, monocular patients, who are adapted to monocular viewing, should be able to perceive equidistance and perform related tasks.

Information used in visual event perception includes both static image structure projected from opaque object surfaces and dynamic optic flow generated by motion. Events presented in static, blurry, grayscale displays have been shown to be recognized only when, and after, they are presented with optic flow. In this study, we investigate the effects of optic flow and color on the identification of blurry events by examining identification accuracy and eye-movement patterns.

Purpose: This study identifies and characterizes the nasotemporal hemifield difference of interocular suppression in subjects who have been successfully treated for strabismus.

Methods: Interocular suppression in the nasal and temporal hemifields was measured using two methods, binocular phase combination and dichoptic motion coherence, both of which entailed suprathreshold stimuli. We tested 29 clinical subjects who had strabismus (19 with exotropia and 10 with esotropia) but regained good ocular alignment (within 10 prism diopters) after surgical or refractive correction, and 10 control subjects.
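
For readers unfamiliar with the binocular phase combination paradigm, the toy computation below (my own illustration with assumed eye weights, not the study's analysis) shows how the perceived phase of the fused grating indexes the balance between the two eyes: each eye views a grating phase-shifted in the opposite direction, and suppression of one eye pulls the perceived phase toward the other eye's grating.

```python
import numpy as np

# Toy illustration (assumed weights, not the study's model): the left and right eyes
# view gratings phase-shifted by +theta and -theta; the phase of the weighted
# binocular sum indexes how strongly each eye contributes.
def perceived_phase(w_left, w_right, theta_deg=22.5):
    theta = np.deg2rad(theta_deg)
    # w_L*sin(x + theta) + w_R*sin(x - theta) is a sinusoid with this phase:
    return np.rad2deg(np.arctan2((w_left - w_right) * np.sin(theta),
                                 (w_left + w_right) * np.cos(theta)))

print(perceived_phase(1.0, 1.0))  # balanced eyes -> 0 deg
print(perceived_phase(1.0, 0.3))  # right eye suppressed -> ~12.6 deg, toward the left eye's +22.5
```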

Throwing is an important motor skill for human survival and societal development. It has been shown that throwers could select throwable balls for themselves and that a ball's throwability was determined by its size and weight. In this study, we investigated whether throwers could perceive ball throwability for other throwers (experimental confederates) and whether the perceived throwability for others also followed a size-weight relation.

Perceiving the spatial layout of objects is crucial in visual scene perception. Optic flow provides information about spatial layout. This information is not affected by image blur because motion detection uses low spatial frequencies in image structure.

Events consist of objects in motion. When objects move, their opaque surfaces reflect light and produce both static image structure and dynamic optic flow. The static and dynamic optical information co-specify events.

Use of motion to break camouflage extends back to the Cambrian [In the Blink of an Eye: How Vision Sparked the Big Bang of Evolution (New York: Basic Books, 2003)]. We investigated the ability to break camouflage and continue to see camouflaged targets after motion stops. This is crucial for the survival of hunting predators.
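
As an aside, the small simulation below (my own sketch with an assumed random-dot texture and displacement, not the article's stimuli) illustrates why motion breaks camouflage: a target whose texture matches the background is invisible in any single frame, yet a simple frame difference localizes it as soon as it moves.

```python
import numpy as np

rng = np.random.default_rng(1)

# Background and target carry the same random-dot texture, so the target is
# invisible in any single static frame.
background = rng.integers(0, 2, size=(100, 100))
target = rng.integers(0, 2, size=(20, 20))

def render(top, left):
    frame = background.copy()
    frame[top:top + 20, left:left + 20] = target
    return frame

frame1 = render(40, 40)
frame2 = render(40, 43)            # the camouflaged target steps 3 px to the right

# The frame difference is nonzero almost only where the target moved.
rows, cols = np.nonzero(frame1 != frame2)
print("change confined to rows", rows.min(), "-", rows.max(),
      "and cols", cols.min(), "-", cols.max())   # roughly rows 40-59, cols 40-62
```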

Rotating a scene in a frontoparallel plane (rolling) yields a change in orientation of constituent images. When using only information provided by static images to perceive a scene after orientation change, identification performance typically decreases (Rock & Heimer, 1957). However, rolling generates optic flow information that relates the discrete, static images (before and after the change) and forms an embodied memory that aids recognition.

Rationale: Clinical studies have shown that patients with exaggerated risk-taking tendencies have high baseline levels of norepinephrine. In this work, we systemically manipulated norepinephrine levels in rats and studied their behavioral changes in a probabilistic discounting task, which is a paradigm for gauging risk taking.

Methods: This study aims to explore the effects of the selective norepinephrine reuptake inhibitor (atomoxetine at doses of 0.
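
For context, the sketch below (my own toy simulation with assumed reward sizes and probabilities, not the study's parameters) illustrates the structure of a probabilistic discounting task: a small certain reward competes with a large reward whose delivery probability falls across blocks, and risk taking is read off the proportion of risky choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sketch of a probabilistic discounting block structure (assumed reward sizes
# and probabilities): a small certain reward competes with a large reward whose
# delivery probability drops across blocks.
SMALL_CERTAIN = 1                      # pellets, always delivered
LARGE_RISKY = 4                        # pellets, delivered probabilistically
BLOCK_PROBS = [1.0, 0.5, 0.25, 0.125]  # descending probability of the large reward

def run_block(p_large, n_trials=10):
    """Simulate one block with a simple value-matching chooser; return % risky choices."""
    risky = 0
    for _ in range(n_trials):
        ev_risky = LARGE_RISKY * p_large
        # Choose the risky option in proportion to its share of total expected value.
        if rng.random() < ev_risky / (ev_risky + SMALL_CERTAIN):
            risky += 1
    return 100 * risky / n_trials

for p in BLOCK_PROBS:
    print(f"P(large reward) = {p:<5}  risky choices: {run_block(p):.0f}%")
```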

Mon-Williams and Bingham (2011) developed an affordance model of the spatial structure of reaches-to-grasp. With a single free parameter (P), the model predicted the safety margins (SMs) exhibited in maximum grasp apertures (MGAs) during the approach of the hand to a target object, as a function of an affordance measure of object size and a functional measure of hand size. An affordance analysis revealed that object size is determined by a diagonal through the object, called the maximum object extent.
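
To make the quantities concrete, here is a schematic sketch (my own, using a placeholder linear prediction rather than Mon-Williams and Bingham's published equation): the maximum object extent is the diagonal through the object, and the safety margin is the amount by which the maximum grasp aperture exceeds that extent.

```python
import math

# Sketch of the quantities described above. The prediction is a placeholder linear
# form with one free parameter P, NOT the published model.
def max_object_extent(width, height, depth):
    """Longest diagonal through a box-shaped object (cm)."""
    return math.sqrt(width**2 + height**2 + depth**2)

def predicted_mga(object_extent, hand_span, P=0.2):
    """Hypothetical prediction: open the hand part of the way from the object
    extent toward the maximum functional hand span, governed by P."""
    return object_extent + P * (hand_span - object_extent)

extent = max_object_extent(5.0, 5.0, 7.0)      # ~9.9 cm object diagonal
mga = predicted_mga(extent, hand_span=18.0)    # predicted maximum grasp aperture
print(f"extent={extent:.1f} cm  MGA={mga:.1f} cm  SM={mga - extent:.1f} cm")
```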

Purpose: From static blurry images, it is difficult to perceive objects because high spatial frequency details are filtered out. However, in the context of events (defined as objects in motion), motion generates optic flow, which provides a depth map of 3D layout and allows good event perception. Visual motion measurement uses low spatial frequencies that remain available in blurry images, making events perceivable.
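
A rough way to see this point in code (my own sketch with an assumed moving-square stimulus and OpenCV's Farneback flow, not the study's method): heavy Gaussian blur strips the high spatial frequencies from two frames, yet a dense optic-flow estimate still recovers the displacement.

```python
import numpy as np
import cv2

# Synthetic event: a bright square translating 3 px to the right between frames.
frame1 = np.zeros((200, 200), np.uint8)
frame2 = np.zeros((200, 200), np.uint8)
cv2.rectangle(frame1, (60, 80), (110, 130), 255, -1)
cv2.rectangle(frame2, (63, 80), (113, 130), 255, -1)

# Heavy Gaussian blur removes high-spatial-frequency detail, mimicking a blurry display.
blur1 = cv2.GaussianBlur(frame1, (31, 31), 8)
blur2 = cv2.GaussianBlur(frame2, (31, 31), 8)

# Dense optic flow (Farneback) computed on the blurred frames still recovers the motion.
flow = cv2.calcOpticalFlowFarneback(blur1, blur2, None, 0.5, 3, 21, 3, 7, 1.5, 0)
print("mean horizontal flow over the square (px):",
      flow[80:130, 60:114, 0].mean())   # on the order of the 3 px shift
```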

Bingham and Pagano (1998) argued that calibration is an intrinsic component of perception-action that yields accurate targeted actions. They described calibration as a mapping from embodied units of perception to embodied units of action. This mapping theory yields a number of predictions.
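
To illustrate the idea (my own minimal sketch, not Bingham and Pagano's formal model): treat calibration as a gain that maps perceived distance in embodied perceptual units onto reach distance in embodied action units, with the gain adjusted from feedback about reach error.

```python
# Illustrative sketch, not the published model: calibration as a linear mapping from
# perceived distance (embodied perceptual units) to reach distance (embodied action
# units), with the mapping gain recalibrated from feedback about reach error.
def calibrate(perceived, actual, gain=1.0, rate=0.3, n_trials=20):
    """Iteratively adjust the perception-to-action gain using feedback error."""
    for _ in range(n_trials):
        reach = gain * perceived           # planned reach in action units
        error = actual - reach             # feedback: undershoot (+) or overshoot (-)
        gain += rate * error / perceived   # nudge the mapping toward zero error
    return gain

# Example: vision underestimates distance by 20%; feedback recalibrates the gain.
print(round(calibrate(perceived=0.8, actual=1.0), 3))   # converges toward 1.25
```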

Bingham and Pagano (1998) described calibration as a mapping from embodied perceptual units to an embodied action unit and suggested that it is an inherent component of perception/action that yields accurate targeted actions. We tested two predictions of this "Mapping Theory." First, calibration should transfer between limbs, because it involves a mapping from perceptual units to an action unit, and thus is functionally specific to the action (Pan, Coats, and Bingham, 2014).

Visual perception studies typically focus on either optic flow structure or image structure, but not on the combination and interaction of these two sources of information. Each offers unique strengths that contrast with the other's weaknesses. Optic flow yields intrinsically powerful information about 3D structure, but is ephemeral.
