The visual encoding of tool-object affordances.

Neuroscience

Cognitive Motor Control Laboratory, School of Applied Physiology, College of Sciences, Georgia Institute of Technology, Atlanta 30332, USA.

Published: December 2015

The perception of tool-object pairs involves understanding their action relationships (affordances). Here, we sought to evaluate how an observer visually encodes tool-object affordances. Eye movements were recorded as right-handed participants freely viewed static, right-handed, egocentric tool-object images across three contexts: correct (e.g. hammer-nail), incorrect (e.g. hammer-paper), and spatial/ambiguous (e.g. hammer-wood); and three grasp types: no hand, functional grasp-posture (grasping the hammer-handle), and non-functional/manipulative grasp-posture (grasping the hammer-head). There were three areas of interest (AOIs): the object (nail), the operant tool-end (hammer-head), and the graspable tool-end (hammer-handle). Participants passively evaluated whether tool-object pairs were functionally correct or incorrect. Clustering of gaze scanpaths and AOI weightings grouped conditions into three distinct grasp-specific clusters, especially across the correct and spatial tool-object contexts and, to a lesser extent, within the incorrect tool-object context. The grasp-specific gaze scanpath clusters were reasonably robust to the temporal order of gaze scanpaths. Gaze was therefore automatically primed to grasp-affordances even though the task required evaluating tool-object context. Participants also focused primarily on the object and the operant tool-end and only sparsely attended to the graspable tool-end, even in images with functional grasp-postures. In fact, in the absence of a grasp, the object was foveally weighted the most, indicative of a possible object-oriented action-priming effect in which the observer may be evaluating how the tool engages with the object. Unlike the functional grasp-posture, the manipulative grasp-posture caused the greatest disruption of this object-oriented priming effect, ostensibly because it does not afford tool-object action, given its non-functional interaction with the operant tool-end that actually engages the object (e.g., hammer-head to nail). The enhanced attention toward the manipulative grasp-posture may serve to encode grasp intent. These results shed new light on how an observer gathers action information when evaluating static tool-object scenes and reveal how contextual and grasp-specific affordances directly modulate visuospatial attention.
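To make the analysis described above more concrete, the sketch below illustrates one way AOI dwell-time weightings could be computed per trial and then grouped with k-means clustering. This is only a minimal illustration under assumed inputs: the AOI names follow the abstract, but the data layout, toy dwell values, choice of k = 3, and use of scikit-learn are assumptions, not the authors' actual pipeline.

```python
# Hypothetical sketch: AOI dwell-time weighting followed by clustering.
# AOI labels mirror the abstract; all data values here are made up.
import numpy as np
from sklearn.cluster import KMeans

AOIS = ["object", "operant_tool_end", "graspable_tool_end"]

def aoi_weights(fixations):
    """Convert one trial's fixations [(aoi_label, duration_ms), ...]
    into dwell-time proportions over the three AOIs."""
    totals = {aoi: 0.0 for aoi in AOIS}
    for aoi, dur in fixations:
        if aoi in totals:            # fixations outside the AOIs are ignored
            totals[aoi] += dur
    grand = sum(totals.values()) or 1.0
    return np.array([totals[aoi] / grand for aoi in AOIS])

# Toy trials for the three grasp conditions (illustrative numbers only).
trials = {
    "no_hand":      [("object", 620), ("operant_tool_end", 310), ("graspable_tool_end", 40)],
    "functional":   [("object", 480), ("operant_tool_end", 390), ("graspable_tool_end", 90)],
    "manipulative": [("object", 300), ("operant_tool_end", 280), ("graspable_tool_end", 350)],
}

# Stack per-trial AOI weight vectors and cluster them (k = 3 assumed).
X = np.vstack([aoi_weights(fx) for fx in trials.values()])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

for (name, fx), lab in zip(trials.items(), labels):
    print(f"{name:>12}: cluster {lab}, AOI weights {np.round(aoi_weights(fx), 2)}")
```

In this toy setup, each grasp condition yields a distinct AOI weighting and falls into its own cluster, loosely mirroring the grasp-specific clustering reported in the abstract; the real study clustered full gaze scanpaths in addition to AOI weightings.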

DOI: http://dx.doi.org/10.1016/j.neuroscience.2015.09.060

Publication Analysis

Top Keywords

operant tool-end (12)
tool-object (10)
tool-object affordances (8)
tool-object pairs (8)
functional grasp-posture (8)
grasp-posture grasp (8)
graspable tool-end (8)
gaze scanpaths (8)
tool-object context (8)
engages object (8)

