The concerns raised by Henderson, Hayes, Peacock, and Rehrig (2021) are based on misconceptions of our work. We show that Meaning Maps (MMs) do not predict gaze guidance better than a state-of-the-art saliency model based on semantically neutral, high-level features. We argue that there is therefore no evidence to date that MMs index anything beyond these features. Furthermore, we show that although alterations in meaning cause changes in gaze guidance, MMs fail to capture these alterations. We agree that semantic information is important in the guidance of eye movements, but the contribution of MMs to understanding its role remains elusive.


Source: http://dx.doi.org/10.1016/j.cognition.2021.104741


Similar Publications

Cockpit automation has brought significant benefits in terms of mental workload and fatigue. However, the way pilots monitor primary flight instruments may be negatively affected by high confidence in these systems. We examined the effects of automation level on mental workload, manual flight performance, and visual strategies.


Understanding how early scene viewing is guided can reveal fundamental brain mechanisms for quickly making sense of our surroundings. Viewing is often initiated from the left side. Across two experiments, we focused on search initiation for lateralised targets within real-world scenes, investigating the role of the cerebral hemispheres in guiding the first saccade.


Natural eye movements have primarily been studied for over-learned activities such as tea-making, sandwich-making, and hand-washing, which have a fixed sequence of associated actions. These studies demonstrate a sequential activation of low-level cognitive schemas facilitating task completion. However, it is unclear whether these action schemas are activated in the same pattern when a task is novel and a sequence of actions must be planned in the moment.


Eye tracking filters have been shown to improve accuracy of gaze estimation and input for stationary settings. However, their effectiveness during physical movement remains underexplored. In this work, we compare common online filters in the context of physical locomotion in extended reality and propose alterations to improve them for on-the-go settings.


The concept of an intelligent augmented reality (AR) assistant has significant, wide-ranging applications, with potential uses in domains such as medicine, the military, and mechanics. Such an assistant must be able to perceive the environment and actions, reason about the environment state in relation to a given task, and seamlessly interact with the task performer. These interactions typically involve an AR headset equipped with sensors that capture video, audio, and haptic feedback.

