Distinguishing Doors and Floors on All Fours: Landmarks as Tools for Vertical Navigation Learning in Domestic Dogs.

Animals (Basel)

Dog Cognition Lab, Department of Psychology, Barnard College, New York, NY 10027, USA.

Published: November 2024

Spatial navigation allows animals to understand their position in their environment and is crucial to survival. An animal's primary mode of spatial navigation (horizontal or vertical) depends on how it naturally moves through space. Observations of the domestic dog (Canis familiaris) have shown that they, like other terrestrial animals, navigate poorly in vertical space. This deficit is visible in their use of multi-story buildings. To date, no research has examined whether dogs can learn to navigate an anthropogenic vertical environment with the help of a landmark. We therefore investigate the effect of adding a visual or olfactory landmark on dogs' ability to identify when they are on their home floor. Subject behaviors toward their home door and a door on a contrasting floor were compared before and after exposure to a landmark placed outside the home door. While subjects initially showed no difference in latency to approach an apartment door on their home floor versus a wrong floor, in the test trials we found a significant difference in latency to approach the doors among subjects who approached the doors in every trial. Other findings are equivocal, but this result is consistent with the hypothesis that dogs can learn to navigate in vertical space.


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11591085
DOI: http://dx.doi.org/10.3390/ani14223316

Publication Analysis

Top Keywords

spatial navigation (8)
navigate vertical (8)
vertical space (8)
dogs learn (8)
learn navigate (8)
difference latency (8)
latency approach (8)
vertical (5)
distinguishing doors (4)
doors floors (4)

Similar Publications

Historically, electrophysiological correlates of scene processing have been studied in experiments using static stimuli presented for discrete intervals while participants maintain a fixed eye position. Gaps remain in generalizing these findings to real-world conditions, where eye movements are made to select new visual information and where the environment remains stable but changes with our position and orientation in space, driving dynamic visual stimulation. Co-recording eye movements and electroencephalography (EEG) leverages fixations as time-locking events in the EEG recording under free-viewing conditions to create fixation-related potentials (FRPs), providing a neural snapshot with which to study visual processing under naturalistic conditions.
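The FRP technique described above can be sketched in a few lines: given a continuous EEG recording and the sample indices of fixation onsets from a co-registered eye tracker, an epoch is cut around each fixation and the epochs are averaged. The function and variable names below are illustrative, not taken from the publication:

```python
import numpy as np

def fixation_related_potential(eeg, fixation_onsets, sfreq,
                               tmin=-0.1, tmax=0.4):
    """Average EEG epochs time-locked to fixation onsets.

    eeg             -- array of shape (channels, samples)
    fixation_onsets -- sample indices of fixation onsets
    sfreq           -- sampling frequency in Hz
    tmin, tmax      -- epoch window around each onset, in seconds
    """
    start = int(tmin * sfreq)   # samples before onset (negative)
    stop = int(tmax * sfreq)    # samples after onset
    # Keep only fixations whose full epoch fits inside the recording.
    epochs = [eeg[:, s + start: s + stop]
              for s in fixation_onsets
              if s + start >= 0 and s + stop <= eeg.shape[1]]
    # The FRP is the mean across epochs (channels x epoch samples).
    return np.mean(epochs, axis=0)

# Synthetic example: 32 channels, 20 s at 500 Hz, fixations every 0.8 s.
sfreq = 500
eeg = np.random.default_rng(1).standard_normal((32, 10_000))
onsets = np.arange(500, 9_000, 400)
frp = fixation_related_potential(eeg, onsets, sfreq)
```

In practice this epoching would be done with artifact rejection and baseline correction (e.g. via a toolbox such as MNE-Python), but the core of the FRP computation is this fixation-locked averaging.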

View Article and Find Full Text PDF

Expert navigators deploy rational complexity-based decision precaching for large-scale real-world planning.

Proc Natl Acad Sci U S A

January 2025

Institute of Behavioural Neuroscience, Department of Experimental Psychology, University College London, London WC1H 0AP, United Kingdom.

Efficient planning is a distinctive hallmark of intelligence in humans, who routinely make rapid inferences over complex world contexts. However, studies investigating how humans accomplish this tend to focus on naive participants engaged in simplistic tasks with small state spaces, which do not reflect the intricacy, ecological validity, and human specialization in real-world planning. In this study, we examine the street-by-street route planning of London taxi drivers navigating across more than 26,000 streets in London (United Kingdom).


Processing pathways between sensory and default mode network (DMN) regions support recognition, navigation, and memory, but their organisation is not well understood. We show that functional subdivisions of visual cortex and DMN sit at opposing ends of parallel streams of information processing that support visually mediated semantic and spatial cognition, providing convergent evidence from univariate and multivariate task responses and from intrinsic functional and structural connectivity. Participants learned virtual environments consisting of buildings populated with objects, drawn from either a single semantic category or multiple categories.


Three-dimensional (3D) LiDAR is crucial for the autonomous navigation of orchard mobile robots, offering comprehensive and accurate environmental perception. However, the increased richness of information provided by 3D LiDAR also leads to a higher computational burden for point cloud data processing, posing challenges to real-time navigation. To address these issues, this paper proposes a 3D point cloud optimization method based on the octree data structure for autonomous navigation of orchard mobile robots.
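The abstract does not give the details of the proposed octree optimization, but the core idea — recursively subdividing space and keeping one representative point per occupied leaf cell to cut the computational burden — can be illustrated with a hypothetical sketch (names and parameters are ours, not the paper's):

```python
import numpy as np

def octree_downsample(points: np.ndarray, depth: int = 4) -> np.ndarray:
    """Reduce a point cloud by averaging the points within each occupied
    leaf cell of an octree of the given depth (2**depth cells per axis)."""
    lo = points.min(axis=0)
    span = np.maximum(points.max(axis=0) - lo, 1e-9)  # avoid divide-by-zero
    n = 2 ** depth                                    # leaf cells per axis
    # Map each point to its leaf-cell index along x, y, z.
    idx = np.minimum((points - lo) / span * n, n - 1).astype(np.int64)
    keys = idx[:, 0] * n * n + idx[:, 1] * n + idx[:, 2]
    # Group points by leaf cell and emit one averaged point per cell.
    order = np.argsort(keys)
    pts_sorted = points[order]
    _, starts = np.unique(keys[order], return_index=True)
    return np.array([chunk.mean(axis=0)
                     for chunk in np.split(pts_sorted, starts[1:])])

rng = np.random.default_rng(0)
cloud = rng.uniform(0, 10, size=(100_000, 3))   # simulated LiDAR scan
reduced = octree_downsample(cloud, depth=3)     # at most 8**3 = 512 points
```

A fixed-depth subdivision like this is equivalent to voxel-grid downsampling; a true octree additionally stops subdividing sparse regions early, which is what makes it attractive for real-time processing on a mobile robot.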


Achieving a comprehensive understanding of animal intelligence demands an integrative approach that acknowledges the interplay between an organism's brain, body and environment. Insects, despite their limited computational resources, demonstrate remarkable abilities in navigation. Existing computational models often fall short in faithfully replicating the morphology of real insects and their interactions with the environment, hindering validation and practical application in robotics.

