Publications by authors named "Michael Milford"

The use of UAVs for remote sensing is increasing. In this paper, we demonstrate a method for evaluating and selecting hardware suitable for deploying UAV-based remote sensing algorithms under the platform's operational constraints. These constraints hinder the deployment of rapidly evolving computer vision and robotics algorithms on UAVs, because effective implementation requires intricate knowledge of the system and its architecture.

We reveal how implementing the homogeneous, multi-scale mapping frameworks observed in the mammalian brain's mapping systems radically improves the performance of a range of current robotic localization techniques. Roboticists have developed a range of predominantly single- or dual-scale heterogeneous mapping approaches (typically locally metric and globally topological) that starkly contrast with neural encoding of space in mammalian brains: a multi-scale map underpinned by spatially responsive cells like the grid cells found in the rodent entorhinal cortex. Yet the full benefits of a homogeneous multi-scale mapping framework remain unknown in both robotics and biology: in robotics because of the focus on single- or two-scale systems and limits in the scalability and open-field nature of current test environments and benchmark datasets; in biology because of technical limitations when recording from rodents during movement over large areas.

Roboticists have long drawn inspiration from nature to develop navigation and simultaneous localization and mapping (SLAM) systems such as RatSLAM. Animals such as birds and bats possess superlative navigation capabilities, robustly navigating over large, three-dimensional environments, leveraging an internal neural representation of space combined with external sensory cues and self-motion cues. This paper presents a novel neuro-inspired 4DoF (degrees of freedom) SLAM system named NeuroSLAM, based upon computational models of 3D grid cells and multilayered head direction cells, integrated with a vision system that provides external visual cues and self-motion cues.

This study develops an approach to automating the process of vegetation cover estimation using computer vision and pattern recognition algorithms. Visual cover estimation is a key tool for many ecological studies, yet quadrat-based analyses are known to suffer from issues of consistency between people as well as across sites (spatially) and time (temporally). Previous efforts to estimate cover from photographs require considerable manual work.
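To make the task concrete, a minimal baseline for automated cover estimation classifies each pixel of a quadrat photo as vegetation or background and reports the vegetation fraction. The excess-green index and its threshold below are illustrative assumptions, not the method from the paper:

```python
import numpy as np

def estimate_cover(rgb, threshold=20):
    """Estimate fractional vegetation cover from an RGB quadrat photo.

    Classifies pixels with the excess-green index ExG = 2G - R - B;
    the threshold is a tunable assumption.
    """
    rgb = rgb.astype(np.int32)
    exg = 2 * rgb[..., 1] - rgb[..., 0] - rgb[..., 2]
    vegetation = exg > threshold
    return vegetation.mean()

# Synthetic quadrat: left half green (vegetation), right half brown (soil).
img = np.zeros((100, 100, 3), dtype=np.uint8)
img[:, :50] = (40, 160, 40)   # green pixels
img[:, 50:] = (120, 90, 60)   # soil pixels
print(round(estimate_cover(img), 2))  # 0.5
```

A real pipeline would replace the fixed threshold with a trained classifier, but the same pixel-fraction output is what replaces the human estimate.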

Most robot navigation systems perform place recognition using a single-sensor modality and one, or at most two heterogeneous map scales. In contrast, mammals perform navigation by combining sensing from a wide variety of modalities including vision, auditory, olfactory and tactile senses with a multi-scale, homogeneous neural map of the environment. In this paper, we develop a multi-scale, multi-sensor system for mapping and place recognition that combines spatial localization hypotheses at different spatial scales from multiple different sensors to calculate an overall place recognition estimate.
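The core fusion idea can be sketched simply: each sensor/scale channel produces match scores over the same candidate places, and the channels are combined into one estimate. The normalise-and-sum rule and the score values below are illustrative assumptions, not the paper's exact fusion method:

```python
import numpy as np

def fuse_place_hypotheses(hypotheses):
    """Combine place-recognition score vectors from multiple
    sensor/scale channels into one overall estimate.

    Each entry is an array of match scores over the same candidate
    places; scores are normalised per channel and summed so that
    every channel contributes equally.
    """
    combined = np.zeros_like(hypotheses[0], dtype=float)
    for scores in hypotheses:
        s = np.asarray(scores, dtype=float)
        combined += s / s.sum()
    return int(np.argmax(combined)), combined

# Three hypothetical channels (e.g. fine-scale camera, coarse-scale
# camera, lidar) scoring five candidate places; no single channel is
# decisive, but place 2 wins overall.
channels = [
    [0.1, 0.2, 0.9, 0.1, 0.1],
    [0.2, 0.1, 0.6, 0.5, 0.1],
    [0.1, 0.7, 0.8, 0.1, 0.1],
]
best, _ = fuse_place_hypotheses(channels)
print(best)  # 2
```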

Complex brains evolved in order to comprehend and interact with complex environments in the real world. Despite significant progress in our understanding of perceptual representations in the brain, our understanding of how the brain carries out higher-level processing remains largely superficial. This disconnect is understandable, since the direct mapping of sensory inputs to perceptual states is readily observed, while mappings between (unknown) stages of processing and intermediate neural states are not.

Robotic mapping and localization systems typically operate at either one fixed spatial scale, or over two, combining a local metric map and a global topological map. In contrast, recent high-profile discoveries in neuroscience have indicated that animals such as rodents navigate the world using multiple parallel maps, with each map encoding the world at a specific spatial scale. While a number of purely theoretical investigations have hypothesized several possible benefits of such a multi-scale mapping system, no one has comprehensively investigated the potential mapping and place recognition performance benefits for navigating robots in large real-world environments, especially using more than two homogeneous map scales.
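One hypothesized benefit of parallel maps at several scales can be illustrated with a toy 1-D example: a position represented only by its phase within one small spatial period is ambiguous, but the phases across several periods jointly pin it down over a much larger range. The periods and search parameters below are illustrative assumptions, loosely analogous to grid-cell modules:

```python
import numpy as np

SCALES = [3.0, 5.0, 7.0]   # metres; hypothetical map-scale periods

def encode(position):
    """Represent a position by its phase within each scale's period."""
    return [position % s for s in SCALES]

def decode(phases, max_range=100.0, step=0.01):
    """Recover the position most consistent with all phases by search."""
    candidates = np.arange(0, max_range, step)
    err = sum(np.minimum((candidates - p) % s, (p - candidates) % s) ** 2
              for p, s in zip(phases, SCALES))
    return candidates[np.argmin(err)]

# A single 3 m scale repeats every 3 m, but the three scales together
# disambiguate position over their combined range (here, 105 m).
pos = 41.0
print(round(decode(encode(pos)), 2))  # 41.0
```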

Mobile robots and animals alike must effectively navigate their environments in order to achieve their goals. For animals, goal-directed navigation facilitates finding food, seeking shelter, or migrating; similarly, robots perform goal-directed navigation to find a charging station, get out of the rain, or guide a person to a destination. This similarity in tasks extends to the environment as well; increasingly, mobile robots are operating in the same underwater, ground, and aerial environments that animals do.

We have developed a Hierarchical Look-Ahead Trajectory Model (HiLAM) that incorporates the firing pattern of medial entorhinal grid cells in a planning circuit that includes interactions with hippocampus and prefrontal cortex. We show the model's flexibility in representing large real world environments using odometry information obtained from challenging video sequences. We acquire the visual data from a camera mounted on a small tele-operated vehicle.

Spatial navigation requires the processing of complex, disparate and often ambiguous sensory data. The neurocomputations underpinning this vital ability remain poorly understood. Controversy remains as to whether multimodal sensory information must be combined into a unified representation, consistent with Tolman's "cognitive map", or whether differential activation of independent navigation modules suffices to explain observed navigation behaviour.

The head direction (HD) system in mammals contains neurons that fire to represent the direction the animal is facing in its environment. The ability of these cells to reliably track head direction even after the removal of external sensory cues implies that the HD system is calibrated to function effectively using just internal (proprioceptive and vestibular) inputs. Rat pups and other infant mammals display stereotypical warm-up movements prior to locomotion in novel environments, and similar warm-up movements are seen in adult mammals with certain brain lesion-induced motor impairments.
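The calibration problem can be sketched in a few lines: an HD estimate driven purely by internal angular-velocity signals is only as good as its gain, and a mis-calibrated gain makes the internal heading drift steadily away from the true one. The gain error, turn rate, and timestep below are illustrative assumptions:

```python
import math

def integrate_heading(initial, angular_velocities, dt=0.02, gain=1.0):
    """Track head direction from internal angular-velocity signals alone.

    `gain` models the HD system's calibration: with gain != 1 the
    internal estimate diverges from the true heading over time.
    """
    heading = initial
    for w in angular_velocities:
        heading = (heading + gain * w * dt) % (2 * math.pi)
    return heading

# Constant turn at 0.5 rad/s for 10 s of simulated self-motion.
omegas = [0.5] * 500

true_hd = integrate_heading(0.0, omegas, gain=1.0)
drifted = integrate_heading(0.0, omegas, gain=0.9)  # 10% gain error
print(round(abs(true_hd - drifted), 2))  # 0.5
```

Warm-up movements before locomotion would, on this picture, give the system rotations of known extent against which such a gain could be recalibrated.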

To successfully navigate their habitats, many mammals use a combination of two mechanisms, path integration and calibration using landmarks, which together enable them to estimate their location and orientation, or pose. In large natural environments, both these mechanisms are characterized by uncertainty: the path integration process is subject to the accumulation of error, while landmark calibration is limited by perceptual ambiguity. It remains unclear how animals form coherent spatial representations in the presence of such uncertainty.
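The interplay of the two mechanisms can be shown with a toy 1-D simulation: path-integration error grows with distance travelled, while periodic landmark calibration resets it. The noise level, landmark spacing, and travel distance are illustrative assumptions:

```python
import random

def navigate(steps, landmark_every=None, noise=0.05):
    """1-D path integration with noisy self-motion and optional
    landmark resets.

    Without landmarks, odometry error accumulates as a random walk;
    calibrating at known landmarks keeps it bounded.
    """
    random.seed(1)
    true_pos, estimate = 0.0, 0.0
    for step in range(1, steps + 1):
        move = 1.0
        true_pos += move
        estimate += move + random.gauss(0, noise)  # noisy odometry
        if landmark_every and step % landmark_every == 0:
            estimate = true_pos                    # landmark calibration
    return abs(estimate - true_pos)

err_no_landmarks = navigate(1000)
err_with_landmarks = navigate(1000, landmark_every=50)
print(err_with_landmarks < err_no_landmarks)  # True
```

Both sources of uncertainty named in the abstract appear here: accumulating integration error, and (in a richer model) ambiguity over which landmark was observed.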

The CA3 region of the hippocampus has long been proposed as an autoassociative network performing pattern completion on known inputs. The dentate gyrus (DG) region is often proposed as a network performing the complementary function of pattern separation. Neural models of pattern completion and separation generally designate explicit learning phases to encode new information and assume an ideal fixed threshold at which to stop learning new patterns and begin recalling known patterns.
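The completion-versus-separation decision can be sketched as a toy autoassociative memory with a fixed novelty threshold: inputs similar enough to a stored pattern are completed to it, dissimilar inputs are stored as new patterns. The threshold value is an illustrative assumption; the abstract's point is precisely that a single fixed threshold is an idealisation:

```python
import numpy as np

class Autoassociator:
    """Toy autoassociative memory with a fixed novelty threshold."""

    def __init__(self, threshold=0.6):
        self.patterns = []
        self.threshold = threshold

    def present(self, x):
        x = np.sign(np.asarray(x, dtype=float))
        for p in self.patterns:
            overlap = np.mean(p == x)       # similarity in [0, 1]
            if overlap >= self.threshold:
                return p                    # pattern completion (CA3-like)
        self.patterns.append(x)             # pattern separation (DG-like)
        return x

mem = Autoassociator()
stored = mem.present([1, 1, -1, -1, 1, -1, 1, 1])
noisy = [1, 1, -1, -1, 1, -1, 1, -1]        # one bit flipped
recalled = mem.present(noisy)
print(np.array_equal(recalled, stored))  # True: completed to the original
```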
