The proposed dataset is a collection of pedestrian navigation sequences combining visual and spatial information. The sequences capture situations encountered by a pedestrian walking in an urban outdoor environment, such as moving along a sidewalk, navigating through a crowd, or crossing a street when the pedestrian traffic light is green. The acquired data are timestamped RGB-D images associated with GPS and inertial data (acceleration, rotation).
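As a rough illustration, one synchronized sample from such a dataset could be modeled as a record holding the timestamp, the RGB-D frame references, and the GPS and inertial readings. The field names and types below are assumptions for illustration only, not the authors' actual schema:

```python
from dataclasses import dataclass

# Hypothetical sketch of one synchronized dataset sample; the schema
# is an assumption, not the published dataset's actual format.
@dataclass
class NavigationSample:
    timestamp: float   # seconds since sequence start
    rgb_path: str      # path to the RGB frame
    depth_path: str    # path to the aligned depth frame
    gps: tuple         # (latitude, longitude) in degrees
    accel: tuple       # accelerometer reading (ax, ay, az), m/s^2
    gyro: tuple        # gyroscope reading (wx, wy, wz), rad/s

sample = NavigationSample(
    timestamp=12.34,
    rgb_path="seq01/rgb/000123.png",
    depth_path="seq01/depth/000123.png",
    gps=(48.8566, 2.3522),
    accel=(0.1, 9.8, 0.2),
    gyro=(0.0, 0.01, 0.0),
)
print(sample.timestamp)  # 12.34
```

Keeping all modalities in one timestamped record makes it straightforward to align the visual stream with the GPS and inertial streams during playback or evaluation.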
Blindness affects millions of people worldwide, leading to difficulties in daily travel and a loss of independence due to a lack of spatial information. This article proposes a new navigation aid to help people with severe blindness reach their destination. Blind people are guided by a short 3D spatialised sound that indicates the target point to follow.
Introduction: Visual-to-auditory sensory substitution devices are assistive devices for the blind that convert visual images into auditory images (or soundscapes) by mapping visual features to acoustic cues. To convey spatial information with sounds, several sensory substitution devices use a Virtual Acoustic Space (VAS) based on Head Related Transfer Functions (HRTFs) to synthesize the natural acoustic cues used for sound localization. However, the perception of elevation is known to be inaccurate with generic spatialization, since elevation cues rely on notches in the audio spectrum that are specific to each individual.
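The core VAS operation described above can be sketched as convolving a mono source with a pair of head-related impulse responses (HRIRs), one per ear. The HRIRs below are crude synthetic placeholders (a pure interaural delay and level difference); a real system would load measured, ideally individualized, HRTFs for the target direction:

```python
import numpy as np

# Minimal sketch of virtual-acoustic-space rendering: convolve a mono
# source with left/right head-related impulse responses (HRIRs).
def spatialize(mono, hrir_left, hrir_right):
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right])  # binaural (2, N) signal

fs = 44100
t = np.arange(fs // 10) / fs
beep = np.sin(2 * np.pi * 440 * t)       # 100 ms, 440 Hz tone
# Placeholder HRIRs, NOT measured data: the near ear gets an early,
# louder impulse; the far ear a delayed, attenuated one.
hrir_l = np.zeros(32); hrir_l[0] = 1.0
hrir_r = np.zeros(32); hrir_r[20] = 0.6
binaural = spatialize(beep, hrir_l, hrir_r)
print(binaural.shape)  # (2, 4441)
```

This placeholder reproduces only the interaural time and level differences that support azimuth perception; the spectral notches that carry elevation cues are exactly what generic (non-individualized) HRTFs fail to render accurately.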
Computer vision-based clinical gait analysis is an active research topic. However, very few datasets are publicly available, so comparing existing methods with one another is not straightforward. Even when test data are openly accessible, existing databases contain very few test subjects and single-modality measurements, which limits their usefulness.