Animals of several species, including primates, learn the statistical regularities of their environment. In particular, they learn the temporal regularities that occur in streams of visual images. Previous human neuroimaging studies reported discrepant effects of such statistical learning, ranging from stronger occipito-temporal activations for sequences in which image order was fixed, compared with sequences of randomly ordered images, to weaker activations for fixed-order sequences compared with sequences that violated the learned order.
Visual neuroprostheses are still limited, and simulated prosthetic vision (SPV) is used to evaluate the potential and forthcoming functionality of these implants. SPV has been used to evaluate the minimum requirements on visual neuroprosthesis characteristics needed to restore various functions such as reading, object and face recognition, and object grasping. Some of these studies focused on obstacle avoidance, but only a few investigated orientation or navigation abilities with prosthetic vision.
Annu Int Conf IEEE Eng Med Biol Soc, September 2015
In this study, we used a simulation of upcoming low-resolution visual neuroprostheses to evaluate the benefit of embedded computer vision techniques in a wayfinding task. We showed that augmenting the classical phosphene rendering with the basic structure of the environment (displaying the ground plane at a different brightness level) improved both wayfinding performance and cognitive mapping. Despite the low resolution of current and upcoming visual implants, the improvement of these cognitive functions may already be possible with embedded artificial vision algorithms.
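The ground-plane augmentation can be pictured with a short sketch. The Python snippet below is a minimal, hypothetical simulation and not the authors' implementation: it renders a grayscale image as a coarse grid of Gaussian phosphenes and brightens the phosphenes that fall on a supplied ground-plane mask. The function name `simulate_phosphenes` and the parameters `grid`, `ground_mask`, and `ground_gain` are illustrative assumptions, not taken from the original study.

```python
import numpy as np

def simulate_phosphenes(image, grid=(20, 15), ground_mask=None, ground_gain=1.5):
    """Render a grayscale image as a coarse grid of Gaussian phosphenes.

    image       : 2-D float array in [0, 1] (H x W)
    grid        : (cols, rows) of the simulated electrode array
    ground_mask : optional boolean array (H x W) marking ground-plane pixels;
                  phosphenes covering the ground are brightened by ground_gain
    """
    h, w = image.shape
    cols, rows = grid
    out = np.zeros_like(image)

    cell_w, cell_h = w / cols, h / rows
    sigma = 0.25 * min(cell_w, cell_h)           # spatial spread of one phosphene
    yy, xx = np.mgrid[0:h, 0:w]

    for r in range(rows):
        for c in range(cols):
            y0, y1 = int(r * cell_h), int((r + 1) * cell_h)
            x0, x1 = int(c * cell_w), int((c + 1) * cell_w)
            level = image[y0:y1, x0:x1].mean()   # sample local brightness

            # Hypothetical structure cue: boost phosphenes lying on the ground plane.
            if ground_mask is not None and ground_mask[y0:y1, x0:x1].mean() > 0.5:
                level = min(1.0, level * ground_gain)

            cy, cx = (y0 + y1) / 2, (x0 + x1) / 2
            out += level * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))

    return np.clip(out, 0.0, 1.0)

if __name__ == "__main__":
    # Toy scene: a vertical brightness gradient; the lower half stands in for the ground.
    img = np.linspace(0.2, 0.8, 120).reshape(-1, 1) * np.ones((120, 160))
    mask = np.zeros((120, 160), dtype=bool)
    mask[60:, :] = True                          # assume ground occupies the lower half
    rendered = simulate_phosphenes(img, grid=(20, 15), ground_mask=mask)
    print(rendered.shape, rendered.max())
```

In this sketch the structure cue is reduced to a simple brightness gain applied to ground-plane phosphenes; an embedded system would instead have to extract the ground plane from the camera stream before rendering.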