Retinal models are needed to simulate the translation of visual percepts into the neural spike trains of Retinal Ganglion Cells (RGCs), through which visual information is transmitted to the brain. Restoring vision through neural prostheses motivates the development of accurate retinal models. We integrate biologically inspired image features into RGC models, training Linear-Nonlinear models on response data from biological retinae. We show that augmenting the raw image input with retina-inspired image features improves performance: on a smaller set (30 sec. of retina recordings), integrating the features improves approximately $\frac{2}{3}$ of the modeled RGCs; on a larger set (4 min. of recordings), using Spike Triggered Average analysis to localize each RGC in the input images and extract features in a cell-based manner improves all but two of the modeled RGCs.
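The cell-localization step mentioned above relies on the Spike Triggered Average (STA): the mean stimulus in a short window around each of a cell's spikes, whose peak marks the RGC's receptive-field center in the image. A minimal NumPy sketch of the idea (the array shapes, window convention, and synthetic white-noise data below are our own illustration, not the paper's pipeline):

```python
import numpy as np

def spike_triggered_average(stimulus, spikes, window):
    """Average the `window` stimulus frames up to and including each spike.

    stimulus: (T, H, W) array of stimulus frames
    spikes:   (T,) array of spike counts per frame
    window:   number of frames in the averaging window
    """
    sta = np.zeros((window,) + stimulus.shape[1:])
    n = 0
    for t in range(window - 1, stimulus.shape[0]):
        if spikes[t] > 0:
            sta += spikes[t] * stimulus[t - window + 1:t + 1]
            n += spikes[t]
    return sta / n if n > 0 else sta

# Synthetic check: a hypothetical cell that fires whenever one pixel is bright.
rng = np.random.default_rng(0)
stim = rng.standard_normal((2000, 8, 8))       # white-noise frames
spikes = (stim[:, 3, 3] > 1.0).astype(int)     # spike when pixel (3, 3) > 1
sta = spike_triggered_average(stim, spikes, window=5)

# The STA peak in the spike-time frame recovers the receptive-field center.
peak = np.unravel_index(np.argmax(sta[-1]), sta[-1].shape)
```

With the receptive-field center located this way, features can then be extracted from an image patch around `peak` for each cell, which is the cell-based feature extraction the abstract refers to.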


Source: http://dx.doi.org/10.1109/EMBC46164.2021.9629869

