Closed-loop decoder adaptation (CLDA) is an emerging paradigm for improving or maintaining the online performance of brain-machine interfaces (BMIs). Here, we present Likelihood Gradient Ascent (LGA), a novel CLDA algorithm for a Kalman filter (KF) decoder that uses stochastic, gradient-based corrections to update KF parameters during closed-loop BMI operation. LGA's gradient-based paradigm presents a variety of potential advantages over other "batch" CLDA methods, including the ability to update decoder parameters on any time-scale, even on every decoder iteration. Using a closed-loop BMI simulator, we compare the LGA algorithm to the Adaptive Kalman Filter (AKF), a partially gradient-based CLDA algorithm that has been previously tested in non-human primate experiments. In contrast to the AKF's separate mean-squared error objective functions, LGA's update rules are derived directly from a single log likelihood objective, making it one step towards a potentially optimal continuously adaptive CLDA algorithm for BMIs.
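The abstract does not give LGA's actual update rules, but the core idea — a stochastic gradient-ascent step on a log likelihood objective, applied to Kalman filter observation-model parameters on every decoder iteration — can be illustrated with a minimal sketch. The sketch below assumes a linear-Gaussian observation model `y = C x + q`, `q ~ N(0, Q)`, and assumes the user's intended state `x` is known during CLDA (e.g., inferred from task goals); the function names and the learning rate are hypothetical, not from the paper.

```python
import numpy as np

def loglik_grad_C(y, x, C, Q_inv):
    # Gradient of the Gaussian observation log likelihood
    #   log p(y | x) = const - 0.5 * (y - C x)^T Q^{-1} (y - C x)
    # with respect to C, which is Q^{-1} (y - C x) x^T.
    r = y - C @ x
    return Q_inv @ np.outer(r, x)

def lga_step(y, x, C, Q_inv, lr=0.05):
    # One stochastic gradient-ascent update of the KF observation
    # matrix, applied on a single decoder iteration (hypothetical
    # learning rate; the paper's actual step sizes are not given here).
    return C + lr * loglik_grad_C(y, x, C, Q_inv)

# Demo: recover a known observation matrix from noisy samples.
rng = np.random.default_rng(0)
C_true = np.array([[1.0, 0.5],
                   [-0.3, 0.8]])   # "true" neural tuning (toy values)
C_est = np.zeros((2, 2))           # decoder starts uninformed
Q_inv = np.eye(2)                  # assume unit observation noise for the sketch
for _ in range(2000):
    x = rng.normal(size=2)                      # intended kinematic state
    y = C_true @ x + 0.1 * rng.normal(size=2)   # observed neural activity
    C_est = lga_step(y, x, C_est, Q_inv)
```

Because each update uses only the current sample, the parameters can adapt on any time-scale, in contrast to batch CLDA methods that refit from an accumulated data buffer.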
DOI: http://dx.doi.org/10.1109/EMBC.2013.6610114