Publications by authors named "Sanger T"

In this paper, optimal unsupervised motor learning is defined to be a technique for finding the coordinate system of minimum dimensionality which can adequately describe a particular motor task. An explicit method is provided for learning a stable controller that translates commands within the new coordinate system into motor variables appropriate for plant control. The method makes use of previously described neural network algorithms including the generalized Hebbian algorithm, basis-function trees, and trajectory extension learning.
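The generalized Hebbian algorithm mentioned above is Sanger's online PCA learning rule; a minimal NumPy sketch of that rule alone (illustrative toy example, not the paper's full motor-learning method; the 2-D data and learning rate are assumptions):

```python
import numpy as np

def gha_step(W, x, lr=0.01):
    """One update of the generalized Hebbian algorithm (Sanger's rule).

    W : (m, n) weight matrix whose rows converge toward the first m
        principal components of the (zero-mean) input distribution.
    x : (n,) input sample.
    """
    y = W @ x  # network outputs y = W x
    # dW = lr * (y x^T - lower_triangular(y y^T) W)
    W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W

# toy usage: recover the dominant direction of anisotropic 2-D data
rng = np.random.default_rng(0)
data = rng.normal(size=(5000, 2)) @ np.array([[3.0, 0.0], [0.0, 0.5]])
W = rng.normal(scale=0.1, size=(1, 2))
for x in data:
    gha_step(W, x, lr=0.005)
```

After training, the single weight row aligns with the direction of largest variance (here the first axis) and approaches unit norm, which is the dimensionality-reducing step the abstract builds on.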

A planar 17-muscle model of the monkey arm, based on realistic biomechanical measurements, was simulated on a Symbolics Lisp Machine. The simulator implements the equilibrium point hypothesis for the control of arm movements. Given initial and final desired positions, it generates a minimum-jerk desired trajectory of the hand and uses the backdriving algorithm to determine an appropriate sequence of motor commands to the muscles (Flash 1987; Mussa-Ivaldi et al.
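The minimum-jerk desired trajectory referred to here has a closed form (the fifth-order polynomial of Flash and Hogan); a minimal sketch of just that trajectory generator, independent of the simulator and the backdriving algorithm (function name and sample values are assumptions):

```python
import numpy as np

def minimum_jerk(x0, xf, T, n=101):
    """Minimum-jerk hand trajectory from x0 to xf over duration T.

    The profile 10*s^3 - 15*s^4 + 6*s^5 (s = t/T) is the unique
    polynomial minimizing integrated squared jerk with zero boundary
    velocity and acceleration. Returns an (n, dims) array of positions
    sampled at n uniform times.
    """
    s = np.linspace(0.0, 1.0, n)
    shape = 10 * s**3 - 15 * s**4 + 6 * s**5
    return x0 + (xf - x0) * shape[:, None]

# straight-line hand path from (0, 0) to (0.2, 0.1) metres in 0.5 s
path = minimum_jerk(np.array([0.0, 0.0]), np.array([0.2, 0.1]), 0.5)
```

The path is a straight line in space with a smooth bell-shaped speed profile; halfway through the movement the hand is at the spatial midpoint.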

I describe a new algorithm for approximating continuous functions in high-dimensional input spaces. The algorithm builds a tree-structured network of variable size, which is determined both by the distribution of the input data and by the function to be approximated. Unlike other tree-structured algorithms, learning occurs through completely local mechanisms and the weights and structure are modified incrementally as data arrives.
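The flavor of such incremental, locally trained tree growth can be sketched as follows. This is a hypothetical minimal illustration, not the paper's basis-function-tree algorithm: leaves hold running-mean local models, all updates are local to the leaf a sample reaches, and a leaf splits on its highest-variance input dimension once it has seen enough data:

```python
import numpy as np

class _Node:
    def __init__(self, dim):
        self.n = 0                  # samples seen while a leaf
        self.mean_y = 0.0           # running mean of targets (local model)
        self.mean_x = np.zeros(dim)
        self.m2_x = np.zeros(dim)   # per-dimension sum of squared deviations
        self.split_dim = None       # None while this node is a leaf
        self.thresh = 0.0
        self.lo = self.hi = None

class IncrementalTree:
    """Illustrative tree-structured approximator grown as data arrives
    (an assumption-laden sketch, not Sanger's published algorithm)."""

    def __init__(self, dim, split_after=50):
        self.root = _Node(dim)
        self.dim = dim
        self.split_after = split_after

    def _leaf(self, x):
        node = self.root
        while node.split_dim is not None:
            node = node.lo if x[node.split_dim] < node.thresh else node.hi
        return node

    def predict(self, x):
        return self._leaf(x).mean_y

    def update(self, x, y):
        node = self._leaf(x)                   # purely local update
        node.n += 1
        node.mean_y += (y - node.mean_y) / node.n
        dx = x - node.mean_x
        node.mean_x += dx / node.n
        node.m2_x += dx * (x - node.mean_x)    # Welford variance update
        if node.n >= self.split_after:         # grow on the widest dimension
            d = int(np.argmax(node.m2_x))
            node.split_dim, node.thresh = d, node.mean_x[d]
            node.lo, node.hi = _Node(self.dim), _Node(self.dim)
            node.lo.mean_y = node.hi.mean_y = node.mean_y

# usage: learn a step function from streaming samples
rng = np.random.default_rng(1)
tree = IncrementalTree(dim=1)
for _ in range(4000):
    x = rng.uniform(-1, 1, size=1)
    tree.update(x, 1.0 if x[0] > 0 else 0.0)
```

Because splits happen only where data accumulates, the tree's size adapts to both the input distribution and the target function, which is the property the abstract emphasizes.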

Nonlinear function approximation is often solved by finding a set of coefficients for a finite number of fixed nonlinear basis functions. However, if the input data are drawn from a high-dimensional space, the number of required basis functions grows exponentially with dimension, leading many to suggest the use of adaptive nonlinear basis functions whose parameters can be determined by iterative methods. The author proposes a technique based on the idea that for most of the data, only a few dimensions of the input may be necessary to compute the desired output function.
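The exponential growth referred to is easy to quantify: a regular grid of fixed basis functions with k centres per input axis requires k**d functions in d dimensions. A minimal illustration (the function name and sample values are assumptions, not from the paper):

```python
def grid_basis_count(k, d):
    """Number of fixed basis functions on a regular grid with k
    centres per input dimension: grows as k**d."""
    return k ** d

# 10 centres per axis quickly becomes infeasible as dimension rises
for d in (1, 2, 3, 6, 10):
    print(f"d = {d:2d}: {grid_basis_count(10, d):,} basis functions")
```

Even at d = 10 the grid needs ten billion basis functions, which motivates adaptive bases that exploit the low effective dimensionality the author describes.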
