Conventional artificial neural networks perform functional mappings from their input space to their output space. The synaptic weights encode information about the mapping in a manner analogous to long-term memory in biological systems. This paper presents a method of designing neural networks in which recurrent signal loops store this knowledge in a manner analogous to short-term memory. The synaptic weights of these networks encode a learning algorithm, which gives the networks the ability to dynamically learn any functional mapping from a (possibly very large) set without changing any synaptic weights. These networks are adaptive dynamic systems: learning is online, taking place continually as part of the network's overall behavior rather than as a separate, externally driven process. We present four higher-order fixed-weight learning networks. Two of these networks have standard backpropagation embedded in their synaptic weights; the other two utilize a more efficient gradient-descent-based learning rule. This new learning scheme was discovered by examining variations in fixed-weight topology. We present empirical tests showing that all of these networks successfully learned functions from both discrete (Boolean) and continuous function sets. The networks were largely robust to perturbations of the synaptic weights; the exception was the recurrent connections used to store information, which required a tight tolerance of 0.5%. We found that the cost of these networks scaled approximately in proportion to the total number of synapses. We consider evolving fixed-weight networks tailored to a specific problem class by analyzing the meta-learning cost surface of the networks presented.
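To make the fixed-weight idea concrete, the following is a minimal, hypothetical sketch (not the paper's higher-order network architecture): the only fixed parameter is a learning-rate "synapse", and everything the system learns about the current target function is held in a recurrent state that plays the role of short-term memory. The multiplicative error-times-input terms stand in for the higher-order synapses the abstract mentions; the scenario (linear target functions, the `run_episode` helper) is illustrative only.

```python
import numpy as np

# Sketch of fixed-weight on-line learning: the learning rule (a gradient step)
# is embedded in fixed dynamics, while the learned mapping lives entirely in
# the recurrent state. No fixed parameter is ever modified during learning.

LR = 0.1                                  # fixed "synaptic weight" encoding the learning rule
rng = np.random.default_rng(0)

def run_episode(target_w, steps=200):
    """Stream samples of one target function and let the recurrent state adapt."""
    w_state = np.zeros(3)                 # recurrent state = short-term memory
    for _ in range(steps):
        x = rng.uniform(-1, 1, size=3)    # input sample
        y = target_w @ x                  # teacher signal for this function
        y_hat = w_state @ x               # network's current prediction
        err = y_hat - y
        w_state = w_state - LR * err * x  # gradient step realized by the fixed dynamics
    return w_state

# The same fixed system adapts to two different target mappings in turn:
for target in (np.array([1.0, -2.0, 0.5]), np.array([-0.3, 0.7, 2.0])):
    learned = run_episode(target)
    print("target:", target, "recovered state:", np.round(learned, 3))
```

In the paper's networks the analogous update is realized by recurrent loops and higher-order (multiplicative) connections whose fixed weights encode either backpropagation or the more efficient gradient-descent rule referred to above.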
DOI: http://dx.doi.org/10.1109/72.750553