This paper illustrates the use of a powerful language, J, that is well suited to simulating neural networks. The use of J is demonstrated by applying it to a gradient descent method for training a multilayer perceptron. It is also shown that the back-propagation algorithm generalizes easily to multilayer networks without any increase in complexity, and that the algorithm can be expressed entirely in an array notation that is directly executable in J. J is a general-purpose language, so its user has a flexibility not available in neural network simulators or in software packages such as MATLAB. Yet, because of its numerous operators, J permits very succinct code, leading to a tremendous decrease in development time.
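To give a flavour of the array formulation the abstract describes, here is a minimal sketch (not the paper's own code) of batch gradient descent for a one-hidden-layer perceptron in J. All names (mp, sigmoid, dsig, step, W1, W2, lr), the XOR data, and the hyperparameters are illustrative assumptions; the output unit carries no bias, so convergence on XOR is typical but not guaranteed.

   mp      =: +/ . *                 NB. matrix (inner) product
   sigmoid =: %@>:@^@-               NB. logistic: 1 % 1 + ^ - y
   dsig    =: * -.                   NB. y * (1 - y), from the activation itself

   lr =: 0.5                         NB. learning rate (assumed value)

   X  =: 1 ,.~ 4 2 $ 0 0 0 1 1 0 1 1 NB. XOR inputs plus a bias column of 1s
   Y  =: 4 1 $ 0 1 1 0               NB. XOR targets
   W1 =: <: +: ? 3 3 $ 0             NB. input-to-hidden weights in (_1,1)
   W2 =: <: +: ? 3 1 $ 0             NB. hidden-to-output weights in (_1,1)

   step =: 3 : 0                     NB. one epoch; returns summed squared error
     h  =. sigmoid X mp W1           NB. hidden activations
     o  =. sigmoid h mp W2           NB. network outputs
     eo =. (Y - o) * dsig o          NB. output-layer delta
     eh =. (eo mp |: W2) * dsig h    NB. hidden-layer delta, back-propagated
     W2 =: W2 + lr * (|: h) mp eo    NB. weight updates via outer products
     W1 =: W1 + lr * (|: X) mp eh
     +/ , *: Y - o
   )

   errs =: step"0 i. 5000            NB. train for 5000 epochs
   _3 {. errs                        NB. last few errors; should be small

Note how the entire backward pass reduces to a handful of matrix products (mp) and elementwise operations on whole arrays, with no explicit loops over units or layers; this is the succinctness the paper attributes to J's array notation.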
DOI: http://dx.doi.org/10.1142/s0129065799000289