Learning capability and storage capacity of two-hidden-layer feedforward networks.

IEEE Trans Neural Netw

School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore.

Published: March 2003

The problem of the necessary complexity of neural networks is of interest in applications. In this paper, the learning capability and storage capacity of feedforward neural networks are considered. We markedly improve recent results by logically introducing neural-network modularity. This paper rigorously proves, by a constructive method, that two-hidden-layer feedforward networks (TLFNs) with 2√((m+2)N) (≪ N) hidden neurons can learn any N distinct samples (x_i, t_i) with any arbitrarily small error, where m is the required number of output neurons. This implies that the number of hidden neurons required in feedforward networks can be reduced significantly compared with previous results. Conversely, a TLFN with Q hidden neurons can store at least Q²/(4(m+2)) arbitrary distinct samples (x_i, t_i) with any desired precision.
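As a quick illustration of the two bounds stated above, the following Python sketch (not from the paper; the function names are illustrative assumptions) evaluates the sufficiency bound 2√((m+2)N) on hidden neurons and the storage lower bound Q²/(4(m+2)):

```python
import math

def tlfn_hidden_neurons(n_samples: int, n_outputs: int) -> int:
    """Hidden neurons sufficient for a two-hidden-layer feedforward
    network (TLFN) to learn n_samples distinct samples, where
    n_outputs = m is the number of output neurons.
    Bound from the paper: 2 * sqrt((m + 2) * N)."""
    return math.ceil(2 * math.sqrt((n_outputs + 2) * n_samples))

def tlfn_storage_capacity(q_hidden: int, n_outputs: int) -> int:
    """Lower bound on the number of distinct samples a TLFN with
    q_hidden hidden neurons can store: Q^2 / (4 * (m + 2))."""
    return q_hidden ** 2 // (4 * (n_outputs + 2))

# Example: N = 10,000 samples, m = 10 output neurons
q = tlfn_hidden_neurons(10_000, 10)
print(q)                              # 693 hidden neurons suffice (<< N)
print(tlfn_storage_capacity(q, 10))   # ~10,005 samples storable
```

Note how the two bounds are dual: for N = 10,000 and m = 10 the sufficiency bound gives Q = 693 hidden neurons, far fewer than N, and substituting Q = 693 back into Q²/(4(m+2)) recovers roughly the original 10,000 samples.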

Source
http://dx.doi.org/10.1109/TNN.2003.809401


