Parallel growing and training of neural networks using output parallelism.

IEEE Trans Neural Netw

Dept. of Electr. and Comput. Eng., Nat. Univ. of Singapore.

Published: 2002

To find an appropriate architecture for a large-scale real-world application automatically and efficiently, a natural approach is to divide the original problem into a set of subproblems. In this paper, we propose a simple neural-network task decomposition method based on output parallelism. With this method, a problem can be divided flexibly into as many subproblems as desired, each consisting of the whole input vector and a fraction of the output vector. Each module (one per subproblem) is responsible for producing the corresponding fraction of the original problem's output vector, so the hidden structures serving different output units are decoupled. These modules can be grown and trained in parallel on parallel processing elements. Combined with a constructive learning algorithm, the method requires neither excessive computation nor any prior knowledge about how to decompose the problem. The feasibility of output parallelism is analyzed and proved, and several benchmark problems are implemented to test the method's validity. The results show that this method can reduce computational time, increase learning speed, and improve generalization accuracy for both classification and regression problems.
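To illustrate the idea of output parallelism, the following is a minimal Python sketch, not the paper's method: the paper grows each module with a constructive learning algorithm, whereas this sketch uses fixed-size scikit-learn MLPs as placeholder modules. Each module receives the whole input vector and is trained only on a fraction of the output vector; the modules are trained on separate processes and their partial predictions are concatenated back into the full output. The function names (train_module, output_parallel_fit, output_parallel_predict) and the choice of MLPRegressor are illustrative assumptions.

    # Sketch of output-parallel task decomposition with fixed-size modules.
    from concurrent.futures import ProcessPoolExecutor

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def train_module(args):
        """Train one module on the whole input and a slice of the outputs."""
        X, y_slice = args
        net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000)
        net.fit(X, y_slice)
        return net

    def output_parallel_fit(X, Y, n_modules):
        """Split the output vector into n_modules fractions; train modules in parallel."""
        output_groups = np.array_split(np.arange(Y.shape[1]), n_modules)
        with ProcessPoolExecutor() as pool:
            modules = list(pool.map(train_module,
                                    [(X, Y[:, cols]) for cols in output_groups]))
        return modules, output_groups

    def output_parallel_predict(modules, output_groups, X):
        """Concatenate each module's partial prediction into the full output vector."""
        n_outputs = sum(len(cols) for cols in output_groups)
        Y_hat = np.empty((X.shape[0], n_outputs))
        for net, cols in zip(modules, output_groups):
            Y_hat[:, cols] = net.predict(X).reshape(X.shape[0], -1)
        return Y_hat

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 5))
        Y = np.column_stack([X @ rng.normal(size=5) for _ in range(4)])  # 4 outputs
        modules, groups = output_parallel_fit(X, Y, n_modules=2)
        print(output_parallel_predict(modules, groups, X).shape)  # (200, 4)

Because each module depends only on the full input and its own output fraction, the modules can be trained independently, which is what makes the decomposition suitable for parallel processing elements.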

Source
DOI: http://dx.doi.org/10.1109/TNN.2002.1000123

Publication Analysis

Top Keywords

output parallelism (12); original problem (8); fraction output (8); output vector (8); output (6); method (6); parallel growing (4); growing training (4); training neural (4); neural networks (4)
