Solving Nonlinearly Separable Classifications in a Single-Layer Neural Network.

Neural Computation

Department of Psychology, Binghamton University, Binghamton, NY 13903, U.S.A.

Published: March 2017

Since the work of Minsky and Papert (1969), it has been understood that single-layer neural networks cannot solve nonlinearly separable classifications (e.g., XOR). We describe and test a novel divergent autoassociative architecture capable of solving nonlinearly separable classifications with a single layer of weights. The proposed network consists of class-specific linear autoassociators. The power of the model comes from treating classification problems as within-class feature prediction rather than directly optimizing a discriminant function. We show unprecedented learning capabilities for a simple, single-layer network (i.e., solving XOR) and demonstrate that the famous limitation in acquiring nonlinearly separable problems is not just about the need for a hidden layer; it is about the choice between directly predicting classes and learning to classify indirectly by predicting features.
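The abstract describes classification via class-specific autoassociation: each class gets its own single-layer autoassociator, and a test item is assigned to the class whose autoassociator reconstructs it best. Below is a minimal Python sketch of that idea on XOR. The details are assumptions for illustration, not the published model: here each autoassociator is a linear map with a bias term and no self-connections, trained by per-item gradient descent on squared reconstruction error.

```python
import numpy as np

# XOR: items and labels
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0, 1, 1, 0])

def train_autoassociator(X_class, lr=0.1, epochs=2000):
    """Fit one layer of weights W (self-connections masked out) and a
    bias b so that W @ x + b reconstructs x for one class's items."""
    n = X_class.shape[1]
    W = np.zeros((n, n))
    b = np.zeros(n)
    mask = 1.0 - np.eye(n)          # forbid copying a feature to itself
    for _ in range(epochs):
        for x in X_class:
            err = (W @ x + b) - x   # reconstruction error
            W -= lr * np.outer(err, x) * mask  # masked LMS update
            b -= lr * err
    return W, b

# One autoassociator per class, trained only on that class's items
models = [train_autoassociator(X[y == c]) for c in (0, 1)]

def classify(x):
    """Pick the class whose autoassociator reconstructs x best."""
    errs = [np.sum(((W @ x + b) - x) ** 2) for W, b in models]
    return int(np.argmin(errs))

for x, label in zip(X, y):
    print(x, "->", classify(x), f"(true: {label})")
```

Under these assumptions the sketch succeeds on XOR because the class {(0,0), (1,1)} is fit by predicting each feature as a copy of the other, while {(0,1), (1,0)} is fit by predicting each feature as the complement of the other; each class's predictor badly misreconstructs the other class's items, so reconstruction error separates the classes with only a single layer of weights.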

Source: http://dx.doi.org/10.1162/NECO_a_00931

