This paper addresses the training of network models from data produced by systems with symmetry properties. It is argued that although general networks are universal approximators, in practice some properties, such as symmetry, are very hard to learn from data. In order to guarantee that the final network will be symmetrical, constraints are developed for two types of models, namely, the multilayer perceptron (MLP) network and the radial basis function (RBF) network. In global modeling problems it becomes crucial to impose conditions for symmetry in order to stand a chance of reproducing symmetry-related phenomena. Sufficient conditions are given for MLP and RBF networks to have a set of fixed points that are symmetrical with respect to the origin of the phase space. In the case of MLP networks such conditions reduce to the absence of bias parameters and the requirement of odd activation functions. This turns out to be important from a dynamical point of view, since some phenomena are only observed in the context of symmetry, which is not a structurally stable property. The results are illustrated using benchmark systems that display symmetry, such as the Duffing-Ueda oscillator and the Lorenz system.
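To make the stated conditions concrete, below is a minimal numerical sketch. The MLP part follows directly from the abstract: no bias parameters and an odd activation (tanh is assumed here as a representative odd activation). The RBF part uses symmetric center pairs with opposite-sign weights, which is one plausible way to obtain an odd map, not necessarily the paper's exact sufficient condition. All layer sizes and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- MLP without biases and with an odd activation ---
W1 = rng.standard_normal((10, 3))   # input -> hidden weights (sizes arbitrary)
W2 = rng.standard_normal((3, 10))   # hidden -> output weights

def mlp(x):
    # No bias terms anywhere; tanh is odd, so the composed map is odd:
    # mlp(-x) = W2 @ tanh(W1 @ (-x)) = -W2 @ tanh(W1 @ x) = -mlp(x)
    return W2 @ np.tanh(W1 @ x)

# --- RBF network with centers in symmetric pairs (illustrative assumption) ---
# For every Gaussian center c with weight w, include -c with weight -w,
# and use no bias term.
C = rng.standard_normal((5, 3))                 # half of the centers
centers = np.vstack([C, -C])                    # symmetric pairs about the origin
w_half = rng.standard_normal(5)
weights = np.concatenate([w_half, -w_half])     # opposite-sign weights

def rbf(x, width=1.0):
    # Gaussian basis functions depend only on ||x - c||, so pairing
    # (c, w) with (-c, -w) makes the map odd: rbf(-x) = -rbf(x).
    d2 = np.sum((centers - x) ** 2, axis=1)
    return weights @ np.exp(-d2 / (2 * width ** 2))

x = rng.standard_normal(3)
print(np.allclose(mlp(-x), -mlp(x)))   # True: the MLP map is odd
print(np.allclose(rbf(-x), -rbf(x)))   # True: the RBF map is odd
```

An odd map immediately gives the fixed-point symmetry mentioned in the abstract: if f(x*) = x*, then f(-x*) = -f(x*) = -x*, so the set of fixed points is symmetrical with respect to the origin of the phase space.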
DOI: 10.1103/PhysRevE.69.026701 (http://dx.doi.org/10.1103/PhysRevE.69.026701)