Integral representations of shallow neural network with rectified power unit activation function.

Neural Netw

Johann Radon Institute, Altenberger Straße 69, A-4040 Linz, Austria; Faculty of Mathematics, University of Vienna, Oskar-Morgenstern-Platz 1, A-1090 Vienna, Austria; Research Platform Data Science @ Uni Vienna, Währinger Straße 29/S6, A-1090 Vienna, Austria.

Published: November 2022

In this paper we characterize the set of functions that can be represented by infinite-width neural networks with the RePU (Rectified Power Unit) activation function max(0,x)^p, when the network coefficients are regularized by an ℓ^q (quasi)norm. Compared to the more well-known ReLU activation function (which corresponds to p=1), RePU activation functions exhibit a greater degree of smoothness, which makes them preferable in several applications. Our main result shows that such representations are possible for a given function if and only if the function is κ-order Lipschitz and its R-norm is finite. This extends earlier work on this topic that has been restricted to the case of the ReLU activation function and coefficient bounds with respect to the ℓ^2 norm. Since ℓ^q regularizations with q<2 are known to promote sparsity, our results also shed light on the ability to obtain sparse neural network representations.
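
As a rough illustration of the objects named in the abstract, the sketch below defines the RePU activation max(0,x)^p, a finite-width shallow network of the form sum_i a_i max(0, <w_i, x> - b_i)^p (a finite analogue of the infinite-width representations studied in the paper), and an ℓ^q (quasi)norm of the outer coefficients a_i. The parameterization (inner weights w_i, biases b_i, outer coefficients a_i), the function names, and the chosen width are illustrative assumptions and do not reproduce the paper's construction or its R-norm.

    import numpy as np

    def repu(x, p=2):
        # Rectified Power Unit activation: max(0, x)**p; p=1 recovers ReLU.
        return np.maximum(0.0, x) ** p

    def shallow_repu_net(x, weights, biases, coeffs, p=2):
        # Finite-width shallow network sum_i a_i * repu(<w_i, x> - b_i, p),
        # a finite analogue of the infinite-width representations in the paper.
        # Shapes: weights (n, d), biases (n,), coeffs (n,), x (d,).
        return coeffs @ repu(weights @ x - biases, p)

    def lq_quasinorm(coeffs, q=1.0):
        # ell^q (quasi)norm of the outer coefficients; q < 1 gives a quasinorm,
        # and q < 2 is the regime the abstract links to sparsity.
        return np.sum(np.abs(coeffs) ** q) ** (1.0 / q)

    # Toy usage: a random width-50 network on a 3-dimensional input.
    rng = np.random.default_rng(0)
    w = rng.standard_normal((50, 3))
    b = rng.standard_normal(50)
    a = rng.standard_normal(50) / 50
    x = rng.standard_normal(3)
    print(shallow_repu_net(x, w, b, a, p=2), lq_quasinorm(a, q=1.0))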

Source
http://dx.doi.org/10.1016/j.neunet.2022.09.005

Publication Analysis

Top Keywords

activation function          16
repu activation               8
relu activation               8
function                      6
activation                    5
integral representations      4
representations shallow       4
shallow neural                4
neural network                4
network rectified             4

