We extend Kolmogorov's Superpositions to the approximation of arbitrary continuous functions via a noniterative approach that can be used by any neural network built on these superpositions. Our approximation algorithm uses a modified dimension-reducing function that allows the number of summands to be increased so as to achieve an error bound commensurate with that of r iterations, for any r. This new variant of Kolmogorov's Superpositions improves on the parallelism inherent in the original by performing highly distributed parallel computations without synchronization. This approach makes implementation markedly easier and more efficient on modern parallel hardware, and thus makes the superpositions a more practical tool.
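For orientation, the classical Kolmogorov superposition theorem represents any continuous $f:[0,1]^n \to \mathbb{R}$ as $f(x_1,\dots,x_n) = \sum_{q=0}^{2n} \Phi_q\!\left(\sum_{p=1}^{n} \varphi_{q,p}(x_p)\right)$, where the inner functions $\varphi_{q,p}$ reduce the $n$-dimensional input to a scalar and the outer summands are mutually independent. That independence is the parallelism the abstract refers to. The sketch below illustrates only that structure, assuming toy stand-in inner and outer functions; it is not the paper's modified dimension-reducing construction or its error analysis.

```python
"""Minimal sketch of the parallel structure of a Kolmogorov-superposition
approximation: each outer summand is evaluated independently, with no
shared state and no synchronization until the final sum. The inner and
outer functions here are illustrative stand-ins, not the paper's."""
from concurrent.futures import ProcessPoolExecutor
import math

N = 2          # input dimension n
Q = 2 * N + 1  # number of outer summands in the classical theorem

def inner(q, x):
    # Toy dimension-reducing inner function: maps the n-dimensional
    # point x to a single real number, indexed by the summand q.
    return sum((p + 1) * math.tanh(x[p] + 0.1 * q) for p in range(N))

def outer(q, t):
    # Toy outer function Phi_q; in practice these would be constructed
    # to approximate the target function f.
    return math.sin(t + q)

def summand(args):
    q, x = args
    return outer(q, inner(q, x))

def approximate(x):
    # The Q summands share nothing, so each can run on a separate worker;
    # the only coordination point is the final sum.
    with ProcessPoolExecutor() as pool:
        return sum(pool.map(summand, ((q, x) for q in range(Q))))

if __name__ == "__main__":
    print(approximate((0.3, 0.7)))
```

Because no summand touches shared state, no locking is required and the work distributes across workers without coordination, which is what computing "without synchronization" buys in practice.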
DOI: http://dx.doi.org/10.1016/j.neunet.2021.07.006