Nearly Exponential Neural Networks Approximation in Lp Spaces
Related papers
Continuity of Approximation by Neural Networks in Lp Spaces
Annals of Operations Research, 2001
Devices such as neural networks typically approximate the elements of some function space X by elements of a nontrivial finite union M of finite-dimensional spaces. It is shown that if X = Lp(Ω) (1 ≤ p < ∞), then for any positive constant Γ and any continuous function φ from X to M, ‖f − φ(f)‖ > ‖f − M‖ + Γ for some f in X. Thus, no continuous finite neural network approximation can be within any positive constant of a best approximation in the Lp-norm.
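In display form (notation as above, with ‖f − M‖ denoting the distance from f to M, i.e., the infimum of ‖f − g‖ over g in M), the statement reads:

```latex
\forall\, \Gamma > 0,\ \ \forall\, \phi \colon X \to M \text{ continuous},\ \ \exists\, f \in X \ :\qquad
\| f - \phi(f) \| \;>\; \| f - M \| + \Gamma .
```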
Approximation of Continuous Functions by Artificial Neural Networks
2019
Author: Zongliang Ji. Advisor: George Todd. An artificial neural network is a biologically-inspired system that can be trained to perform computations. Recently, techniques from machine learning have trained neural networks to perform a variety of tasks. It can be shown that any continuous function can be approximated by an artificial neural network with arbitrary precision. This is known as the universal approximation theorem. In this thesis, we will introduce neural networks and one of the first versions of this theorem, due to Cybenko. He modeled artificial neural networks using sigmoidal functions and used tools from measure theory and functional analysis.
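As a concrete illustration of the theorem the thesis discusses, the sketch below fits a single-hidden-layer sigmoidal network to a continuous target on [0, 1], drawing the inner weights at random and solving for the output coefficients by least squares. The target function, network width, and parameter ranges are assumptions made here for demonstration, not details from the thesis.

```python
# Sketch: fit a single-hidden-layer sigmoidal network
#   G(x) = sum_j alpha_j * sigmoid(w_j * x + b_j)
# to a continuous target on [0, 1]. Inner parameters (w_j, b_j) are drawn at
# random; outer coefficients alpha_j are found by linear least squares.
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

rng = np.random.default_rng(0)
target = lambda x: np.sin(2 * np.pi * x) + 0.3 * x   # any continuous function
x = np.linspace(0.0, 1.0, 400)

n_hidden = 50                                        # illustrative width
w = rng.normal(scale=10.0, size=n_hidden)            # inner weights
b = rng.uniform(-10.0, 10.0, size=n_hidden)          # inner biases
H = sigmoid(np.outer(x, w) + b)                      # hidden-layer outputs

alpha, *_ = np.linalg.lstsq(H, target(x), rcond=None)
approx = H @ alpha
print(f"sup-norm error with {n_hidden} neurons: {np.max(np.abs(approx - target(x))):.4f}")
```

Increasing the width typically drives the sup-norm error toward zero, which is the qualitative content of the universal approximation theorem.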
Approximation results for neural network operators activated by sigmoidal functions
Neural Networks, 2013
In this paper, we study pointwise and uniform convergence, as well as the order of approximation, for a family of linear positive neural network operators activated by certain sigmoidal functions. Only the case of functions of one variable is considered, but it can be expected that our results can be generalized to handle multivariate functions as well. Our approach allows us to extend previously existing results. The order of approximation is studied for functions belonging to suitable Lipschitz classes and using a moment-type approach. The special cases of neural network operators activated by logistic, hyperbolic tangent, and ramp sigmoidal functions are considered. In particular, we show that for C¹ functions, the order of approximation obtained here for our operators with logistic and hyperbolic tangent functions is higher than that established in some previous papers. The case of quasi-interpolation operators constructed with sigmoidal functions is also considered.
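The sketch below implements one commonly cited construction of such operators with the logistic sigmoidal function, using the kernel φ_σ(x) = (σ(x + 1) − σ(x − 1))/2 and normalized sums over the sample values f(k/n). The exact normalization and index range used in the paper may differ; treat this as an assumption-laden illustration rather than the paper's definition.

```python
# Sketch of a quasi-interpolation neural network operator activated by the
# logistic sigmoidal function:
#   phi_sigma(t) = (sigma(t + 1) - sigma(t - 1)) / 2
#   F_n(f)(x)    = sum_k f(k/n) phi_sigma(n x - k) / sum_k phi_sigma(n x - k),
# with k ranging over the integers with a <= k/n <= b (assumed form).
import numpy as np

def sigma(t):                        # logistic sigmoidal function
    return 1.0 / (1.0 + np.exp(-t))

def phi_sigma(t):                    # kernel built from sigma
    return 0.5 * (sigma(t + 1.0) - sigma(t - 1.0))

def F_n(f, n, x, a=0.0, b=1.0):
    k = np.arange(np.ceil(n * a), np.floor(n * b) + 1)
    weights = phi_sigma(n * x[:, None] - k)          # shape (len(x), len(k))
    return (weights @ f(k / n)) / weights.sum(axis=1)

f = lambda t: np.abs(t - 0.5)                        # illustrative Lipschitz target
x = np.linspace(0.0, 1.0, 200)
for n in (10, 40, 160):
    print(n, np.max(np.abs(F_n(f, n, x) - f(x))))    # error shrinks as n grows
```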
Limitations of the approximation capabilities of neural networks with one hidden layer
Advances in Computational Mathematics, 1996
Let s ≥ 1 be an integer and W be the class of all functions having integrable partial derivatives on [0, 1]^s. We are interested in the minimum number of neurons in a neural network with a single hidden layer required in order to provide a mean approximation order of a preassigned ε > 0 to each function in W. We prove that this number cannot be O(ε^(−s) log(1/ε)) if a spline-like localization is required. This cannot be improved even if one allows different neurons to evaluate different activation functions, even depending upon the target function. Nevertheless, for any δ > 0, a network with O(ε^(−s−δ)) neurons can be constructed to provide this order of approximation, with localization. Analogous results are also valid for other L^p norms.
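Restated symbolically (with N(ε) denoting the minimum number of hidden neurons needed for mean approximation order ε under the localization requirement; notation supplied here):

```latex
N(\varepsilon) \neq O\!\left(\varepsilon^{-s}\log(1/\varepsilon)\right),
\qquad\text{yet}\qquad
N(\varepsilon) = O\!\left(\varepsilon^{-s-\delta}\right)\ \text{for every fixed } \delta > 0 .
```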
Approximation Theory and Neural Networks
The main purpose of the "J. Computational Analysis and Applications" is to publish high-quality research articles from all subareas of Computational Mathematical Analysis and its many potential applications and connections to other areas of Mathematical Sciences. Any paper whose approach and proofs are computational, using methods from Mathematical Analysis in the broadest sense, is suitable and welcome for consideration in our journal, except for Applied Numerical Analysis articles. Articles in plain prose, without formulas and proofs, are also excluded. The list of mathematical areas possibly connected with this publication includes, but is not restricted to:
Approximation by series of sigmoidal functions with applications to neural networks
Annali di Matematica Pura ed Applicata, 2013
Approximation by superpositions of a sigmoidal function
Mathematics of Control, Signals, and Systems, 1989
In this paper we demonstrate that finite linear combinations of compositions of a fixed, univariate function and a set of affine functionals can uniformly approximate any continuous function of n real variables with support in the unit hypercube; only mild conditions are imposed on the univariate function. Our results settle an open question about representability in the class of single hidden layer neural networks. In particular, we show that arbitrary decision regions can be arbitrarily well approximated by continuous feedforward neural networks with only a single internal, hidden layer and any continuous sigmoidal nonlinearity. The paper discusses approximation properties of other possible types of nonlinearities that might be implemented by artificial neural networks.
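For reference, the finite linear combinations in question have the standard form (notation supplied here), where σ is the fixed univariate sigmoidal function, y_j ∈ ℝ^n, and α_j, θ_j ∈ ℝ:

```latex
G(x) \;=\; \sum_{j=1}^{N} \alpha_j\, \sigma\!\left( y_j^{\mathsf{T}} x + \theta_j \right),
\qquad x \in [0,1]^n ,
```

and the theorem asserts that such sums are dense in C([0,1]^n) with respect to the sup-norm whenever σ is a continuous sigmoidal function.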
Neural Computation, 2016
The possibility of approximating a continuous function on a compact subset of the real line by a feedforward single hidden layer neural network with a sigmoidal activation function has been studied in many papers. Such networks can approximate an arbitrary continuous function provided that an unlimited number of neurons in a hidden layer is permitted. In this paper, we consider constructive approximation on any finite interval of ℝ by neural networks with only one neuron in the hidden layer. We construct algorithmically a smooth, sigmoidal, almost monotone activation function σ providing approximation to an arbitrary continuous function within any degree of accuracy.
Efficient Approximation of High-Dimensional Functions With Neural Networks
IEEE Transactions on Neural Networks and Learning Systems
In this article, we develop a framework for showing that neural networks can overcome the curse of dimensionality in different high-dimensional approximation problems. Our approach is based on the notion of a catalog network, which is a generalization of a standard neural network in which the nonlinear activation functions can vary from layer to layer as long as they are chosen from a predefined catalog of functions. As such, catalog networks constitute a rich family of continuous functions. We show that under appropriate conditions on the catalog, catalog networks can efficiently be approximated with rectified linear unit-type networks and provide precise estimates on the number of parameters needed for a given approximation accuracy. As special cases of the general results, we obtain different classes of functions that can be approximated with rectified linear unit networks without the curse of dimensionality.
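A minimal sketch of the catalog-network idea described above: a feedforward network whose activation may vary from layer to layer, provided each is drawn from a predefined catalog. The catalog contents, widths, initialization, and class name below are illustrative assumptions, not the article's construction.

```python
# Sketch: a "catalog network" -- each layer applies an affine map followed by
# an activation chosen from a fixed catalog of functions.
import numpy as np

CATALOG = {                      # hypothetical catalog of activations
    "relu": lambda z: np.maximum(z, 0.0),
    "tanh": np.tanh,
    "softplus": lambda z: np.log1p(np.exp(z)),
}

class CatalogNetwork:
    def __init__(self, widths, activations, seed=0):
        assert len(activations) == len(widths) - 1
        rng = np.random.default_rng(seed)
        self.acts = [CATALOG[name] for name in activations]
        self.W = [rng.normal(scale=1.0 / np.sqrt(m), size=(m, n))
                  for m, n in zip(widths[:-1], widths[1:])]
        self.b = [np.zeros(n) for n in widths[1:]]

    def __call__(self, x):
        for W, b, act in zip(self.W, self.b, self.acts):
            x = act(x @ W + b)   # affine map, then the layer's catalog activation
        return x

net = CatalogNetwork(widths=[8, 32, 32, 1], activations=["tanh", "softplus", "relu"])
print(net(np.ones((5, 8))).shape)     # -> (5, 1)
```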
On the approximation by single hidden layer feedforward neural networks with fixed weights
Neural Networks, 2018
Feedforward neural networks have wide applicability in various disciplines of science due to their universal approximation property. Some authors have shown that single hidden layer feedforward neural networks (SLFNs) with fixed weights still possess the universal approximation property provided that the approximated functions are univariate. However, this result places no restriction on the number of neurons in the hidden layer: the larger this number, the more likely the network is to give precise results. In this note, we constructively prove that SLFNs with the fixed weight 1 and two neurons in the hidden layer can approximate any continuous function on a compact subset of the real line. The applicability of this result is demonstrated in various numerical examples. Finally, we show that SLFNs with fixed weights cannot approximate all continuous multivariate functions.
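Concretely, with the inner weights fixed to 1 and two hidden neurons, the approximants considered take roughly the form below (notation supplied here for orientation, not quoted from the paper; the paper's exact parameterization may include an additional constant term):

```latex
N(x) \;=\; c_1\,\sigma(x - \theta_1) \;+\; c_2\,\sigma(x - \theta_2),
\qquad x \in [a, b] \subset \mathbb{R},
```

where σ is the specially constructed sigmoidal activation and c_1, c_2, θ_1, θ_2 depend on the target function and the desired accuracy.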