Continuity of Approximation by Neural Networks in Lp Spaces
Related papers
Networks and the best approximation property
Biological Cybernetics, 1990
Networks can be considered as approximation schemes. Multilayer networks of the backpropagation type can approximate arbitrarily well continuous functions. We prove that networks derived from regularization theory, including Radial Basis Functions, have a similar property. From the point of view of approximation theory, however, the property of approximating continuous functions arbitrarily well is not sufficient for characterizing good approximation schemes. More critical is the property of best approximation. The main result of this paper is that multilayer networks of the type used in backpropagation do not have the best approximation property. For regularization networks (in particular Radial Basis Function networks) we prove existence and uniqueness of the best approximation.
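For reference, the property at issue can be sketched as follows (a hedged paraphrase in standard notation; the symbols are ours, not the paper's):

```latex
% Best approximation property: a subset M of a normed space X has it if every
% f in X admits a minimizer of the distance to M:
\forall f \in X \;\; \exists g^{*} \in M : \quad \|f - g^{*}\| \;=\; \inf_{g \in M} \|f - g\| .
% A radial basis function network with centers t_i computes functions of the form
g(x) \;=\; \sum_{i=1}^{n} c_i \, \varphi\!\left(\|x - t_i\|\right),
% and the paper's result is that such regularization networks admit (unique) best
% approximations, whereas sigmoidal multilayer networks need not.
```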
Nearly Exponential Neural Networks Approximation in Lp Spaces
JOURNAL OF UNIVERSITY OF BABYLON for Pure and Applied Sciences
Neural network approximation is used widely in applications; it is applied to solve many problems in computer science, engineering, physics, etc. The reason for the successful application of neural network approximation is the ability of neural networks to approximate arbitrary functions. In the last 30 years, many papers have shown that any continuous function defined on a compact subset of a Euclidean space of dimension greater than 1 can be approximated uniformly by a neural network with one hidden layer. Here we prove that any real function in L_p(C), where C is a compact and convex subset of a Euclidean space, can be approximated by a sigmoidal neural network with one hidden layer, an approximation we call nearly exponential approximation.
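The type of statement proved here can be written as follows (a hedged sketch in generic notation; the symbols σ, c_i, w_i, θ_i are not taken from the paper):

```latex
% Density of one-hidden-layer sigmoidal networks in L_p(C):
% for f in L_p(C), C compact and convex, and any epsilon > 0, there exist
% N, coefficients c_i, weights w_i and thresholds theta_i with
\Big\| \, f - \sum_{i=1}^{N} c_i \, \sigma\!\left(w_i \cdot x + \theta_i\right) \Big\|_{L_p(C)} \;<\; \varepsilon .
```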
Approximation Theory and Neural Networks
The main purpose of "J. Computational Analysis and Applications" is to publish high-quality research articles from all subareas of computational mathematical analysis and its many potential applications and connections to other areas of the mathematical sciences. Any paper whose approach and proofs are computational, using methods from mathematical analysis in the broadest sense, is suitable for consideration, with the exception of applied numerical analysis articles; plain-word articles without formulas and proofs are also excluded.
Neural networks, approximation theory, and finite precision computation
Neural Networks, 1995
The concepts of universal approximation and best approximation are discussed in relation to artificial neural networks explicitly addressing the fact that networks are typically simulated on computers. In relation to the property of universal approximation, it is shown that networks can be considered as producing a polynomial approximation to the training data, and only a finite number of coefficients of this polynomial can be manipulated. Thus, they are not capable of universal approximation. In relation to the property of best approximation, a short discussion shows that possession, or not, of this property is irrelevant when comparing the approximation abilities of different networks. In summary, existence proofs derived from approximation theory prove to be irrelevant when the numerical limitations imposed by computer simulation are taken into account.
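One way to make the finiteness point concrete (our illustration, not the paper's derivation):

```latex
% Counting argument (illustrative): a network with W parameters, each stored in b bits,
% can realize at most
2^{\,bW}
% distinct functions. A finite family of functions cannot be dense in an
% infinite-dimensional space such as C(K) or L_p(K), so universal approximation
% fails once finite-precision simulation is taken into account.
```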
Approximation of Continuous Functions by Artificial Neural Networks
2019
Ji, Zongliang. Advisor: George Todd. An artificial neural network is a biologically-inspired system that can be trained to perform computations. Recently, techniques from machine learning have trained neural networks to perform a variety of tasks. It can be shown that any continuous function can be approximated by an artificial neural network with arbitrary precision. This is known as the universal approximation theorem. In this thesis, we will introduce neural networks and one of the first versions of this theorem, due to Cybenko. He modeled artificial neural networks using sigmoidal functions and used tools from measure theory and functional analysis.
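Cybenko's version of the theorem can be stated as follows (standard formulation, given here as background in our notation):

```latex
% Cybenko (1989): let sigma be a continuous discriminatory function
% (any continuous sigmoidal function qualifies). Then finite sums
G(x) \;=\; \sum_{j=1}^{N} \alpha_j \, \sigma\!\left(y_j^{\top} x + \theta_j\right)
% are dense in C([0,1]^n): for every f in C([0,1]^n) and epsilon > 0
% there is such a G with
\sup_{x \in [0,1]^n} \left| f(x) - G(x) \right| \;<\; \varepsilon .
```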
Best approximation by Heaviside perceptron networks
Neural Networks, 2000
In Lp-spaces with p ∈ [1, ∞) there exists a best approximation mapping to the set of functions computable by Heaviside perceptron networks with n hidden units; however, for p ∈ (1, ∞) such a best approximation is not unique and cannot be continuous. Keywords: one-hidden-layer networks, Heaviside perceptrons, best approximation, metric projection, continuous selection, approximatively compact.
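In the notation typically used for this result (ours, and hedged accordingly), the approximating set and the mapping in question are:

```latex
% H_n: functions computable by a Heaviside perceptron network with n hidden units,
H_n \;=\; \Big\{ \, x \mapsto \sum_{i=1}^{n} c_i \,\Theta(v_i \cdot x + b_i) \;:\; c_i, b_i \in \mathbb{R},\; v_i \in \mathbb{R}^d \Big\},
% where Theta is the Heaviside step function. A best approximation mapping
% (metric projection) assigns to f in L_p some Pi(f) in H_n with
\| f - \Pi(f) \|_{p} \;=\; \inf_{h \in H_n} \| f - h \|_{p} .
% The paper's result: Pi exists for p in [1, infinity), but for p in (1, infinity)
% it is not unique and cannot be chosen continuously.
```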
On the approximate realization of continuous mappings by neural networks
Neural networks, 1989
In this paper, we prove that any continuous mapping can be approximately realized by Rumelhart-Hinton-Williams' multilayer neural networks with at least one hidden layer whose output functions are sigmoid functions. The starting point of the proof for the one-hidden-layer case is an integral formula recently proposed by Irie-Miyake, and from this the general case (for any number of hidden layers) can be proved by induction. The two-hidden-layer case is also proved using the Kolmogorov-Arnold-Sprecher theorem, and this proof also gives non-trivial realizations.
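As a purely illustrative companion to such existence results (not the Irie-Miyake construction), the sketch below fits a one-hidden-layer sigmoidal network to a continuous target by drawing random hidden weights and solving a least-squares problem for the output layer; the target function and all parameter choices are ours.

```python
import numpy as np

# Target continuous mapping on [0, 1] (illustrative choice).
def f(x):
    return np.sin(2 * np.pi * x) + 0.5 * x

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n_hidden = 50                                 # number of hidden sigmoidal units
w = rng.normal(scale=10.0, size=n_hidden)     # random input weights
b = rng.uniform(-10.0, 10.0, size=n_hidden)   # random biases

x = np.linspace(0.0, 1.0, 400)
H = sigmoid(np.outer(x, w) + b)               # hidden-layer activations, shape (400, n_hidden)

# Fit the output weights by least squares; the result realizes
# sum_i c_i * sigmoid(w_i * x + b_i).
c, *_ = np.linalg.lstsq(H, f(x), rcond=None)

approx = H @ c
print("max abs error on the grid:", np.max(np.abs(approx - f(x))))
```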
Best neural simultaneous approximation
Indonesian Journal of Electrical Engineering and Computer Science
For many years, approximation concepts have been investigated in connection with neural networks because of the many applications of the two topics. Researchers have studied simultaneous approximation in 2-normed spaces and proved essential theorems concerning existence, uniqueness, and degree of best approximation. Here, we define a new 2-norm on an Lp space, giving what we call a quasi 2-normed space. The set of approximants is a space of feedforward neural networks constructed in this paper. Existence and uniqueness of a best neural approximation for a function from this space are proved, and the rate of best approximation is described in terms of the modulus of smoothness.
Replacing points by compacta in neural network approximation
Journal of the Franklin Institute, 2004
It is shown that Cartesian product and pointwise sum with a fixed compact set preserve various approximation-theoretic properties. Results for pointwise sum are proved for F-spaces and so hold for any normed linear space, while the other results hold in general metric spaces. Applications are given to approximation of Lp-functions on the d-dimensional cube, 1 ≤ p < ∞, by linear combinations of halfspace characteristic functions, i.e., by Heaviside perceptron networks.
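Concretely, the approximating family referred to here consists of linear combinations of halfspace indicators (notation ours):

```latex
% Linear combinations of characteristic functions of halfspaces of [0,1]^d,
% i.e. Heaviside perceptron networks:
\mathrm{span}\Big\{ \, x \mapsto \Theta(v \cdot x + b) \;:\; v \in \mathbb{R}^d,\ b \in \mathbb{R} \Big\}
% considered as approximants of f in L_p([0,1]^d), 1 <= p < infinity.
```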
Applied Artificial Higher Order Neural Networks for Control and Recognition
One of the most important problems in the theory of approximation of functions by neural networks is the universal approximation capability of neural networks. In this study, we give theoretical analyses of the universal approximation capability of a special class of three-layer feedforward higher order neural networks based on the concept of approximate identity in the space of continuous multivariate functions. Moreover, we present theoretical analyses of the universal approximation capability of these networks in the spaces of Lebesgue integrable multivariate functions. The methods used in proving our results are based on the concepts of convolution and epsilon-net. The obtained results can be seen as an attempt towards the development of approximation theory by means of neural networks.
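The convolution-based proof strategy mentioned here rests on the standard approximate-identity fact, stated below as background in our notation:

```latex
% If {phi_eps} is an approximate identity (nonnegative, integral 1, mass
% concentrating at the origin as eps -> 0), then the convolutions
(f * \varphi_{\varepsilon})(x) \;=\; \int f(x - y)\, \varphi_{\varepsilon}(y)\, dy \;\longrightarrow\; f
% uniformly on compacta for continuous f, and in L_p norm for f in L_p, 1 <= p < infinity.
% Networks whose hidden units can approximate such kernels inherit this density.
```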