A note on error bounds for approximation in inner product spaces
Related papers
Best Simultaneous Approximation of Finite Set in Inner Product Space
Advances in Pure Mathematics, 2013
In this paper, we give a method for finding the best simultaneous approximation of n arbitrary points from a convex set. First, we introduce a special hyperplane based on those n points; then, using this hyperplane, we define the best approximation of each point and so achieve our purpose.
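As a rough illustration of the simultaneous approximation problem (not the paper's hyperplane construction), the following Python sketch minimizes the largest distance from a point of a closed convex set, here taken to be the Euclidean unit ball, to n given points by projected subgradient descent; the points and the set are made up for the example.

# Minimal sketch (not the paper's construction): best simultaneous
# approximation of n points by a single element of a closed convex set C,
# here C = the Euclidean unit ball, minimizing the largest distance.
import numpy as np

def project_unit_ball(x):
    """Metric projection onto C = {x : ||x|| <= 1}."""
    norm = np.linalg.norm(x)
    return x if norm <= 1.0 else x / norm

def simultaneous_approx(points, steps=2000):
    """Projected subgradient descent on f(x) = max_i ||x - a_i||."""
    x = project_unit_ball(points.mean(axis=0))      # start from the centroid
    for t in range(1, steps + 1):
        dists = np.linalg.norm(points - x, axis=1)
        i = np.argmax(dists)                        # active (farthest) point
        g = (x - points[i]) / max(dists[i], 1e-12)  # subgradient of the max
        x = project_unit_ball(x - g / np.sqrt(t))   # diminishing step size
    return x

points = np.array([[2.0, 0.0], [0.0, 2.0], [1.5, 1.5]])
x_star = simultaneous_approx(points)
print(x_star, np.linalg.norm(points - x_star, axis=1).max())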
Approximation Theory and Neural Networks
The main purpose of "J. Computational Analysis and Applications" is to publish high-quality research articles from all subareas of Computational Mathematical Analysis and its many potential applications and connections to other areas of the Mathematical Sciences. Any paper whose approach and proofs are computational, using methods from Mathematical Analysis in the broadest sense, is suitable and welcome for consideration in our journal, except for Applied Numerical Analysis articles. Plain-word articles without formulas and proofs are also excluded. The list of possibly connected mathematical areas with this publication includes, but is not restricted to:
Networks and the best approximation property
Biological Cybernetics, 1990
Networks can be considered as approximation schemes. Multilayer networks of the backpropagation type can approximate continuous functions arbitrarily well. We prove that networks derived from regularization theory, including Radial Basis Functions, have a similar property. From the point of view of approximation theory, however, the property of approximating continuous functions arbitrarily well is not sufficient for characterizing good approximation schemes. More critical is the property of best approximation. The main result of this paper is that multilayer networks of the type used in backpropagation do not have the best approximation property. For regularization networks (in particular Radial Basis Function networks) we prove existence and uniqueness of the best approximation.
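The existence and uniqueness statement for fixed-centre RBF networks amounts to orthogonal projection onto a finite-dimensional subspace. A minimal Python sketch (centres, widths and the target function are chosen arbitrarily here) computes that best L2 approximation by least squares on a discretized inner product.

# Sketch: with the centers fixed, a Radial Basis Function network spans a
# finite-dimensional subspace, so a unique best L2 approximation exists and
# is given by orthogonal projection; here it is computed by least squares.
# Centers, widths and the target are made up for illustration.
import numpy as np

x = np.linspace(-3, 3, 400)                     # quadrature grid for L2([-3, 3])
f = np.sin(2 * x) + 0.3 * x                     # target function
centers = np.linspace(-3, 3, 8)
width = 0.8

Phi = np.exp(-((x[:, None] - centers[None, :]) / width) ** 2)  # Gaussian units
coeffs, *_ = np.linalg.lstsq(Phi, f, rcond=None)               # projection
best = Phi @ coeffs
print("L2 error of the best RBF approximation:",
      np.sqrt(np.mean((f - best) ** 2)))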
Estimating the Approximation Error in Learning Theory
Analysis and Applications, 2003
Let B be a Banach space and (ℋ,‖·‖ℋ) be a dense, imbedded subspace. For a ∈ B, its distance to the ball of ℋ with radius R (denoted as I(a, R)) tends to zero when R tends to infinity. We are interested in the rate of this convergence. This approximation problem arose from the study of learning theory, where B is the L2 space and ℋ is a reproducing kernel Hilbert space. The class of elements having I(a, R) = O(R^{-r}) with r > 0 is an interpolation space of the couple (B, ℋ). The rate of convergence can often be realized by linear operators. In particular, this is the case when ℋ is the range of a compact, symmetric, and strictly positive definite linear operator on a separable Hilbert space B. For the kernel approximation studied in Learning Theory, the rate depends on the regularity of the kernel function. This yields error estimates for the approximation by reproducing kernel Hilbert spaces. When the kernel is smooth, the convergence is slow and a logarithmic convergence rate is p...
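A hedged numerical sketch of the quantity I(a, R): taking a truncated diagonal model of a compact, symmetric, strictly positive definite operator (the eigenvalue decay and the target coefficients below are assumed purely for illustration), the distance from a to the radius-R ball of ℋ has a closed form up to a Lagrange multiplier, which bisection recovers; the decay of I(a, R) as R grows reflects the regularity of a relative to ℋ.

# Sketch: I(a, R) = dist(a, {h : ||L^{-1} h|| <= R}) when H is the range of a
# compact, symmetric, strictly positive definite operator L with eigenvalues
# lambda_k (a truncated diagonal model, assumed for illustration).  The
# constrained minimizer is h_k = lambda_k^2 a_k / (lambda_k^2 + mu), and the
# multiplier mu is found by bisection on the constraint.
import numpy as np

K = 2000
k = np.arange(1, K + 1)
lam = k ** -1.5          # eigenvalues of L (assumed decay)
a = k ** -1.0            # coefficients of the target element a

def I(a, lam, R):
    """Distance from a to the radius-R ball of H in the ambient norm."""
    def h_norm_sq(mu):   # ||L^{-1} h(mu)||^2 for the Lagrangian minimizer
        return np.sum((lam * a / (lam ** 2 + mu)) ** 2)
    if h_norm_sq(0.0) <= R ** 2:          # a itself lies inside the ball
        return 0.0
    lo, hi = 0.0, 1.0
    while h_norm_sq(hi) > R ** 2:         # bracket the multiplier
        hi *= 2.0
    for _ in range(80):                   # bisection on mu
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if h_norm_sq(mid) > R ** 2 else (lo, mid)
    mu = 0.5 * (lo + hi)
    return np.sqrt(np.sum((mu * a / (lam ** 2 + mu)) ** 2))

for R in [1, 10, 100, 1000]:
    print(R, I(a, lam, R))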
Continuity of Approximation by Neural Networks in Lp Spaces
Annals of Operations Research, 2001
Devices such as neural networks typically approximate the elements of some function space X by elements of a nontrivial finite union M of finite-dimensional spaces. It is shown that if X = Lp(Ω) (1 ≤ p < ∞), then for every positive constant Γ and every continuous map Φ from X to M there is some f in X with ‖f − Φ(f)‖ ≥ ‖f − M‖ + Γ. Thus, no continuous finite neural network approximation can be within any positive constant of a best approximation in the Lp-norm.
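A toy Python illustration of the underlying obstruction (not the paper's Lp construction): the metric projection onto a union of two one-dimensional subspaces of R² is discontinuous at ties, so no continuous map into the union can track best approximations.

# Toy illustration: best approximation onto a finite union of
# finite-dimensional subspaces is inherently discontinuous.  Here M is the
# union of the two coordinate axes in R^2; as f crosses the diagonal, its
# nearest point in M jumps between the axes.
import numpy as np

def best_approx_union_of_axes(f):
    """Nearest point to f in M = (x-axis) ∪ (y-axis)."""
    cand = [np.array([f[0], 0.0]), np.array([0.0, f[1]])]
    return min(cand, key=lambda m: np.linalg.norm(f - m))

for eps in [-1e-3, -1e-6, 1e-6, 1e-3]:
    f = np.array([1.0, 1.0 + eps])
    print(eps, best_approx_union_of_axes(f))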
Bounding the Errors in Constructive Function Approximation
In this paper we study the theoretical limits of finite constructive convex approximations of a given function in a Hilbert space using elements taken from a reduced subset. We also investigate the trade-off between the global error and the partial error during the iterations of the solution. The results obtained constitute a refinement of well established convergence analysis for constructive iterative sequences in Hilbert spaces, with applications in projection pursuit regression and neural network training. What is the minimum bound for the global error assuming the partial error is 0 for all n?
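A minimal sketch of such a constructive convex approximation (the dictionary and target below are synthetic, and the greedy rule is only one of several possible choices): at step n the current iterate is mixed with the dictionary element that most reduces the global error, using the convexity-preserving weight 1/n, and the global error can be watched as the iterations proceed.

# Sketch of a constructive convex approximation in a (discretized) Hilbert
# space: the target lies in the convex hull of a reduced dictionary G, and at
# each iteration the scheme mixes in the dictionary element that most reduces
# the global error.
import numpy as np

rng = np.random.default_rng(0)
dim, n_atoms = 200, 50
G = rng.normal(size=(n_atoms, dim))              # reduced dictionary
w = rng.dirichlet(np.ones(n_atoms))
f = w @ G                                        # target in the convex hull of G

f_n = G[0].copy()                                # start from a single atom
for n in range(2, 200):
    alpha = 1.0 / n                              # convexity-preserving step
    # greedy choice: the atom giving the smallest new global error
    errs = np.linalg.norm((1 - alpha) * f_n + alpha * G - f, axis=1)
    f_n = (1 - alpha) * f_n + alpha * G[np.argmin(errs)]
    if n % 50 == 0:
        print(n, np.linalg.norm(f - f_n))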
Replacing points by compacta in neural network approximation
Journal of the Franklin Institute, 2004
It is shown that Cartesian product and pointwise-sum with a fixed compact set preserve various approximation-theoretic properties. Results for pointwise-sum are proved for F-spaces and so hold for any normed linear space, while the other results hold in general metric spaces. Applications are given to approximation of Lp-functions on the d-dimensional cube, 1 ≤ p < ∞, by linear combinations of halfspace characteristic functions; i.e., by Heaviside perceptron networks.
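A small Python sketch of the approximating family in the simplest case d = 1 (thresholds fixed on a grid, outer weights fit by least squares; all choices here are illustrative): a linear combination of Heaviside perceptron units approximating an Lp function on [0, 1].

# Sketch: approximating an Lp function on the cube by a linear combination of
# halfspace characteristic functions (Heaviside perceptron units).  For
# illustration d = 1, the thresholds are fixed, and the outer weights are fit
# by least squares against a discretized L2 inner product.
import numpy as np

x = np.linspace(0.0, 1.0, 500)
f = np.sin(2 * np.pi * x) * x                           # target in Lp([0, 1])
thresholds = np.linspace(0.0, 1.0, 20)

H = (x[:, None] >= thresholds[None, :]).astype(float)   # Heaviside units
coeffs, *_ = np.linalg.lstsq(H, f, rcond=None)
approx = H @ coeffs

for p in (1, 2):
    err = (np.mean(np.abs(f - approx) ** p)) ** (1.0 / p)
    print(f"L_{p} error with {len(thresholds)} Heaviside units:", err)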