Universal approximation theorem

A feed-forward neural network with one hidden layer can approximate continuous functions.

In the mathematical theory of artificial neural networks, universal approximation theorems are theorems[1][2] of the following form: Given a family of neural networks, for each function $f$ from a certain function space, there exists a sequence of neural networks $\phi_1, \phi_2, \dots$ from the family such that $\phi_n \to f$ according to some criterion. That is, the family of neural networks is dense in the function space.

The most popular version states that feedforward networks with non-polynomial activation functions are dense in the space of continuous functions between two Euclidean spaces, with respect to the compact convergence topology.

Universal approximation theorems are existence theorems: they simply state that such a sequence $\phi_1, \phi_2, \dots \to f$ exists, and do not provide any way to actually find one. Nor do they guarantee that any particular method, such as backpropagation, will find such a sequence; any method for searching the space of neural networks, including backpropagation, may or may not find a converging sequence (for example, backpropagation might get stuck in a local optimum).

Universal approximation theorems are limit theorems: they state that for any $f$ and any criterion of closeness $\epsilon > 0$, if a neural network has enough neurons, then there exists a neural network with that many neurons that approximates $f$ to within $\epsilon$. There is no guarantee that any particular finite size, say 10,000 neurons, is enough.
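
As a concrete numerical illustration of this "enough neurons" statement (a minimal sketch, not part of any theorem), the snippet below fits one-hidden-layer tanh networks of increasing width to a continuous target on a compact interval; the target function, the widths, and the random-feature least-squares fit are all illustrative choices.

```python
# Minimal sketch: one-hidden-layer tanh networks of growing width approximating
# f(x) = sin(3x) on the compact set K = [-1, 1]. Inner weights are random and only
# the output weights are fitted (by least squares); all choices are illustrative.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 512)              # dense grid on K = [-1, 1]
f = np.sin(3.0 * x)                          # continuous target function

for width in [2, 8, 32, 128]:
    A = rng.normal(scale=4.0, size=width)        # inner weights
    b = rng.uniform(-4.0, 4.0, size=width)       # inner biases
    H = np.tanh(np.outer(x, A) + b)              # hidden activations, shape (512, width)
    C, *_ = np.linalg.lstsq(H, f, rcond=None)    # outer weights by least squares
    err = np.max(np.abs(H @ C - f))              # sup-norm error on the grid
    print(f"width={width:4d}  sup error ≈ {err:.4f}")
```

The error typically shrinks as the width grows, which is exactly what the theorems assert must be possible; they do not say how large a width is required for a given tolerance.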

Artificial neural networks are combinations of multiple simple mathematical functions that implement more complicated functions from (typically) real-valued vectors to real-valued vectors. The spaces of multivariate functions that can be implemented by a network are determined by the structure of the network, the set of simple functions, and its multiplicative parameters. A great deal of theoretical work has gone into characterizing these function spaces.

Most universal approximation theorems are in one of two classes. The first quantifies the approximation capabilities of neural networks with an arbitrary number of artificial neurons ("arbitrary width" case) and the second focuses on the case with an arbitrary number of hidden layers, each containing a limited number of artificial neurons ("arbitrary depth" case). In addition to these two classes, there are also universal approximation theorems for neural networks with bounded number of hidden layers and a limited number of neurons in each layer ("bounded depth and bounded width" case).

The first examples concerned the arbitrary width case. George Cybenko proved it in 1989 for sigmoid activation functions.[3] Kurt Hornik, Maxwell Stinchcombe, and Halbert White showed in 1989 that multilayer feed-forward networks with as few as one hidden layer are universal approximators.[1] Hornik also showed in 1991[4] that it is not the specific choice of the activation function but rather the multilayer feed-forward architecture itself that gives neural networks the potential of being universal approximators. Moshe Leshno et al. in 1993[5] and later Allan Pinkus in 1999[6] showed that the universal approximation property is equivalent to having a nonpolynomial activation function.

The arbitrary depth case was also studied by a number of authors, such as Gustaf Gripenberg in 2003,[7] Dmitry Yarotsky,[8] Zhou Lu et al. in 2017,[9] and Boris Hanin and Mark Sellke in 2018,[10] who focused on neural networks with the ReLU activation function. In 2020, Patrick Kidger and Terry Lyons[11] extended those results to neural networks with general activation functions such as tanh, GeLU, or Swish.

One special case of arbitrary depth is that in which each composition component comes from a finite set of mappings. In 2024, Cai[12] constructed a finite set of mappings, named a vocabulary, such that any continuous function can be approximated by composing a sequence from the vocabulary. This is similar to the concept of compositionality in linguistics: the idea that a finite vocabulary of basic elements can be combined via grammar to express an infinite range of meanings.

Bounded depth and bounded width


The bounded depth and bounded width case was first studied by Maiorov and Pinkus in 1999.[13] They showed that there exists an analytic sigmoidal activation function such that two-hidden-layer neural networks with a bounded number of units in the hidden layers are universal approximators.

Guliyev and Ismailov[14] constructed a smooth sigmoidal activation function providing the universal approximation property for two-hidden-layer feedforward neural networks with fewer units in the hidden layers.

Guliyev and Ismailov[15] also constructed single-hidden-layer networks with bounded width that are still universal approximators for univariate functions. However, this does not apply to multivariable functions.

Shen, Yang, and Zhang[16] obtained precise quantitative information on the depth and width required to approximate a target function by deep and wide ReLU neural networks.

Quantitative bounds


The question of the minimal possible width for universality was first studied in 2021: Park et al. obtained the minimum width required for the universal approximation of $L^p$ functions using feed-forward neural networks with ReLU as activation functions.[17] Similar results, which can be directly applied to residual neural networks, were obtained in the same year by Paulo Tabuada and Bahman Gharesifard using control-theoretic arguments.[18][19] In 2023, Cai obtained the optimal minimum width bound for universal approximation.[20]

For the arbitrary depth case, Leonie Papon and Anastasis Kratsios derived explicit depth estimates depending on the regularity of the target function and of the activation function.[21]

The Kolmogorov–Arnold representation theorem is similar in spirit. Indeed, for certain neural network families the Kolmogorov–Arnold theorem can be applied directly to yield a universal approximation theorem. Robert Hecht-Nielsen showed that a three-layer neural network can approximate any continuous multivariate function.[22] This was extended to the discontinuous case by Vugar Ismailov.[23] In 2024, Ziming Liu and co-authors demonstrated a practical application with Kolmogorov–Arnold networks.[24]

Universal approximation results also exist for variants of the basic setting: discontinuous activation functions,[5] noncompact domains,[11][25] certifiable networks,[26] random neural networks,[27] and alternative network architectures and topologies.[11][28]

The universal approximation property of width-bounded networks has been studied as a dual of classical universal approximation results on depth-bounded networks. For input dimension $d_x$ and output dimension $d_y$, the minimum width required for the universal approximation of $L^p$ functions is exactly $\max\{d_x + 1, d_y\}$ (for a ReLU network). More generally, this also holds if both ReLU and a threshold activation function are used.[17]

Universal function approximation on graphs (or rather on graph isomorphism classes) by popular graph convolutional neural networks (GCNs or GNNs) can be made as discriminative as the Weisfeiler–Leman graph isomorphism test.[29] In 2020,[30] a universal approximation theorem result was established by Brüel-Gabrielsson, showing that graph representation with certain injective properties is sufficient for universal function approximation on bounded graphs and restricted universal function approximation on unbounded graphs, with an accompanying $\mathcal{O}(|V| \cdot |E|)$-runtime method that performed at state of the art on a collection of benchmarks (where $V$ and $E$ are the sets of nodes and edges of the graph, respectively).

There are also a variety of results between non-Euclidean spaces[31] and other commonly used architectures and, more generally, algorithmically generated sets of functions, such as the convolutional neural network (CNN) architecture,[32][33] radial basis functions,[34] or neural networks with specific properties.[35][36]

Arbitrary-width case


A spate of papers in the 1980s and 1990s, by George Cybenko, Kurt Hornik, and others, established several universal approximation theorems for arbitrary width and bounded depth.[37][3][38][4] See [39][40][6] for reviews. The following is the most often quoted:

Universal approximation theorem — Let $C(X, \mathbb{R}^m)$ denote the set of continuous functions from a subset $X$ of a Euclidean space $\mathbb{R}^n$ to a Euclidean space $\mathbb{R}^m$. Let $\sigma \in C(\mathbb{R}, \mathbb{R})$. Note that $(\sigma \circ x)_i = \sigma(x_i)$, so $\sigma \circ x$ denotes $\sigma$ applied to each component of $x$.

Then $\sigma$ is not polynomial if and only if for every $n \in \mathbb{N}$, $m \in \mathbb{N}$, compact $K \subseteq \mathbb{R}^n$, $f \in C(K, \mathbb{R}^m)$, and $\varepsilon > 0$ there exist $k \in \mathbb{N}$, $A \in \mathbb{R}^{k \times n}$, $b \in \mathbb{R}^k$, and $C \in \mathbb{R}^{m \times k}$ such that
$$\sup_{x \in K} \|f(x) - g(x)\| < \varepsilon,$$
where $g(x) = C \cdot (\sigma \circ (A \cdot x + b))$.

Also, certain non-continuous activation functions can be used to approximate a sigmoid function, which then allows the above theorem to apply to those functions. For example, the step function works. In particular, this shows that a perceptron network with a single infinitely wide hidden layer can approximate arbitrary functions.
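
As a minimal sketch of this remark (with the target function, interval, and number of units chosen arbitrarily), a single hidden layer of threshold ("step") units can reproduce a continuous univariate function on a compact interval as a staircase:

```python
# Sketch: a one-hidden-layer network with step activation builds a staircase
# approximation of a continuous target on [0, 2]. Target, interval, and the number
# of hidden units (knots) are illustrative choices.
import numpy as np

def step(z):
    return (z >= 0.0).astype(float)               # threshold activation

f = lambda x: np.exp(-x) * np.sin(4 * x)          # continuous target on [0, 2]
knots = np.linspace(0.0, 2.0, 64)                 # one hidden unit per knot
jumps = np.diff(np.concatenate(([0.0], f(knots))))  # output weights = staircase jumps

def g(x):
    # one-hidden-layer network: sum_k jumps[k] * step(x - knots[k])
    H = step(x[:, None] - knots[None, :])
    return H @ jumps

x = np.linspace(0.0, 2.0, 1000)
print("sup error ≈", np.max(np.abs(g(x) - f(x))))   # shrinks as more knots are used
```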

Such an f {\displaystyle f} {\displaystyle f} can also be approximated by a network of greater depth by using the same construction for the first layer and approximating the identity function with later layers.

Proof sketch

It suffices to prove the case where $m = 1$, since uniform convergence in $\mathbb{R}^m$ is just uniform convergence in each coordinate.

Let $F_\sigma$ be the set of all one-hidden-layer neural networks constructed with $\sigma$. Let $C_0(\mathbb{R}^d, \mathbb{R})$ be the set of all continuous functions $\mathbb{R}^d \to \mathbb{R}$ with compact support.

If $\sigma$ is a polynomial of degree $k$, then $F_\sigma$ is contained in the closed subspace of all polynomials of degree at most $k$, so its closure is also contained in it, which is not all of $C_0(\mathbb{R}^d, \mathbb{R})$.

Otherwise, we show that the closure of $F_\sigma$ is all of $C_0(\mathbb{R}^d, \mathbb{R})$. Suppose we can construct arbitrarily good approximations of the ramp function
$$r(x) = \begin{cases} -1 & \text{if } x < -1 \\ x & \text{if } |x| \leq 1 \\ 1 & \text{if } x > 1 \end{cases}$$
Then ramp functions can be combined to construct arbitrary compactly supported continuous functions to arbitrary precision. It remains to approximate the ramp function.

Any of the activation functions commonly used in machine learning can be used to approximate the ramp function directly, or to first approximate the ReLU and then the ramp function.

If $\sigma$ is "squashing", that is, it has limits $\sigma(-\infty) < \sigma(+\infty)$, then one can first affinely scale down its x-axis so that its graph looks like a step function with two sharp "overshoots", then take a linear sum of enough of them to make a "staircase" approximation of the ramp function. With more steps of the staircase, the overshoots smooth out and we get an arbitrarily good approximation of the ramp function.
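
A minimal numerical sketch of this staircase step, assuming the logistic sigmoid as the squashing function and arbitrary choices of the number of steps and the sharpening factor:

```python
# Sketch: a squashing activation (here the logistic sigmoid, written in a numerically
# stable tanh form) is sharpened by scaling its input, and N shifted copies are summed
# to approximate the ramp r on [-1, 1]. N and the sharpness are illustrative.
import numpy as np

sigmoid = lambda z: 0.5 * (1.0 + np.tanh(0.5 * z))   # logistic sigmoid
ramp = lambda x: np.clip(x, -1.0, 1.0)               # target ramp function r

def staircase(x, N=50, sharpness=200.0):
    # shifts t_k spread over [-1, 1]; each sharp sigmoid acts like a step at t_k
    t = -1.0 + (2.0 * np.arange(N) + 1.0) / N
    steps = sigmoid(sharpness * (x[:, None] - t[None, :]))
    return -1.0 + (2.0 / N) * steps.sum(axis=1)

x = np.linspace(-2.0, 2.0, 2001)
print("sup error ≈", np.max(np.abs(staircase(x) - ramp(x))))  # shrinks as N grows
```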

The case where $\sigma$ is a generic non-polynomial function is harder, and the reader is directed to [6].

The above proof has not specified how one might use a ramp function to approximate arbitrary functions in $C_0(\mathbb{R}^n, \mathbb{R})$. A sketch of the proof is that one can first construct flat bump functions, intersect them to obtain spherical bump functions that approximate the Dirac delta function, and then use those to approximate arbitrary functions in $C_0(\mathbb{R}^n, \mathbb{R})$.[41] The original proofs, such as the one by Cybenko, use methods from functional analysis, including the Hahn–Banach and Riesz–Markov–Kakutani representation theorems.

Notice also that the neural network is only required to approximate within a compact set $K$. The proof does not describe how the function would be extrapolated outside of the region.

The problem with polynomials may be removed by allowing the outputs of the hidden layers to be multiplied together (the "pi-sigma networks"), yielding the generalization:[38]

Universal approximation theorem for pi-sigma networks — With any nonconstant activation function, a one-hidden-layer pi-sigma network is a universal approximator.
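
One possible reading of the description above, as a hedged sketch: hidden units compute ordinary activated affine sums, and the output multiplies groups of them together before a final linear combination. The grouping into products and all shapes here are illustrative assumptions, not a fixed convention from the cited work.

```python
# Sketch of a one-hidden-layer "pi-sigma"-style unit: hidden outputs sigma(A x + b)
# are multiplied in groups, then linearly combined. All parameters are random and
# purely illustrative; this shows only the structure, not any approximation result.
import numpy as np

def pi_sigma_forward(x, A, b, groups, c, sigma=np.tanh):
    """x: (n,), A: (k, n), b: (k,), groups: list of index arrays, c: (len(groups),)."""
    h = sigma(A @ x + b)                                # ordinary hidden-layer outputs
    prods = np.array([np.prod(h[g]) for g in groups])   # multiply hidden outputs
    return c @ prods                                    # linear read-out of the products

rng = np.random.default_rng(1)
n, k = 3, 6
A, b = rng.normal(size=(k, n)), rng.normal(size=k)
groups = [np.array([0, 1]), np.array([2, 3, 4]), np.array([5])]
c = rng.normal(size=len(groups))
print(pi_sigma_forward(rng.normal(size=n), A, b, groups, c))
```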

Arbitrary-depth case


The "dual" versions of the theorem consider networks of bounded width and arbitrary depth. A variant of the universal approximation theorem was proved for the arbitrary depth case by Zhou Lu et al. in 2017.[9] They showed that networks of width n + 4 with ReLU activation functions can approximate any Lebesgue-integrable function on _n_-dimensional input space with respect to L 1 {\displaystyle L^{1}} {\displaystyle L^{1}} distance if network depth is allowed to grow. It was also shown that if the width was less than or equal to n, this general expressive power to approximate any Lebesgue integrable function was lost. In the same paper[9] it was shown that ReLU networks with width n + 1 were sufficient to approximate any continuous function of _n_-dimensional input variables.[42] The following refinement, specifies the optimal minimum width for which such an approximation is possible and is due to.[43]

Universal approximation theorem ($L^p$ distance, ReLU activation, arbitrary depth, minimal width) — For any Bochner–Lebesgue $p$-integrable function $f : \mathbb{R}^n \to \mathbb{R}^m$ and any $\varepsilon > 0$, there exists a fully connected ReLU network $F$ of width exactly $d_m = \max\{n+1, m\}$ satisfying
$$\int_{\mathbb{R}^n} \|f(x) - F(x)\|^p \, \mathrm{d}x < \varepsilon.$$
Moreover, there exist a function $f \in L^p(\mathbb{R}^n, \mathbb{R}^m)$ and some $\varepsilon > 0$ for which there is no fully connected ReLU network of width less than $d_m = \max\{n+1, m\}$ satisfying the above approximation bound.

Remark: If the activation is replaced by leaky-ReLU, and the input is restricted to a compact domain, then the exact minimum width is[20] $d_m = \max\{n, m, 2\}$.
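
For concreteness, a sketch of a deep, narrow fully connected ReLU network whose every hidden layer has the minimal width $\max\{n+1, m\}$ from the theorem above. The depth and the random parameters are illustrative assumptions; the theorem only asserts that some network of this shape achieves the bound, and this snippet exhibits the architecture, not the approximation guarantee.

```python
# Sketch: forward pass of a deep narrow ReLU network with hidden width max(n+1, m).
# Depth and parameters are arbitrary illustrative choices.
import numpy as np

def narrow_relu_forward(x, n, m, depth=8, seed=0):
    rng = np.random.default_rng(seed)
    width = max(n + 1, m)                  # minimal width d_m = max{n+1, m}
    dims = [n] + [width] * depth + [m]     # layer widths: input, hidden..., output
    h = x
    for i, (din, dout) in enumerate(zip(dims[:-1], dims[1:])):
        W = rng.normal(scale=1.0 / np.sqrt(din), size=(dout, din))
        bvec = rng.normal(size=dout)
        h = W @ h + bvec
        if i < len(dims) - 2:              # ReLU on hidden layers only
            h = np.maximum(h, 0.0)
    return h

print(narrow_relu_forward(np.ones(3), n=3, m=2))   # hidden width max(4, 2) = 4 throughout
```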

Quantitative refinement: In the case where $f : [0,1]^n \to \mathbb{R}$ (i.e. $m = 1$) and $\sigma$ is the ReLU activation function, the exact depth and width for a ReLU network to achieve $\varepsilon$ error are also known.[44] If, moreover, the target function $f$ is smooth, then the required number of layers and their width can be exponentially smaller.[45] Even if $f$ is not smooth, the curse of dimensionality can be broken if $f$ admits additional "compositional structure".[46][47]

The central result of[11] yields the following universal approximation theorem for networks with bounded width (see also[7] for the first result of this kind).

Universal approximation theorem (uniform non-affine activation, arbitrary depth, constrained width) — Let $\mathcal{X}$ be a compact subset of $\mathbb{R}^d$. Let $\sigma : \mathbb{R} \to \mathbb{R}$ be any non-affine continuous function which is continuously differentiable at at least one point, with nonzero derivative at that point. Let $\mathcal{N}_{d,D:d+D+2}^{\sigma}$ denote the space of feed-forward neural networks with $d$ input neurons, $D$ output neurons, and an arbitrary number of hidden layers each with $d+D+2$ neurons, such that every hidden neuron has activation function $\sigma$ and every output neuron has the identity as its activation function, with input layer $\phi$ and output layer $\rho$. Then given any $\varepsilon > 0$ and any $f \in C(\mathcal{X}, \mathbb{R}^D)$, there exists $\hat{f} \in \mathcal{N}_{d,D:d+D+2}^{\sigma}$ such that
$$\sup_{x \in \mathcal{X}} \left\| \hat{f}(x) - f(x) \right\| < \varepsilon.$$

In other words, $\mathcal{N}$ is dense in $C(\mathcal{X}; \mathbb{R}^D)$ with respect to the topology of uniform convergence.

Quantitative refinement: The number of layers and the width of each layer required to approximate $f$ to $\varepsilon$ precision are known;[21] moreover, the result holds true when $\mathcal{X}$ and $\mathbb{R}^D$ are replaced with any non-positively curved Riemannian manifold.

Certain necessary conditions for the bounded width, arbitrary depth case have been established, but there is still a gap between the known sufficient and necessary conditions.[9][10][48]

Bounded depth and bounded width case


The first result on the approximation capabilities of neural networks with a bounded number of layers, each containing a limited number of artificial neurons, was obtained by Maiorov and Pinkus.[13] They showed that such networks can be universal approximators and that, to achieve this property, two hidden layers are enough.

Universal approximation theorem:[13] — There exists an activation function $\sigma$ which is analytic, strictly increasing and sigmoidal and has the following property: for any $f \in C[0,1]^d$ and $\varepsilon > 0$ there exist constants $d_i, c_{ij}, \theta_{ij}, \gamma_i$, and vectors $\mathbf{w}^{ij} \in \mathbb{R}^d$ for which
$$\left| f(\mathbf{x}) - \sum_{i=1}^{6d+3} d_i \sigma\left( \sum_{j=1}^{3d} c_{ij} \sigma(\mathbf{w}^{ij} \cdot \mathbf{x} - \theta_{ij}) - \gamma_i \right) \right| < \varepsilon$$
for all $\mathbf{x} = (x_1, \dots, x_d) \in [0,1]^d$.

This is an existence result. It says that activation functions providing the universal approximation property for bounded-depth, bounded-width networks exist. Using certain algorithmic and computer programming techniques, Guliyev and Ismailov efficiently constructed such activation functions depending on a numerical parameter. The developed algorithm allows one to compute the activation functions at any point of the real axis instantly. For the algorithm and the corresponding computer code, see [14]. The theoretical result can be formulated as follows.

Universal approximation theorem:[14][15] — Let $[a,b]$ be a finite segment of the real line, $s = b - a$, and $\lambda$ be any positive number. Then one can algorithmically construct a computable sigmoidal activation function $\sigma : \mathbb{R} \to \mathbb{R}$, which is infinitely differentiable, strictly increasing on $(-\infty, s)$, $\lambda$-strictly increasing on $[s, +\infty)$, and satisfies the following properties:

  1. For any $f \in C[a,b]$ and $\varepsilon > 0$ there exist numbers $c_1, c_2, \theta_1$ and $\theta_2$ such that for all $x \in [a,b]$
     $$|f(x) - c_1 \sigma(x - \theta_1) - c_2 \sigma(x - \theta_2)| < \varepsilon$$
  2. For any continuous function $F$ on the $d$-dimensional box $[a,b]^d$ and $\varepsilon > 0$, there exist constants $e_p$, $c_{pq}$, $\theta_{pq}$ and $\zeta_p$ such that the inequality
     $$\left| F(\mathbf{x}) - \sum_{p=1}^{2d+2} e_p \sigma\left( \sum_{q=1}^{d} c_{pq} \sigma(\mathbf{w}^q \cdot \mathbf{x} - \theta_{pq}) - \zeta_p \right) \right| < \varepsilon$$
     holds for all $\mathbf{x} = (x_1, \ldots, x_d) \in [a,b]^d$. Here the weights $\mathbf{w}^q$, $q = 1, \ldots, d$, are fixed as follows:
     $$\mathbf{w}^1 = (1, 0, \ldots, 0), \quad \mathbf{w}^2 = (0, 1, \ldots, 0), \quad \ldots, \quad \mathbf{w}^d = (0, 0, \ldots, 1).$$
     In addition, all the coefficients $e_p$, except one, are equal.

Here "$\sigma : \mathbb{R} \to \mathbb{R}$ is $\lambda$-strictly increasing on some set $X$" means that there exists a strictly increasing function $u : X \to \mathbb{R}$ such that $|\sigma(x) - u(x)| \leq \lambda$ for all $x \in X$. Clearly, a $\lambda$-increasing function behaves like a usual increasing function as $\lambda$ gets small. In the "depth-width" terminology, the above theorem says that for certain activation functions depth-$2$ width-$2$ networks are universal approximators for univariate functions and depth-$3$ width-$(2d+2)$ networks are universal approximators for $d$-variable functions ($d > 1$).
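
A sketch of the forward pass of this fixed-weight architecture may help fix the shapes. Here a plain logistic-type sigmoid stands in for the specially constructed activation of [14], the parameters are random, and the constraint that all but one of the $e_p$ are equal is not enforced; the snippet shows only the network's structure (a first hidden layer of $d(2d+2)$ coordinate-wise units and a second hidden layer of $2d+2$ units), not its approximation property.

```python
# Sketch: forward pass of the fixed-weight two-hidden-layer architecture above,
# with a placeholder sigmoid and random parameters (illustrative only).
import numpy as np

def sigma(z):                       # placeholder for the constructed activation of [14]
    return 0.5 * (1.0 + np.tanh(0.5 * z))

def fixed_weight_forward(x, c, theta, zeta, e):
    """x: (d,); c, theta: (2d+2, d); zeta, e: (2d+2,)."""
    # first hidden layer: unit (p, q) sees only coordinate x_q, since w^q is a basis vector
    inner = sigma(x[None, :] - theta)               # shape (2d+2, d)
    # second hidden layer: 2d+2 units
    outer = sigma((c * inner).sum(axis=1) - zeta)
    return e @ outer                                # scalar output

d = 3
rng = np.random.default_rng(2)
c, theta = rng.normal(size=(2 * d + 2, d)), rng.normal(size=(2 * d + 2, d))
zeta, e = rng.normal(size=2 * d + 2), rng.normal(size=2 * d + 2)
print(fixed_weight_forward(rng.uniform(size=d), c, theta, zeta, e))
```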

  1. ^ a b Hornik, Kurt; Stinchcombe, Maxwell; White, Halbert (January 1989). "Multilayer feedforward networks are universal approximators". Neural Networks. 2 (5): 359–366. doi:10.1016/0893-6080(89)90020-8.
  2. ^ Balázs Csanád Csáji (2001) Approximation with Artificial Neural Networks; Faculty of Sciences; Eötvös Loránd University, Hungary
  3. ^ a b Cybenko, G. (1989). "Approximation by superpositions of a sigmoidal function". Mathematics of Control, Signals, and Systems. 2 (4): 303–314. Bibcode:1989MCSS....2..303C. CiteSeerX 10.1.1.441.7873. doi:10.1007/BF02551274. S2CID 3958369.
  4. ^ a b Hornik, Kurt (1991). "Approximation capabilities of multilayer feedforward networks". Neural Networks. 4 (2): 251–257. doi:10.1016/0893-6080(91)90009-T. S2CID 7343126.
  5. ^ a b Leshno, Moshe; Lin, Vladimir Ya.; Pinkus, Allan; Schocken, Shimon (January 1993). "Multilayer feedforward networks with a nonpolynomial activation function can approximate any function". Neural Networks. 6 (6): 861–867. doi:10.1016/S0893-6080(05)80131-5. S2CID 206089312.
  6. ^ a b c Pinkus, Allan (January 1999). "Approximation theory of the MLP model in neural networks". Acta Numerica. 8: 143–195. Bibcode:1999AcNum...8..143P. doi:10.1017/S0962492900002919. S2CID 16800260.
  7. ^ a b Gripenberg, Gustaf (June 2003). "Approximation by neural networks with a bounded number of nodes at each level". Journal of Approximation Theory. 122 (2): 260–266. doi:10.1016/S0021-9045(03)00078-9.
  8. ^ Yarotsky, Dmitry (October 2017). "Error bounds for approximations with deep ReLU networks". Neural Networks. 94: 103–114. arXiv:1610.01145. doi:10.1016/j.neunet.2017.07.002. PMID 28756334. S2CID 426133.
  9. ^ a b c d Lu, Zhou; Pu, Hongming; Wang, Feicheng; Hu, Zhiqiang; Wang, Liwei (2017). "The Expressive Power of Neural Networks: A View from the Width". Advances in Neural Information Processing Systems. 30. Curran Associates: 6231–6239. arXiv:1709.02540.
  10. ^ a b Hanin, Boris; Sellke, Mark (2018). "Approximating Continuous Functions by ReLU Nets of Minimal Width". arXiv:1710.11278 [stat.ML].
  11. ^ a b c d Kidger, Patrick; Lyons, Terry (July 2020). Universal Approximation with Deep Narrow Networks. Conference on Learning Theory. arXiv:1905.08539.
  12. ^ Yongqiang, Cai (2024). "Vocabulary for Universal Approximation: A Linguistic Perspective of Mapping Compositions". ICML: 5189–5208. arXiv:2305.12205.
  13. ^ a b c Maiorov, Vitaly; Pinkus, Allan (April 1999). "Lower bounds for approximation by MLP neural networks". Neurocomputing. 25 (1–3): 81–91. doi:10.1016/S0925-2312(98)00111-8.
  14. ^ a b c Guliyev, Namig; Ismailov, Vugar (November 2018). "Approximation capability of two hidden layer feedforward neural networks with fixed weights". Neurocomputing. 316: 262–269. arXiv:2101.09181. doi:10.1016/j.neucom.2018.07.075. S2CID 52285996.
  15. ^ a b Guliyev, Namig; Ismailov, Vugar (February 2018). "On the approximation by single hidden layer feedforward neural networks with fixed weights". Neural Networks. 98: 296–304. arXiv:1708.06219. doi:10.1016/j.neunet.2017.12.007. PMID 29301110. S2CID 4932839.
  16. ^ Shen, Zuowei; Yang, Haizhao; Zhang, Shijun (January 2022). "Optimal approximation rate of ReLU networks in terms of width and depth". Journal de Mathématiques Pures et Appliquées. 157: 101–135. arXiv:2103.00502. doi:10.1016/j.matpur.2021.07.009. S2CID 232075797.
  17. ^ a b Park, Sejun; Yun, Chulhee; Lee, Jaeho; Shin, Jinwoo (2021). Minimum Width for Universal Approximation. International Conference on Learning Representations. arXiv:2006.08859.
  18. ^ Tabuada, Paulo; Gharesifard, Bahman (2021). Universal approximation power of deep residual neural networks via nonlinear control theory. International Conference on Learning Representations. arXiv:2007.06007.
  19. ^ Tabuada, Paulo; Gharesifard, Bahman (May 2023). "Universal Approximation Power of Deep Residual Neural Networks Through the Lens of Control". IEEE Transactions on Automatic Control. 68 (5): 2715–2728. doi:10.1109/TAC.2022.3190051. S2CID 250512115. (Erratum: doi:10.1109/TAC.2024.3390099)
  20. ^ a b Cai, Yongqiang (2023-02-01). "Achieve the Minimum Width of Neural Networks for Universal Approximation". ICLR. arXiv:2209.11395.
  21. ^ a b Kratsios, Anastasis; Papon, Léonie (2022). "Universal Approximation Theorems for Differentiable Geometric Deep Learning". Journal of Machine Learning Research. 23 (196): 1–73. arXiv:2101.05390.
  22. ^ Hecht-Nielsen, Robert (1987). "Kolmogorov's mapping neural network existence theorem". Proceedings of International Conference on Neural Networks, 1987. 3: 11–13.
  23. ^ Ismailov, Vugar E. (July 2023). "A three layer neural network can represent any multivariate function". Journal of Mathematical Analysis and Applications. 523 (1): 127096. arXiv:2012.03016. doi:10.1016/j.jmaa.2023.127096. S2CID 265100963.
  24. ^ Liu, Ziming; Wang, Yixuan; Vaidya, Sachin; Ruehle, Fabian; Halverson, James; Soljačić, Marin; Hou, Thomas Y.; Tegmark, Max (2024-05-24). "KAN: Kolmogorov-Arnold Networks". arXiv:2404.19756 [cs.LG].
  25. ^ van Nuland, Teun (2024). "Noncompact uniform universal approximation". Neural Networks. 173. arXiv:2308.03812. doi:10.1016/j.neunet.2024.106181. PMID 38412737.
  26. ^ Baader, Maximilian; Mirman, Matthew; Vechev, Martin (2020). Universal Approximation with Certified Networks. ICLR.
  27. ^ Gelenbe, Erol; Mao, Zhi Hong; Li, Yan D. (1999). "Function approximation with spiked random networks". IEEE Transactions on Neural Networks. 10 (1): 3–9. doi:10.1109/72.737488. PMID 18252498.
  28. ^ Lin, Hongzhou; Jegelka, Stefanie (2018). ResNet with one-neuron hidden layers is a Universal Approximator. Advances in Neural Information Processing Systems. Vol. 30. Curran Associates. pp. 6169–6178.
  29. ^ Xu, Keyulu; Hu, Weihua; Leskovec, Jure; Jegelka, Stefanie (2019). How Powerful are Graph Neural Networks?. International Conference on Learning Representations.
  30. ^ Brüel-Gabrielsson, Rickard (2020). Universal Function Approximation on Graphs. Advances in Neural Information Processing Systems. Vol. 33. Curran Associates.
  31. ^ Kratsios, Anastasis; Bilokopytov, Eugene (2020). Non-Euclidean Universal Approximation (PDF). Advances in Neural Information Processing Systems. Vol. 33. Curran Associates.
  32. ^ Zhou, Ding-Xuan (2020). "Universality of deep convolutional neural networks". Applied and Computational Harmonic Analysis. 48 (2): 787–794. arXiv:1805.10769. doi:10.1016/j.acha.2019.06.004. S2CID 44113176.
  33. ^ Heinecke, Andreas; Ho, Jinn; Hwang, Wen-Liang (2020). "Refinement and Universal Approximation via Sparsely Connected ReLU Convolution Nets". IEEE Signal Processing Letters. 27: 1175–1179. Bibcode:2020ISPL...27.1175H. doi:10.1109/LSP.2020.3005051. S2CID 220669183.
  34. ^ Park, J.; Sandberg, I. W. (1991). "Universal Approximation Using Radial-Basis-Function Networks". Neural Computation. 3 (2): 246–257. doi:10.1162/neco.1991.3.2.246. PMID 31167308. S2CID 34868087.
  35. ^ Yarotsky, Dmitry (2021). "Universal Approximations of Invariant Maps by Neural Networks". Constructive Approximation. 55: 407–474. arXiv:1804.10306. doi:10.1007/s00365-021-09546-1. S2CID 13745401.
  36. ^ Zakwan, Muhammad; d’Angelo, Massimiliano; Ferrari-Trecate, Giancarlo (2023). "Universal Approximation Property of Hamiltonian Deep Neural Networks". IEEE Control Systems Letters: 1. arXiv:2303.12147. doi:10.1109/LCSYS.2023.3288350. S2CID 257663609.
  37. ^ Funahashi, Ken-Ichi (January 1989). "On the approximate realization of continuous mappings by neural networks". Neural Networks. 2 (3): 183–192. doi:10.1016/0893-6080(89)90003-8.
  38. ^ a b Hornik, Kurt; Stinchcombe, Maxwell; White, Halbert (January 1989). "Multilayer feedforward networks are universal approximators". Neural Networks. 2 (5): 359–366. doi:10.1016/0893-6080(89)90020-8.
  39. ^ Haykin, Simon (1998). Neural Networks: A Comprehensive Foundation, Volume 2, Prentice Hall. ISBN 0-13-273350-1.
  40. ^ Hassoun, M. (1995) Fundamentals of Artificial Neural Networks MIT Press, p. 48
  41. ^ Nielsen, Michael A. (2015). "Neural Networks and Deep Learning".
  42. ^ Hanin, B. (2018). Approximating Continuous Functions by ReLU Nets of Minimal Width. arXiv preprint arXiv:1710.11278.
  43. ^ Park, Sejun; Yun, Chulhee; Lee, Jaeho; Shin, Jinwoo (2020-09-28). "Minimum Width for Universal Approximation". ICLR. arXiv:2006.08859.
  44. ^ Shen, Zuowei; Yang, Haizhao; Zhang, Shijun (January 2022). "Optimal approximation rate of ReLU networks in terms of width and depth". Journal de Mathématiques Pures et Appliquées. 157: 101–135. arXiv:2103.00502. doi:10.1016/j.matpur.2021.07.009. S2CID 232075797.
  45. ^ Lu, Jianfeng; Shen, Zuowei; Yang, Haizhao; Zhang, Shijun (January 2021). "Deep Network Approximation for Smooth Functions". SIAM Journal on Mathematical Analysis. 53 (5): 5465–5506. arXiv:2001.03040. doi:10.1137/20M134695X. S2CID 210116459.
  46. ^ Juditsky, Anatoli B.; Lepski, Oleg V.; Tsybakov, Alexandre B. (2009-06-01). "Nonparametric estimation of composite functions". The Annals of Statistics. 37 (3). doi:10.1214/08-aos611. ISSN 0090-5364. S2CID 2471890.
  47. ^ Poggio, Tomaso; Mhaskar, Hrushikesh; Rosasco, Lorenzo; Miranda, Brando; Liao, Qianli (2017-03-14). "Why and when can deep-but not shallow-networks avoid the curse of dimensionality: A review". International Journal of Automation and Computing. 14 (5): 503–519. arXiv:1611.00740. doi:10.1007/s11633-017-1054-2. ISSN 1476-8186. S2CID 15562587.
  48. ^ Johnson, Jesse (2019). Deep, Skinny Neural Networks are not Universal Approximators. International Conference on Learning Representations.