Perceptron beyond the limit of capacity
Related papers
On a nonhierarchical generalization of the Perceptron GREM
2021
Abstract. We introduce a nonlinear, nonhierarchical generalization of Derrida's GREM and establish, through a Sanov-type large deviation analysis, both a Boltzmann–Gibbs principle and a Parisi formula for the limiting free energy. In line with the predictions of the Parisi theory, the free energy is given by the minimal value over all Parisi functionals/hierarchical structures into which the original model can be coarse-grained.
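The hierarchical model being generalized here is Derrida's GREM. As a rough numerical illustration only, the sketch below samples a standard two-level GREM and computes its finite-size free energy (1/n) log Z; the parameter names (p1, p2 for the branching exponents, a1, a2 for the variance weights) are our own illustrative choices, not notation from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def grem_free_energy(n, p1, p2, a1, a2, beta):
    """Finite-size free energy (1/n) log Z of a two-level GREM at inverse temperature beta."""
    K1 = int(round(2 ** (n * p1)))  # number of first-level branches
    K2 = int(round(2 ** (n * p2)))  # leaves per branch
    X1 = rng.standard_normal(K1)         # level-1 Gaussian, shared by all leaves of a branch
    X2 = rng.standard_normal((K1, K2))   # level-2 Gaussian, one per leaf
    # Boltzmann log-weights: beta * sqrt(n * a_k) * X_k summed along each branch
    logw = beta * (np.sqrt(n * a1) * X1[:, None] + np.sqrt(n * a2) * X2)
    m = logw.max()
    return (m + np.log(np.exp(logw - m).sum())) / n  # stable log-sum-exp, divided by n

# entropy split p1 + p2 = 1 (2^n leaves in total), variance split a1 + a2 = 1
for beta in (0.5, 1.0, 2.0, 3.0):
    print(f"beta={beta}: f_n ~ {grem_free_energy(20, 0.5, 0.5, 0.5, 0.5, beta):.3f}")
```

With the symmetric split used here the two levels freeze together and the model behaves like Derrida's REM, so below the freezing temperature the estimate should track log 2 + beta^2/4.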
The Transition to Perfect Generalization in Perceptrons
Neural Computation, 1991
Several recent papers (Gardner and Derrida 1989; Györgyi 1990; Sompolinsky et al. 1990) have found, using methods of statistical physics, that a transition to perfect generalization occurs in training a simple perceptron whose weights can only take values ±1. We give a rigorous proof of such a phenomenon. That is, we show, for α = 2.0821, that if at least αn examples are drawn from the uniform distribution on {+1, −1}^n and classified according to a target perceptron w_t ∈ {+1, −1}^n as positive or negative according to whether w_t·x is nonnegative or negative, then the probability is 2^(−Ω(√n)) that there is any other such perceptron consistent with the examples. Numerical results indicate further that perfect generalization holds for α as low as 1.5.
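Because the setup is fully combinatorial, the transition is easy to probe by brute force at small n. The sketch below is our own illustrative code, not the paper's proof: it draws a random target w_t, labels αn uniform ±1 examples, enumerates all 2^n candidate ±1 weight vectors, and estimates the probability that some perceptron other than w_t is consistent with the sample. The function name and the choices n = 12 and trials = 50 are ours.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

def other_consistent_prob(n, alpha, trials=50):
    """Estimate P(some w != w_t in {+-1}^n labels all m = alpha*n examples like w_t)."""
    m = int(alpha * n)
    cands = np.array(list(itertools.product((-1, 1), repeat=n)))  # all 2^n weight vectors
    hits = 0
    for _ in range(trials):
        wt = rng.choice((-1, 1), size=n)              # target perceptron
        X = rng.choice((-1, 1), size=(m, n))          # uniform +-1 examples
        y = np.where(X @ wt >= 0, 1, -1)              # label: nonnegative dot product -> +1
        preds = np.where(X @ cands.T >= 0, 1, -1)     # every candidate's labels, shape (m, 2^n)
        n_consistent = (preds == y[:, None]).all(axis=0).sum()
        hits += (n_consistent - 1) > 0                # subtract w_t itself
    return hits / trials

for alpha in (0.5, 1.0, 1.5, 2.0, 2.5):
    print(f"alpha={alpha}: P(other consistent perceptron) ~ {other_consistent_prob(12, alpha)}")
```

At such tiny sizes the transition is blurred by finite-size effects, but the estimated probability should already drop sharply as α grows past roughly 1.5, in line with the abstract's numerics.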