Circuit Complexity, Proof Complexity, and Polynomial Identity Testing
Related papers
Derandomizing Polynomial Identity Tests Means Proving Circuit Lower Bounds
computational complexity, 2004
We show that derandomizing Polynomial Identity Testing is, essentially, equivalent to proving circuit lower bounds for NEXP. More precisely, we prove that if one can test in polynomial time (or even nondeterministic subexponential time, infinitely often) whether a given arithmetic circuit over the integers computes an identically zero polynomial, then either (i) NEXP ⊄ P/poly or (ii) the Permanent is not computable by polynomial-size arithmetic circuits. We also prove a (partial) converse: if the Permanent requires superpolynomial-size arithmetic circuits, then one can test in subexponential time whether a given arithmetic formula computes an identically zero polynomial. Since Polynomial Identity Testing is a coRP problem, we obtain the following corollary: if RP = P (or even coRP ⊆ ⋂_{ε>0} NTIME(2^{n^ε}), infinitely often), then NEXP is not computable by polynomial-size arithmetic circuits. Thus, establishing that RP = coRP or BPP = P would require proving superpolynomial lower bounds for Boolean or arithmetic circuits. We also show that any derandomization of RNC would yield new circuit lower bounds for a language in NEXP.
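The identity test at the heart of this equivalence is the classic randomized one. A minimal sketch of the Schwartz–Zippel approach (function names and the sampling range are illustrative, not from the paper):

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def probably_identical(p, q, num_vars, degree_bound, trials=20):
    """Schwartz-Zippel identity test: a nonzero polynomial of total
    degree d vanishes at a uniformly random point of S^n with
    probability at most d / |S|, so repeated agreement at random
    points makes p = q overwhelmingly likely."""
    sample_set = range(100 * degree_bound)  # |S| much larger than the degree
    for _ in range(trials):
        point = [random.choice(sample_set) for _ in range(num_vars)]
        if p(*point) != q(*point):
            return False  # a single disagreement certifies p != q
    return True  # agreed at every sampled point: p = q with high probability

# (x + y)^2 and x^2 + 2xy + y^2 are the same polynomial, so the test agrees
lhs = lambda x, y: (x + y) ** 2
rhs = lambda x, y: x * x + 2 * x * y + y * y
```

Derandomizing this test means replacing the random points by a small, deterministically constructed set of evaluation points; that is precisely the task the paper connects to circuit lower bounds.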
On proving circuit lower bounds against the polynomial-time hierarchy: positive and negative results
We consider the problem of proving circuit lower bounds against the polynomial-time hierarchy. We give both positive and negative results. On the positive side, for any fixed integer k > 0, we give an explicit Σ_2^p language, acceptable by a Σ_2^p machine with running time O(n^{k^2+k}), that requires circuit size > n^k. This provides a constructive version of an existence theorem of Kannan [Kan82]. Our main theorem is on the negative side. We give evidence that it is infeasible to give relativizable proofs that any single language in the polynomial-time hierarchy requires superpolynomial circuit size. Our proof techniques are based on the decision-tree version of the Switching Lemma for constant-depth circuits and the Nisan–Wigderson pseudorandom generator.
2011
Polynomial identity testing and arithmetic circuit lower bounds are two central questions in algebraic complexity theory. It is an intriguing fact that these questions are actually related. One of the authors of the present paper has recently proposed a "real τ-conjecture" which is inspired by this connection. The real τ-conjecture states that the number of real roots of a sum of products of sparse univariate polynomials should be polynomially bounded. It implies a superpolynomial lower bound on the size of arithmetic circuits computing the permanent polynomial. In this paper we show that the real τ-conjecture holds true for a restricted class of sums of products of sparse polynomials. This result yields lower bounds for a restricted class of depth-4 circuits: we show that polynomial-size circuits from this class cannot compute the permanent, and we also give a deterministic polynomial identity testing algorithm for the same class of circuits.
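The sparsity-to-real-roots connection behind the τ-conjecture already appears in the univariate case: by Descartes' rule of signs, a polynomial with t nonzero terms has at most t − 1 positive real roots, regardless of its degree. A small illustrative sketch (the helper name is ours, not from the paper):

```python
def descartes_bound(coeffs):
    """Upper bound on the number of positive real roots of
    sum(coeffs[i] * x**i): the number of sign changes in the
    sequence of nonzero coefficients (Descartes' rule of signs)."""
    nonzero = [c for c in coeffs if c != 0]
    return sum(1 for a, b in zip(nonzero, nonzero[1:]) if a * b < 0)

# x^100 - 3x^7 + 5 has three terms, hence at most 2 positive real roots,
# even though its degree is 100.
sparse = [5] + [0] * 6 + [-3] + [0] * 92 + [1]
```

The real τ-conjecture asks whether a comparably polynomial bound survives for sums of products of sparse polynomials, where no such elementary rule applies.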
The Polynomial Method in Circuit Complexity Applied to Algorithm Design (Invited Talk)
In circuit complexity, the polynomial method is a general approach to proving circuit lower bounds in restricted settings. One shows that functions computed by sufficiently restricted circuits are "correlated" in some way with a low-complexity polynomial, where complexity may be measured by the degree of the polynomial or the number of monomials. Then, results limiting the capabilities of low-complexity polynomials are extended to the restricted circuits. Old theorems proved by this method have recently found interesting applications to the design of algorithms for basic problems in the theory of computing. This paper surveys some of these applications, and gives a few new ones.
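In the simplest (exact) instance of the correlation described above, small Boolean gates literally are low-complexity polynomials over the reals. A hedged sketch of that base case (actual lower-bound proofs use subtler approximate or probabilistic polynomials):

```python
def and_poly(bits):
    """AND(x1,...,xn) as the single monomial x1*x2*...*xn
    (one monomial, degree n)."""
    value = 1
    for b in bits:
        value *= b
    return value

def or_poly(bits):
    """OR(x1,...,xn) = 1 - (1-x1)(1-x2)...(1-xn), degree n."""
    value = 1
    for b in bits:
        value *= (1 - b)
    return 1 - value
```

On 0/1 inputs these polynomials agree exactly with the Boolean gates; the polynomial method's power comes from showing that restricted circuits cannot escape such low-complexity representations.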
Fixed-Polynomial Size Circuit Bounds
2009 24th Annual IEEE Conference on Computational Complexity, 2009
In 1982, Kannan showed that Σ_2^p does not have n^k-size circuits for any k. Do smaller classes also admit such circuit lower bounds? Despite several improvements of Kannan's result, we still cannot prove that P^NP does not have linear-size circuits. Work of Aaronson and Wigderson provides strong evidence (the "algebrization" barrier) that current techniques have inherent limitations in this respect. We explore questions about fixed-polynomial size circuit lower bounds around and beyond the algebrization barrier. We find several connections, including
Proof complexity in algebraic systems and bounded depth Frege systems with modular counting
Computational Complexity, 1996
We prove a lower bound of the form N^{Ω(1)} on the degree of polynomials in a Nullstellensatz refutation of the Count_q polynomials over Z_m, where q is a prime not dividing m. In addition, we give an explicit construction of a degree-N^{Ω(1)} design for the Count_q principle over Z_m. As a corollary, using Beame et al. (1994) we obtain a lower bound of the form 2^{N^{Ω(1)}} for the number of formulas in a constant-depth Frege proof of the modular counting principle Count_q^N from instances of the counting principle Count_m^M. We discuss the polynomial calculus proof system and give a method of converting tree-like polynomial calculus derivations into low-degree Nullstellensatz derivations.
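For orientation: a Nullstellensatz refutation of an unsatisfiable polynomial system f_1 = … = f_k = 0 is a list of coefficient polynomials g_i with Σ g_i f_i = 1, and its degree is max deg(g_i f_i). A toy certificate (our own minimal example, not the Count_q system):

```latex
% The system f_1 = x = 0,\; f_2 = x - 1 = 0 has no common root;
% constant coefficients g_1 = 1, g_2 = -1 already certify this:
g_1 f_1 + g_2 f_2 \;=\; 1 \cdot x \;+\; (-1)\cdot(x - 1) \;=\; 1.
% The paper's N^{\Omega(1)} bound says that for the Count_q polynomials
% over Z_m, every such certificate must use some g_i of large degree.
```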
Combinatorial hardness proofs for polynomial evaluation
Lecture Notes in Computer Science, 1998
We exhibit a new method for showing lower bounds for the time complexity of polynomial evaluation procedures given by straight-line programs. Time, denoted by L, is measured in terms of nonscalar arithmetic operations. The time complexity function considered here is L^2. As the main difference with the previously known methods for this problem, our general complexity method is purely combinatorial and does not need number theory or powerful tools from algebraic geometry. Using this method we are able to exhibit new families of polynomials "hard to compute" (this means that the time complexity function L^2 increases at least as c·d for degree d, for some universal constant c > 0). We are also able to present, in a uniform and easy way, almost all known specific families of univariate polynomials which are known to be hard to compute. Our method can also be applied to classical questions of transcendence in number theory and geometry. A list of (old and new) formal power series is given whose transcendence can be proved easily by our method.
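To make the measure concrete: L counts nonscalar operations, i.e. multiplications and divisions involving the variable. Horner's rule spends one nonscalar multiplication per degree, so L = d, while Paterson–Stockmeyer-style schemes with preconditioned coefficients get down to L = O(√d); "hard to compute" means no scheme does asymptotically better, since L^2 = Ω(d). A minimal sketch, with names of our choosing:

```python
def horner(coeffs, x):
    """Evaluate sum(coeffs[i] * x**i) by Horner's rule and count the
    nonscalar multiplications (those involving the variable x)."""
    value = coeffs[-1]
    nonscalar_mults = 0
    for c in reversed(coeffs[:-1]):
        value = value * x + c  # one nonscalar multiplication per step
        nonscalar_mults += 1
    return value, nonscalar_mults

# 2x^3 + x + 5 at x = 3: 2*27 + 3 + 5 = 62, using 3 nonscalar multiplications
```

Scalar operations (additions, multiplications by constants) are free in this model, which is why L, not the total operation count, is the right yardstick for the lower bounds above.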
Bounded Arithmetic and Propositional Proof Complexity
Logic of Computation, 1997
This is a survey of basic facts about bounded arithmetic and about the relationships between bounded arithmetic and propositional proof complexity. We introduce the theories S_2^i and T_2^i of bounded arithmetic and characterize their proof-theoretic strength and their provably total functions in terms of the polynomial-time hierarchy. We discuss other axiomatizations of bounded arithmetic, such as minimization axioms. It is shown that the bounded arithmetic hierarchy collapses if and only if bounded arithmetic proves that the polynomial hierarchy collapses.