John Hitchcock - Academia.edu

Papers by John Hitchcock

Length-Increasing Reductions for PSPACE-Completeness

Lecture Notes in Computer Science, 2013

Kolmogorov Complexity in Randomness Extraction

ACM Transactions on Computation Theory, 2011

We clarify the role of Kolmogorov complexity in the area of randomness extraction. We show that a computable function is an almost randomness extractor if and only if it is a Kolmogorov complexity extractor, thus establishing a fundamental equivalence between two forms of extraction studied in the literature: Kolmogorov extraction and randomness extraction. We present a distribution M_k based on Kolmogorov complexity that is complete for randomness extraction in the sense that a computable function is an almost randomness extractor if and only if it extracts randomness from M_k.
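As background for the equivalence above, the standard definition of a (seeded) randomness extractor can be recalled as follows; this is a textbook-style sketch, not quoted from the paper, and the paper's "almost extractor" relaxes the error and output-length requirements:

```latex
% Ext is a (k, \varepsilon)-extractor if, for every source X on n-bit
% strings with min-entropy at least k, its output is near-uniform:
\mathrm{Ext}\colon \{0,1\}^n \times \{0,1\}^d \to \{0,1\}^m,
\qquad
H_\infty(X) \ge k \;\Longrightarrow\;
\bigl\| \mathrm{Ext}(X, U_d) - U_m \bigr\|_{\mathrm{TV}} \le \varepsilon,
% where U_d and U_m are uniform distributions and
% H_\infty(X) = \min_x \log \tfrac{1}{\Pr[X = x]}.
```

A Kolmogorov complexity extractor, informally, maps strings of sufficiently high Kolmogorov complexity to strings whose complexity rate is close to maximal; the paper shows that, for computable functions, these two notions coincide.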

5 What Is Credible Evidence in Education?: The Role of the What Works Clearinghouse in Informing the Process

What counts as credible evidence in applied research and evaluation practice?, 2009

Effective Strong Dimension, Algorithmic Information, and Computational Complexity

The two most important notions of fractal dimension are Hausdorff dimension, developed by Hausdorff (1919), and packing dimension, developed independently by Tricot (1982) and Sullivan (1984). Both dimensions have the mathematical advantage of being defined from measures, and both have yielded extensive applications in fractal geometry and dynamical systems.
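The two dimensions mentioned can be recalled as follows; these are standard textbook definitions, included for orientation rather than taken from this paper:

```latex
% s-dimensional Hausdorff (outer) measure via \delta-covers:
\mathcal{H}^s(E) = \lim_{\delta \to 0}\, \inf \Bigl\{ \sum_i (\operatorname{diam} U_i)^s
  : E \subseteq \bigcup_i U_i,\ \operatorname{diam} U_i \le \delta \Bigr\},
\qquad
\dim_H(E) = \inf \{ s \ge 0 : \mathcal{H}^s(E) = 0 \}.
% Packing dimension, equivalently characterized as a
% regularized upper box-counting dimension:
\dim_P(E) = \inf \Bigl\{ \sup_i \overline{\dim}_B(E_i)
  : E \subseteq \bigcup_i E_i \Bigr\}.
```

The "effective" versions studied in this line of work constructivize these definitions using resource-bounded gales.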

Partial Bi-immunity, Scaled Dimension, and NP-Completeness

Theory of Computing Systems

The Turing and many-one completeness notions for NP have been previously separated under measure, genericity, and bi-immunity hypotheses on NP. The proofs of all these results rely on the existence of a language in NP with almost everywhere hardness. In this paper we separate the same NP-completeness notions under a partial bi-immunity hypothesis that is weaker and only yields a language in NP that is hard to solve on most strings. This improves the results of Lutz and Mayordomo (Theoretical Computer Science, 1996), Ambos-Spies and Bentzien (Journal of Computer and System Sciences, 2000), and Pavan and Selman (Information and Computation, 2004). The proof of this theorem is a significant departure from previous work. We also use this theorem to separate the NP-completeness notions under a scaled dimension hypothesis on NP.

Exact Learning Algorithms, Betting Games, and Circuit Lower Bounds

ACM Transactions on Computation Theory

This paper extends and improves work of Fortnow and Klivans [5], who showed that if a circuit class C has an efficient learning algorithm in Angluin's model of exact learning via equivalence and membership queries [2], then we have the lower bound EXP^NP ⊄ C. We use entirely different techniques involving betting games [4] to remove the NP oracle and improve the lower bound to EXP ⊄ C. This shows that it is even more difficult to design a learning algorithm for C than the results of Fortnow and Klivans indicated.

Comparing reductions to NP-complete sets

Information and Computation

Under the assumption that NP does not have p-measure 0, we investigate reductions to NP-complete sets and prove the following: (1) Adaptive reductions are more powerful than nonadaptive reductions: there is a problem that is Turing-complete for NP but not truth-table-complete. (2) Strong nondeterministic reductions are more powerful than deterministic reductions: there is a problem that is SNP-complete for NP but not Turing-complete. (3) Every problem that is many-one complete for NP is complete under length-increasing reductions that are computed by polynomial-size circuits. The first item solves one of Lutz and Mayordomo's "Twelve Problems in Resource-Bounded Measure" (1999). We also show that every many-one complete problem for NE is complete under one-to-one, length-increasing reductions that are computed by polynomial-size circuits.

Derandomizing Arthur-Merlin Games and Approximate Counting Implies Exponential-Size Lower Bounds

Computational Complexity

We show that if Arthur-Merlin protocols can be derandomized, then there is a language computable in deterministic exponential time with access to an NP oracle that requires circuits of exponential size. More formally, if every promise problem in prAM, the class of promise problems that have Arthur-Merlin protocols, can be computed by a deterministic polynomial-time algorithm with access to an NP oracle, then there is a language in E^NP that requires circuits of size Ω(2^n/n). The lower bound in the conclusion of our theorem suffices to construct pseudorandom generators with exponential stretch. We also show that the same conclusion holds if the following two related problems can be computed in polynomial time with access to an NP oracle: (i) approximately counting the number of accepted inputs of a circuit, up to multiplicative factors; and (ii) recognizing an approximate lower bound on the number of accepted inputs of a circuit, up to multiplicative factors.

Collapsing and Separating Completeness Notions under Average-Case and Worst-Case Hypotheses

Theory of Computing Systems

This paper presents the following results on sets that are complete for NP. 1. If there is a problem in NP that requires exponential time at almost all lengths, then every many-one NP-complete set is complete under length-increasing reductions that are computed by polynomial-size circuits. 2. If there is a problem in coNP that cannot be solved by polynomial-size nondeterministic circuits, then every many-one complete set is complete under length-increasing reductions that are computed by polynomial-size circuits. 3. If there exists a one-way permutation that is secure against subexponential-size circuits and there is a hard tally language in NP ∩ coNP, then there is a Turing-complete language for NP that is not many-one complete. Our first two results use worst-case hardness hypotheses, whereas earlier work that showed similar results relied on average-case or almost-everywhere hardness assumptions. The use of average-case and worst-case hypotheses in the last result is unique...

NP-Hard Sets Are Exponentially Dense Unless coNP ⊆ NP/poly

Proceedings of the Annual IEEE Conference on Computational Complexity

We show that hard sets S for NP must have exponential density, i.e., |S^{=n}| ≥ 2^{n^ε} for some ε > 0 and infinitely many n, unless coNP ⊆ NP/poly and the polynomial-time hierarchy collapses. This result holds for Turing reductions that make n^{1−ε} queries. In addition we study the instance complexity of NP-hard problems and show that hard sets also have an exponential number of instances that have instance complexity n^σ for some σ > 0. This result also holds for Turing reductions that make n^{1−ε} queries.

Unions of Disjoint NP-Complete Sets

ACM Transactions on Computation Theory

We study the following question: if A and B are disjoint NP-complete sets, then is A ∪ B NP-complete? We provide necessary and sufficient conditions under which the union of disjoint NP-complete sets remains complete.

Unions of Disjoint NP-Complete Sets

Lecture Notes in Computer Science, 2011

We study the following question: if A and B are disjoint NP-complete sets, then is A ∪ B NP-complete? We provide necessary and sufficient conditions under which the union of disjoint NP-complete sets remains complete.

Strong Reductions and Isomorphism of Complete Sets

Lecture Notes in Computer Science, 2007

We study the structure of the polynomial-time complete sets for NP and PSPACE under strong nondeterministic polynomial-time reductions (SNP-reductions). We show the following results.

Exact Learning Algorithms, Betting Games, and Circuit Lower Bounds

Lecture Notes in Computer Science, 2011

This paper extends and improves work of Fortnow and Klivans [5], who showed that if a circuit class C has an efficient learning algorithm in Angluin's model of exact learning via equivalence and membership queries [2], then we have the lower bound EXP^NP ⊄ C. We use entirely different techniques involving betting games [4] to remove the NP oracle and improve the lower bound to EXP ⊄ C. This shows that it is even more difficult to design a learning algorithm for C than the results of Fortnow and Klivans indicated.

An Inclusive Framework for Conceptualizing Integrated Designs in Social and Behavioral Research

Single-case designs technical documentation

In an effort to expand the pool of scientific evidence available for review, the What Works Clearinghouse (WWC) assembled a panel of national experts in single-case design (SCD) and analysis to draft SCD Standards. SCDs are adaptations of interrupted time-series designs and can provide a rigorous experimental evaluation of intervention effects. SCDs can provide a strong basis for establishing causal inference, and these designs are widely used in applied and clinical disciplines in psychology and education, such as school psychology and the field of special education. In this paper, the panel provides an overview of SCDs, specifies the types of questions that SCDs are designed to answer, and discusses the internal validity of SCDs. The panel then proposes SCD Standards to be implemented by the WWC. The Standards are bifurcated into Design and Evidence Standards. The Design Standards evaluate the internal validity of the design. Reviewers assign the categories of "Meets Standards," "Meets Standards with Reservations," and "Does not Meet Standards" to each study based on the Design Standards. Reviewers trained in visual analysis will then apply the Evidence Standards to studies that meet standards with or without reservations, resulting in the categorization of each outcome variable as demonstrating "Strong Evidence," "Moderate Evidence," or "No Evidence." Appended are: (1) Attrition Diagram; and (2) Visual Analysis. (Contains 1 table, 11 figures and 16 footnotes.)

97 Scaled dimension and nonuniform complexity

Apparatus for microscopic detection of hardness

Learning Reductions to Sparse Sets

Lecture Notes in Computer Science, 2013

NP-Hard Sets Are Exponentially Dense Unless coNP ⊆ NP/poly

2008 23rd Annual IEEE Conference on Computational Complexity, 2008

We show that hard sets S for NP must have exponential density, i.e., |S^{=n}| ≥ 2^{n^ε} for some ε > 0 and infinitely many n, unless coNP ⊆ NP/poly and the polynomial-time hierarchy collapses. This result holds for Turing reductions that make n^{1−ε} queries.
