Gate Elimination for Linear Functions and New Feebly Secure Constructions

A Feebly Secure Trapdoor Function

2009

In 1992, A. Hiltgen [1] provided the first constructions of provably (slightly) secure cryptographic primitives, namely feebly one-way functions. These functions are provably harder to invert than to compute, but the complexity (measured as circuit complexity over circuits with arbitrary binary gates) is amplified only by a constant factor (with the factor approaching 2). In traditional cryptography, one-way functions are the basic primitive of private-key and digital-signature schemes, while public-key cryptosystems are constructed from trapdoor functions. We continue Hiltgen's work by providing an example of a feebly secure trapdoor function in which the adversary is guaranteed to spend at least a factor of \(\frac{25}{22}\) more time than every honest participant.
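For orientation, Hiltgen's notion of security can be sketched as a circuit-complexity ratio; the display below is a hedged paraphrase of that measure, not the paper's exact trapdoor-function definition.

```latex
% Sketch of Hiltgen-style "feeble" security, with C(\cdot) denoting circuit
% complexity over the basis of all binary gates (hedged paraphrase, not the
% paper's exact definition).  A family \{f_n\} is feebly one-way of order k if
\liminf_{n\to\infty}\ \frac{C\!\left(f_n^{-1}\right)}{C\!\left(f_n\right)} \;\ge\; k ,
% and the trapdoor construction above guarantees the analogous ratio of
% adversarial to honest complexity to be at least 25/22.
```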

More Constructions of Lossy and Correlation-Secure Trapdoor Functions

Journal of Cryptology, 2013

We propose new and improved instantiations of lossy trapdoor functions (Peikert and Waters, STOC '08), and correlation-secure trapdoor functions (Rosen and Segev, TCC '09). Our constructions widen the set of number-theoretic assumptions upon which these primitives can be based, and are summarized as follows:

Uniqueness is a different story: Impossibility of verifiable random functions from trapdoor permutations

2010

Verifiable random functions (VRFs), first proposed by Micali, Rabin, and Vadhan (FOCS '99), are pseudorandom functions with the additional property that the owner of the seed SK can issue publicly verifiable proofs for statements of the form "f(SK, x) = y", for any input x. Moreover, the output of a VRF is guaranteed to be unique, which means that y = f(SK, x) is the only value that can be proven to be the image of x. Due to these properties, VRFs are a fascinating primitive that has found several theoretical and practical applications. However, despite their popularity, constructing VRFs seems to be a challenging task: only a few constructions based on specific number-theoretic problems are known, and basing a scheme on a general assumption is still an open problem. As a step in this direction, Brakerski, Goldwasser, Rothblum, and Vaikuntanathan (TCC 2009) recently showed that verifiable random functions cannot be constructed from one-way permutations in a black-box way.
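As a reminder of the standard syntax (a hedged sketch of the usual definitions, not text from the paper), the uniqueness property can be stated as follows.

```latex
% Standard VRF interface (sketch): (PK, SK) <- Gen(1^k),
% y = f(SK, x) with proof \pi = Prove(SK, x), and Verify(PK, x, y, \pi) \in \{0,1\}.
% Uniqueness: for every PK (even maliciously chosen) and every input x,
\nexists\ (y_1,\pi_1),(y_2,\pi_2):\quad y_1 \ne y_2 \;\wedge\;
\mathsf{Verify}(PK,x,y_1,\pi_1)=\mathsf{Verify}(PK,x,y_2,\pi_2)=1 .
```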

Constant-Overhead Secure Computation of Boolean Circuits using Preprocessing

Theory of Cryptography, 2013

We present a protocol for securely computing a Boolean circuit C in the presence of a dishonest and malicious majority. The protocol is unconditionally secure, assuming a preprocessing functionality that is not given the inputs. For a large number of players, the work done by each player is the same as computing the circuit in the clear, up to a constant factor. Our protocol is the first to obtain these properties for Boolean circuits. On the technical side, we develop new homomorphic authentication schemes based on asymptotically good codes with an additional multiplication property. We also show a new algorithm for verifying the product of Boolean matrices in quadratic time with exponentially small error probability, where previous methods only achieved constant error.
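For context, the classic randomized check for matrix products is Freivalds' algorithm, sketched below over GF(2); each round runs in O(n^2) but only achieves error 2^-k after k repetitions, which is precisely the gap the paper's new verifier closes. The function name and parameters are illustrative, not taken from the paper.

```python
# Classic Freivalds-style verification of a GF(2) matrix product, shown for
# context only: each round costs O(n^2) and errs with probability <= 1/2, so
# k rounds give error 2^-k at cost O(k n^2).  The paper's verifier, by
# contrast, reaches exponentially small error in quadratic time overall.
import numpy as np

def freivalds_gf2(A, B, C, k=20, rng=None):
    """Probabilistically check C == A*B over GF(2) (illustrative helper)."""
    rng = np.random.default_rng() if rng is None else rng
    n = A.shape[1]
    for _ in range(k):
        r = rng.integers(0, 2, size=n)                 # random 0/1 vector
        # Compare A*(B*r) with C*r, all arithmetic mod 2: O(n^2) per round.
        if np.any((A @ ((B @ r) % 2)) % 2 != (C @ r) % 2):
            return False                               # a witness: C != A*B
    return True                                        # equal w.p. >= 1 - 2^-k

n = 64
A = np.random.randint(0, 2, (n, n))
B = np.random.randint(0, 2, (n, n))
assert freivalds_gf2(A, B, (A @ B) % 2)                # accepts a correct product
```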

Threshold circuit lower bounds on cryptographic functions

2005

In this work, we are interested in non-trivial upper bounds on the spectral norm of binary matrices M from {−1, 1}^{N×N}. It is known that the distributed Boolean function represented by M is hard to compute in various restricted models of computation if the spectral norm is bounded from above by N^{1−ε}, where ε > 0 denotes a fixed constant. For instance, the size of a two-layer threshold circuit (with polynomially bounded weights for the gates in the hidden layer, but unbounded weights for the output gate) grows exponentially fast with n := log N. We prove sufficient conditions on M that imply small spectral norms (and thus high computational complexity in restricted models). Our general results cover specific cases where the matrix M represents a bit (the least significant bit or another fixed bit) of fundamental functions. Functions such as discrete multiplication and division, as well as cryptographic functions such as the Diffie-Hellman function (IEEE Trans. Inform. Theory 22(6) (1976) 644-654) and the decryption function of the Pointcheval cryptosystem, can be addressed by our technique. In order to obtain our results, we take a detour through exponential sums and spectral norms of matrices with complex entries. This method might be considered interesting in its own right.
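As a toy illustration of the quantity involved (not taken from the paper), the snippet below builds the ±1 matrix of the inner-product-mod-2 function and computes its spectral norm, which attains the N^{1/2} benchmark; the helper names are invented for the example.

```python
# Hedged illustration: build the {-1,+1} communication matrix
# M[x,y] = (-1)^{f(x,y)} for a small f and compute its spectral norm ||M||
# (largest singular value).  Small ||M|| relative to N is the hardness
# criterion discussed in the abstract.
import numpy as np

def sign_matrix(f, n):
    N = 1 << n
    M = np.empty((N, N))
    for x in range(N):
        for y in range(N):
            M[x, y] = -1.0 if f(x, y) else 1.0
    return M

def ip2(x, y):
    """Inner product mod 2 of the bit strings encoded by x and y."""
    return bin(x & y).count("1") & 1

n = 6
N = 1 << n
M = sign_matrix(ip2, n)
print(np.linalg.norm(M, 2), np.sqrt(N))   # IP2 attains ||M|| = sqrt(N) = 2^{n/2}
```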

Exact Logic Minimization and Multiplicative Complexity of Concrete Algebraic and Cryptographic Circuits

International journal on advances in intelligent systems, 2013

Two very important NP-hard problems in the area of computational complexity are Matrix Multiplication (MM) and Circuit Optimization. Solving particular cases of such problems yields improvements in many other problems, as they are core subroutines implemented in many other algorithms. However, obtaining optimal solutions is an intractable task, since the space to explore for each problem is exponentially large. All suggested methodologies rely on well-chosen heuristics, selected according to the topology of the specific problem. Such heuristics may yield efficient and acceptable solutions, but they do not guarantee that nothing better can be done. In this paper, we suggest a general framework for obtaining solutions to such problems. We have developed a two-step methodology: we first describe the problem algebraically and then convert it to a SAT instance in CNF, which we solve using SAT solvers. By running the same procedure for different values of k...
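The "convert to CNF, then hand it to a solver" step can be illustrated on a single gate; the following is a minimal sketch using the textbook Tseitin clauses for z = x AND y, with a brute-force satisfiability check standing in for a real SAT solver (this is not necessarily the paper's encoding).

```python
# Minimal sketch of the "encode, then solve" step (not the paper's encoding):
# Tseitin clauses for one AND gate z = x AND y, checked by brute force.
# Variables are numbered 1..n; a literal is +v or -v, a clause is a list of
# literals, and a CNF is a list of clauses -- the DIMACS convention consumed
# by off-the-shelf SAT solvers.
from itertools import product

X, Y, Z = 1, 2, 3
cnf = [[-Z, X], [-Z, Y], [Z, -X, -Y]]   # z <-> (x AND y)

def satisfiable(cnf, n):
    """Brute-force SAT check; a real pipeline would call a SAT solver here."""
    for bits in product([False, True], repeat=n):
        assign = {v: bits[v - 1] for v in range(1, n + 1)}
        if all(any(assign[abs(l)] == (l > 0) for l in clause) for clause in cnf):
            return True
    return False

# Adding the unit clauses x=1, y=1, z=0 contradicts the gate definition:
print(satisfiable(cnf + [[X], [Y], [-Z]], 3))   # False
print(satisfiable(cnf + [[X], [Y], [Z]], 3))    # True
```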

Block ciphers, pseudorandom functions, and Natural Proofs

This paper takes a new step towards closing the troubling gap between pseudorandom functions (PRF) and their popular, bounded-input-length counterpart: block ciphers. This gap is both quantitative, because block ciphers are more efficient than PRF in various ways, and methodological, because block ciphers usually fit in the substitution-permutation network paradigm (SPN), which has no counterpart in PRF. We give several candidate PRF F_i that are inspired by the SPN paradigm. This paradigm involves a "substitution function" (S-box). Our main candidates are: F_1 : {0,1}^n → {0,1}^n is an SPN whose S-box is a random function on b = O(lg n) bits, given as part of the seed. We prove unconditionally that F_1 resists attacks that run in time ≤ 2^{εb}. Setting b = ω(lg n) we obtain an inefficient PRF, which however seems to be the first such construction using the SPN paradigm. F_2 : {0,1}^n → {0,1}^n is an SPN where the S-box is (patched) field inversion, a common choice in block ciphers. F_2 is computable with Boolean circuits of size n·log^{O(1)} n, and in particular with seed length n·log^{O(1)} n. We prove that this candidate has exponential security 2^{Ω(n)} against linear and differential cryptanalysis. F_3 : {0,1}^n → {0,1} is a non-standard variant on the SPN paradigm, where "states" grow in length. F_3 is computable with size n^{1+ε}, for any ε > 0, in the restricted circuit class TC^0 of unbounded fan-in majority circuits of constant depth. We prove that F_3 is almost 3-wise independent. F_4 : {0,1}^n → {0,1} uses an extreme setting of the SPN parameters (one round, one S-box, no diffusion matrix). The S-box is again (patched) field inversion. We prove that this candidate is a small-bias generator (for tests of weight up to 2^{0.9n}). Assuming the security of our candidates, our work also narrows the gap between the "Natural Proofs barrier" [Razborov & Rudich; JCSS '97] and existing lower bounds, in three models: unbounded-depth circuits, TC^0 circuits, and Turing machines. In particular, the efficiency of the circuits computing F_3 is related to a result by Allender and Koucky [JACM '10], who show that a lower bound for such circuits would imply a lower bound for TC^0.
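To make the SPN paradigm concrete, here is a toy round function in the spirit described above: the state is split into b-bit blocks, each block passes through an S-box, and a fixed bit permutation plays the role of the diffusion layer. The parameters and the random S-box are illustrative only and do not correspond to any of the candidates F_1-F_4.

```python
# Toy substitution-permutation network round (illustrative only; not any of
# the candidates F_1..F_4 from the abstract).  State: n bits split into
# b-bit blocks; each round applies an S-box to every block and then a fixed
# bit permutation as the diffusion layer.
import random

b = 4                                   # S-box width (the paper uses b = O(lg n))
n = 16                                  # state width, a multiple of b

random.seed(0)
SBOX = list(range(1 << b)); random.shuffle(SBOX)   # random 4-bit S-box
PERM = list(range(n)); random.shuffle(PERM)        # fixed bit permutation

def spn_round(state_bits):
    """One round: blockwise S-box, then wire permutation."""
    out = []
    for i in range(0, n, b):
        block = int("".join(map(str, state_bits[i:i + b])), 2)
        s = SBOX[block]
        out += [(s >> (b - 1 - j)) & 1 for j in range(b)]
    return [out[PERM[j]] for j in range(n)]

x = [random.randint(0, 1) for _ in range(n)]
y = x
for _ in range(4):                      # a few rounds
    y = spn_round(y)
print(x, "->", y)
```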

Algebraic Immunity for Cryptographically Significant Boolean Functions: Analysis and Construction

IEEE Transactions on Information Theory, 2006

Recently, algebraic attacks have received a lot of attention in the cryptographic literature. It has been observed that a Boolean function used as a cryptographic primitive, and interpreted as a multivariate polynomial over GF(2), should not have low-degree multiples obtained by multiplication with low-degree nonzero functions. In this paper, we show that a Boolean function having low nonlinearity is (also) weak against algebraic attacks, and we extend this result to higher-order nonlinearities. Next, we present enumeration results on linearly independent annihilators. We also study certain classes of highly nonlinear resilient Boolean functions for their algebraic immunity. We identify that functions having low-degree subfunctions are weak in terms of algebraic immunity, and we analyze some existing constructions from this viewpoint. Further, we present a construction method to generate Boolean functions on n variables with the highest possible algebraic immunity ⌈n/2⌉ (this construction, first presented at the 2005 Workshop on Fast Software Encryption (FSE 2005), was the first one producing such functions). These functions are obtained through a doubly indexed recursive relation. We calculate their Hamming weights and deduce their nonlinearities; we show that they have very high algebraic degrees. We express them as the sums of two functions which can be obtained from simple symmetric functions by a transformation which can be implemented with an algorithm whose complexity is linear in the number of variables. We deduce a very fast way of computing the output of these functions, given their input.
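Algebraic immunity can be computed by brute force for small functions, which makes the definition concrete: AI(f) is the least degree d such that f or f+1 admits a nonzero annihilator of degree at most d. The sketch below (illustrative code, not the paper's construction) finds it by linear algebra over GF(2); the 5-variable majority function attains the optimal value ⌈5/2⌉ = 3.

```python
# Hedged sketch (not the paper's construction): compute the algebraic
# immunity of a small Boolean function by brute force.  AI(f) is the least
# degree d such that f or f+1 has a nonzero annihilator g of degree <= d,
# i.e., g(x) = 0 on every x in the support of f (resp. of f+1).
from itertools import combinations, product

def monomials(n, d):
    """All monomials of degree <= d, as tuples of variable indices."""
    return [c for k in range(d + 1) for c in combinations(range(n), k)]

def has_annihilator(support, n, d):
    """Is there a nonzero g of degree <= d vanishing on all of `support`?"""
    mons = monomials(n, d)
    # One GF(2) linear constraint per support point: the sum of monomial
    # values times the unknown coefficients of g must be 0.
    rows = [[all(x[i] for i in m) & 1 for m in mons] for x in support]
    # A nonzero solution exists iff rank < number of unknowns.
    rank, cols = 0, len(mons)
    for c in range(cols):
        piv = next((r for r in range(rank, len(rows)) if rows[r][c]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for r in range(len(rows)):
            if r != rank and rows[r][c]:
                rows[r] = [a ^ b for a, b in zip(rows[r], rows[rank])]
        rank += 1
    return rank < cols

def algebraic_immunity(f, n):
    pts = list(product([0, 1], repeat=n))
    supp1 = [x for x in pts if f(x)]          # constraints for annihilators of f
    supp0 = [x for x in pts if not f(x)]      # constraints for annihilators of f+1
    for d in range(n + 1):
        if has_annihilator(supp1, n, d) or has_annihilator(supp0, n, d):
            return d
    return n

maj5 = lambda x: int(sum(x) >= 3)             # 5-variable majority
print(algebraic_immunity(maj5, 5))            # optimal AI = ceil(5/2) = 3
```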

New Techniques for Efficient Trapdoor Functions and Applications

Advances in Cryptology – EUROCRYPT 2019, 2019

We develop techniques for constructing trapdoor functions (TDFs) with short image size and advanced security properties. Our approach builds on the recent framework of Garg and Hajiabadi [CRYPTO 2018]. As applications of our techniques, we obtain:
• The first construction of deterministic-encryption schemes for block-source inputs (both for the CPA and CCA cases) based on the Computational Diffie-Hellman (CDH) assumption. Moreover, by applying our efficiency-enhancing techniques, we obtain CDH-based schemes with ciphertext size linear in plaintext size.
• The first construction of lossy TDFs based on the Decisional Diffie-Hellman (DDH) assumption with image size linear in input size, while retaining the lossiness rate of [Peikert-Waters STOC 2008].
Prior to our work, all constructions of deterministic encryption, even those based on the stronger DDH assumption, incurred a quadratic gap between the ciphertext and plaintext sizes. Moreover, all DDH-based constructions of lossy TDFs had image size quadratic in the input size. At a high level, we break the previous quadratic barriers by introducing a novel technique for encoding input bits via hardcore output bits with the use of erasure-resilient codes. All previous schemes used group elements for encoding input bits, resulting in quadratic expansion.