An LLL-Reduction Algorithm with Quasi-Linear Time Complexity
Related papers
Towards an efficient lattice basis reduction implementation
The security of most digital systems is under serious threat due to the major technological breakthroughs we are experiencing nowadays. Lattice-based cryptosystems are among the most promising types of post-quantum cryptography, since they are believed to be secure against quantum computer attacks. Their security is based on the hardness of the Shortest Vector Problem and the Closest Vector Problem. Lattice basis reduction algorithms are used in several fields, such as lattice-based cryptography and signal processing. They aim to make the problem easier to solve by obtaining shorter and more orthogonal bases. Some case studies work with numbers with hundreds of digits to ensure harder problems, which requires Multiple Precision (MP) arithmetic. This dissertation presents a novel integer representation for MP arithmetic and the algorithms for the associated operations, MpIM. It also compares these implementations with other libraries, such as the GNU Multiple Precision Arithmetic Library, where our experimental results display similar performance, and for some operations better performance. This dissertation also describes a novel lattice basis reduction module, LattBRed, which includes an efficient implementation of Qiao's Jacobi method, a Lenstra-Lenstra-Lovász (LLL) algorithm and associated parallel implementations, a parallel variant of the Block Korkine-Zolotarev (BKZ) algorithm and its implementation, and MP versions of Qiao's Jacobi method and the LLL and BKZ algorithms. Experimental performance measurements with the set of implemented modifications of Qiao's Jacobi method show some performance improvements and some degradations, but speedups greater than 100 on Ajtai-type bases.
Low-dimensional lattice basis reduction revisited
ACM Transactions on Algorithms, 2009
Lattice reduction is a geometric generalization of the problem of computing greatest common divisors. Most of the interesting algorithmic problems related to lattice reduction are NP-hard as the lattice dimension increases. This article deals with the low-dimensional case. We study a greedy lattice basis reduction algorithm for the Euclidean norm, which is arguably the most natural lattice basis reduction algorithm: it is a straightforward generalization of an old two-dimensional algorithm of Lagrange, usually known as Gauss' algorithm, which is very similar to Euclid's gcd algorithm. Our results are two-fold. From a mathematical point of view, we show that up to dimension four, the output of the greedy algorithm is optimal: the output basis reaches all the successive minima of the lattice. However, as soon as the lattice dimension is strictly higher than four, the output basis may be arbitrarily bad, as it may not even reach the first minimum. More importantly, from a computational point of view, we show that up to dimension four, the bit-complexity of the greedy algorithm is quadratic without fast integer arithmetic, just like Euclid's gcd algorithm. This was already proved by Semaev up to dimension three using rather technical means, but it was previously unknown whether or not the algorithm was still polynomial in dimension four. We propose two different analyses: a global approach based on the geometry of the current basis when the length decrease stalls, and a local approach showing directly that a significant length decrease must occur every O(1) consecutive steps. Our analyses simplify Semaev's analysis in dimensions two and three, and unify the cases of dimensions two to four. Although the global approach is much simpler, we also present the local approach because it gives further information on the behavior of the algorithm.
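The two-dimensional algorithm of Lagrange mentioned in the abstract fits in a few lines. Below is a minimal sketch in plain Python with exact integer arithmetic; it illustrates the two-dimensional case only, not the paper's greedy generalization to higher dimensions:

```python
def lagrange_gauss(u, v):
    """Lagrange's two-dimensional reduction (often called Gauss' algorithm).

    Like Euclid's gcd algorithm, it repeatedly subtracts from the longer
    vector the integer multiple of the shorter one that most shortens it.
    Vectors are tuples of ints; returns a reduced basis (u, v) with
    |u| <= |v|.
    """
    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1]

    if dot(u, u) > dot(v, v):
        u, v = v, u
    while True:
        s, n = dot(u, v), dot(u, u)
        # exact round-to-nearest of s/n (n > 0), avoiding floats
        q = (2 * s + n) // (2 * n)
        v = (v[0] - q * u[0], v[1] - q * u[1])
        if dot(u, u) <= dot(v, v):
            return u, v
        u, v = v, u
```

For example, `lagrange_gauss((5, 8), (3, 5))` returns two vectors of squared norm 1, since that input basis generates all of Z².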
An Experimental Comparison of Some LLL-Type Lattice Basis Reduction Algorithms
International Journal of Applied and Computational Mathematics, 2015
In this paper we experimentally compare the performance of the L² lattice basis reduction algorithm, whose importance has recently become evident, with our own Gram-based lattice basis reduction algorithm, a variant of the Schnorr-Euchner algorithm. We conclude with observations about the algorithms under investigation for lattice basis dimensions up to the theoretical limit. We also reexamine the notion of "buffered transformations" and its impact on the performance of lattice basis reduction algorithms. We experimentally compare four algorithms directly in the Sage Mathematics Software: our own algorithm, the L² algorithm, and "buffered" versions of both.
An LLL Algorithm with Quadratic Complexity
SIAM Journal on Computing, 2009
The Lenstra-Lenstra-Lovász lattice basis reduction algorithm (called LLL or L³) is a fundamental tool in computational number theory and theoretical computer science, which can be viewed as an efficient algorithmic version of Hermite's inequality on Hermite's constant. Given an integer d-dimensional lattice basis with vectors of Euclidean norm less than B in an n-dimensional space, the L³ algorithm outputs a reduced basis in O(d³n log B · M(d log B)) bit operations, where M(k) denotes the time required to multiply k-bit integers. This worst-case complexity is problematic for applications where d and/or log B are often large. As a result, the original L³ algorithm is almost never used in practice, except in tiny dimensions. Instead, one applies floating-point variants where the long-integer arithmetic required by Gram-Schmidt orthogonalization is replaced by floating-point arithmetic. Unfortunately, this is known to be unstable in the worst case: the usual floating-point L³ algorithm is not even guaranteed to terminate, and the output basis may not be L³-reduced at all. In this article, we introduce the L² algorithm, a new and natural floating-point variant of the L³ algorithm which provably outputs L³-reduced bases in polynomial time O(d²n(d + log B) log B · M(d)). This is the first L³ algorithm whose running time (without fast integer arithmetic) provably grows only quadratically with respect to log B, like Euclid's gcd algorithm and Lagrange's two-dimensional algorithm.
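For reference, the textbook L³ procedure that the abstract contrasts with can be sketched as follows. This is a naive floating-point illustration, recomputing Gram-Schmidt from scratch at every step; it is not the L² algorithm introduced in the paper, and plain floats stand in for L²'s careful precision management, so it inherits the stability caveats described above:

```python
def lll(basis, delta=0.75):
    """Textbook LLL reduction with naive floating-point Gram-Schmidt.

    `basis` is a list of integer row vectors; `delta` is the Lovasz
    parameter (1/4 < delta < 1). Returns a reduced basis of the same lattice.
    """
    b = [list(row) for row in basis]
    n = len(b)

    def dot(x, y):
        return sum(p * q for p, q in zip(x, y))

    def gso():
        # Gram-Schmidt orthogonalisation: vectors b*_i and coefficients mu[i][j]
        bstar, mu = [], [[0.0] * n for _ in range(n)]
        for i in range(n):
            v = [float(x) for x in b[i]]
            for j in range(i):
                mu[i][j] = dot(b[i], bstar[j]) / dot(bstar[j], bstar[j])
                v = [vk - mu[i][j] * wk for vk, wk in zip(v, bstar[j])]
            bstar.append(v)
        return bstar, mu

    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):      # size-reduce b_k against b_j
            _, mu = gso()
            q = round(mu[k][j])
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
        bstar, mu = gso()
        # Lovasz condition: either advance, or swap and step back
        if dot(bstar[k], bstar[k]) >= (delta - mu[k][k - 1] ** 2) * dot(bstar[k - 1], bstar[k - 1]):
            k += 1
        else:
            b[k - 1], b[k] = b[k], b[k - 1]
            k = max(k - 1, 1)
    return b
```

On the skewed basis `[[1, 0], [100, 1]]` this recovers the standard basis `[[1, 0], [0, 1]]` of Z².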
Towards Faster Polynomial-Time Lattice Reduction
Lecture Notes in Computer Science, 2021
The LLL algorithm is a polynomial-time algorithm for reducing a d-dimensional lattice with exponential approximation factor. Currently, the most efficient variant of LLL, by Neumaier and Stehlé, has a theoretical running time in d⁴ · B^(1+o(1)), where B is the bitlength of the entries, but has never been implemented. This work introduces new asymptotically fast, parallel, yet heuristic, reduction algorithms with their optimized implementations. Our algorithms are recursive and fully exploit fast matrix multiplication. We experimentally demonstrate that by carefully controlling the floating-point precision during the recursion steps, we can reduce Euclidean lattices of rank d in time Õ(d^ω · C), i.e., almost a constant number of matrix multiplications, where ω is the exponent of matrix multiplication and C is the log of the condition number of the matrix. For cryptographic applications, C is close to B, while it can be up to d times larger in the worst case. This improves the running time of the state-of-the-art implementation fplll by a multiplicative factor of order d² · B. Further, we show that we can reduce structured lattices, the so-called knapsack lattices, in time Õ(d^(ω−1) · C) with a progressive reduction strategy. Besides allowing the reduction of huge lattices, our implementation can break several instances of Fully Homomorphic Encryption schemes based on large integers in dimension 2,230 with 4 million bits.
Gradual sub-lattice reduction and a new complexity for factoring polynomials
We present a lattice algorithm specifically designed for some classical applications of lattice reduction. The applications are for lattice bases with a generalized knapsack-type structure, where the target vectors are boundably short. For such applications, the complexity of the algorithm improves traditional lattice reduction by replacing some dependence on the bit-length of the input vectors by some dependence on the bound for the output vectors. If the bit-length of the target vectors is unrelated to the bit-length of the input, then our algorithm is only linear in the bit-length of the input entries, which is an improvement over the quadratic complexity floating-point LLL algorithms. To illustrate the usefulness of this algorithm we show that a direct application to factoring univariate polynomials over the integers leads to the first complexity bound improvement since 1984. A second application is algebraic number reconstruction, where a new complexity bound is obtained as well.
A complexity analysis of a Jacobi method for lattice basis reduction
Proceedings of the Fifth International C* Conference on Computer Science and Software Engineering - C3S2E '12, 2012
The famous LLL algorithm is the first polynomial-time lattice reduction algorithm and is widely used in many applications. In this paper, we prove the convergence of a novel polynomial-time lattice reduction algorithm, the Jacobi method introduced by S. Qiao [23], and show that it has the same complexity as the LLL algorithm. Our experimental results show that the Jacobi method outperforms the LLL algorithm not only in efficiency, but also in the orthogonality defect of the bases it produces.
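The orthogonality defect used here as a quality measure is straightforward to compute: the product of the basis vectors' norms divided by the lattice volume, which equals 1 exactly for an orthogonal basis. A small floating-point sketch (illustrative only, not from the paper):

```python
from math import prod, sqrt

def orthogonality_defect(basis):
    """Orthogonality defect of an integer basis: (prod of row norms) / volume.

    The volume is sqrt(det G) where G is the Gram matrix. Since G is
    positive definite for a basis, Gaussian elimination needs no pivoting.
    """
    n = len(basis)
    # Gram matrix G[i][j] = <b_i, b_j>
    g = [[float(sum(x * y for x, y in zip(bi, bj))) for bj in basis]
         for bi in basis]
    # det(G) via elimination: product of the pivots
    det = 1.0
    for i in range(n):
        p = g[i][i]
        det *= p
        for r in range(i + 1, n):
            f = g[r][i] / p
            for c in range(i, n):
                g[r][c] -= f * g[i][c]
    vol = sqrt(det)
    norms = [sqrt(sum(x * x for x in b)) for b in basis]
    return prod(norms) / vol
```

For instance, the skewed basis `[[1, 0], [1, 1]]` of Z² has defect √2, while `[[1, 0], [0, 1]]` has defect 1.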
A Polynomial Time Jacobi Method for Lattice Basis Reduction
2012
Among all lattice reduction algorithms, the LLL algorithm is the first and perhaps the most famous polynomial-time algorithm, and it is widely used in many applications. In 2012, S. Qiao [24] introduced another algorithm, the Jacobi method, for lattice basis reduction. S. Qiao and Z. Tian [25] further improved the Jacobi method to run in polynomial time, but it only produces a quasi-reduced basis. In this paper, we present a polynomial-time Jacobi method for lattice basis reduction (the Poly-Jacobi method, for short) that can produce a reduced basis. Our experimental results indicate that the bases produced by the Poly-Jacobi method have almost as good an orthogonality defect as the bases produced by the Jacobi method.
Computing a Lattice Basis Revisited
Proceedings of the 2019 on International Symposium on Symbolic and Algebraic Computation, 2019
Given (a, b) ∈ Z², Euclid's algorithm outputs the generator gcd(a, b) of the ideal aZ + bZ. Computing a lattice basis is a high-dimensional generalization: given a₁, …, a_n ∈ Z^m, find a Z-basis of the lattice L = { ∑_{i=1}^n x_i a_i : x_i ∈ Z } generated by the a_i's. The fastest algorithms known are HNF algorithms, but they are not adapted to all applications, such as when the output should not be much longer than the input. We present an algorithm which extracts such a short basis within the same time as an HNF computation, by reduction to HNF. We also present an HNF-less algorithm, which reduces to Euclid's extended algorithm and can be generalized to quadratic forms. Both algorithms can extend primitive sets into bases.
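The one-dimensional case the abstract starts from is just Euclid's extended algorithm: it returns the generator g of aZ + bZ together with the Bézout coefficients expressing g in the input "basis". A minimal sketch (not the paper's higher-dimensional algorithm):

```python
def ext_gcd(a, b):
    """Extended Euclid: returns (g, x, y) with a*x + b*y == g == gcd(a, b),
    i.e. g generates the ideal aZ + bZ."""
    old_r, r = a, b
    old_x, x = 1, 0
    old_y, y = 0, 1
    while r:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_x, x = x, old_x - q * x
        old_y, y = y, old_y - q * y
    return old_r, old_x, old_y
```

For example, `ext_gcd(240, 46)` yields g = 2 with coefficients satisfying 240x + 46y = 2.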