A complexity analysis of a Jacobi method for lattice basis reduction

A Polynomial Time Jacobi Method for Lattice Basis Reduction

2012

Among all lattice reduction algorithms, the LLL algorithm is the first and perhaps the most famous polynomial time algorithm, and it is widely used in many applications. In 2012, S. Qiao [24] introduced another algorithm for lattice basis reduction, the Jacobi method. S. Qiao and Z. Tian [25] later improved the Jacobi method to run in polynomial time, but it only produces a quasi-reduced basis. In this paper, we present a polynomial time Jacobi method for lattice basis reduction (the Poly-Jacobi method, for short) that produces a reduced basis. Our experimental results indicate that the bases produced by the Poly-Jacobi method have orthogonality defects almost as good as those of the bases produced by the Jacobi method.
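The orthogonality defect used to compare the two methods measures how far a basis is from orthogonal: the product of the basis vector lengths divided by the lattice volume, which equals 1 exactly when the basis is orthogonal. A minimal floating-point sketch via Gram-Schmidt (function names are illustrative, not from the paper):

```python
import math

def gram_schmidt_norms(basis):
    """Return the norms of the Gram-Schmidt orthogonalization of `basis`."""
    ortho, norms = [], []
    for b in basis:
        v = list(b)
        for u in ortho:
            # Subtract the projection of b onto each earlier orthogonal vector.
            coeff = sum(x * y for x, y in zip(b, u)) / sum(x * x for x in u)
            v = [x - coeff * y for x, y in zip(v, u)]
        ortho.append(v)
        norms.append(math.sqrt(sum(x * x for x in v)))
    return norms

def orthogonality_defect(basis):
    """Product of the basis vector lengths divided by the lattice volume.

    The volume is the product of the Gram-Schmidt norms; the defect is 1
    iff the basis is orthogonal, and grows as the basis gets more skewed.
    """
    vol = math.prod(gram_schmidt_norms(basis))
    prod = math.prod(math.sqrt(sum(x * x for x in b)) for b in basis)
    return prod / vol
```

For example, the identity basis has defect 1, while the skewed basis {(1,0), (1,1)} has defect sqrt(2).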

An Experimental Comparison of Some LLL-Type Lattice Basis Reduction Algorithms

International Journal of Applied and Computational Mathematics, 2015

In this paper we experimentally compare the performance of the L² lattice basis reduction algorithm, whose importance has recently become evident, with our own Gram-based lattice basis reduction algorithm, a variant of the Schnorr-Euchner algorithm. We also reexamine the notion of "buffered transformations" and its impact on the performance of lattice basis reduction algorithms. We compare four algorithms directly in the Sage Mathematics Software: our own algorithm, the L² algorithm, and "buffered" versions of each. We conclude with observations about the algorithms under investigation for lattice basis dimensions up to the theoretical limit.

Towards an efficient lattice basis reduction implementation

The security of most digital systems is under serious threat due to the major technological breakthroughs we are experiencing nowadays. Lattice-based cryptosystems are one of the most promising types of post-quantum cryptography, since they are believed to be secure against quantum computer attacks. Their security is based on the hardness of the Shortest Vector Problem and the Closest Vector Problem. Lattice basis reduction algorithms are used in several fields, such as lattice-based cryptography and signal processing. They aim to make the problem easier to solve by obtaining shorter and more orthogonal bases. Some case studies work with numbers with hundreds of digits to ensure harder problems, which requires Multiple Precision (MP) arithmetic. This dissertation presents a novel integer representation for MP arithmetic and the algorithms for the associated operations, MpIM. It also compares these implementations with other libraries, such as the GNU Multiple Precision Arithmetic Library; our experimental results display similar performance, and for some operations better performance. This dissertation also describes a novel lattice basis reduction module, LattBRed, which includes a novel efficient implementation of Qiao's Jacobi method, a Lenstra-Lenstra-Lovász (LLL) algorithm and associated parallel implementations, a parallel variant of the Block Korkine-Zolotarev (BKZ) algorithm and its implementation, and MP versions of Qiao's Jacobi method and the LLL and BKZ algorithms. Experimental performance measurements with the set of implemented modifications of Qiao's Jacobi method show some performance improvements and some degradations, but speedups greater than 100 on Ajtai-type bases.

Low-dimensional lattice basis reduction revisited

ACM Transactions on Algorithms, 2009

Lattice reduction is a geometric generalization of the problem of computing greatest common divisors. Most of the interesting algorithmic problems related to lattice reduction are NP-hard as the lattice dimension increases. This article deals with the low-dimensional case. We study a greedy lattice basis reduction algorithm for the Euclidean norm, which is arguably the most natural lattice basis reduction algorithm, because it is a straightforward generalization of an old two-dimensional algorithm of Lagrange, usually known as Gauss' algorithm, and which is very similar to Euclid's gcd algorithm. Our results are two-fold. From a mathematical point of view, we show that up to dimension four, the output of the greedy algorithm is optimal: the output basis reaches all the successive minima of the lattice. However, as soon as the lattice dimension is strictly higher than four, the output basis may be arbitrarily bad as it may not even reach the first minimum. More importantly, from a computational point of view, we show that up to dimension four, the bit-complexity of the greedy algorithm is quadratic without fast integer arithmetic, just like Euclid's gcd algorithm. This was already proved by Semaev up to dimension three using rather technical means, but it was previously unknown whether or not the algorithm was still polynomial in dimension four. We propose two different analyses: a global approach based on the geometry of the current basis when the length decrease stalls, and a local approach showing directly that a significant length decrease must occur every O(1) consecutive steps. Our analyses simplify Semaev's analysis in dimensions two and three, and unify the cases of dimensions two to four. Although the global approach is much simpler, we also present the local approach because it gives further information on the behavior of the algorithm.
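The two-dimensional algorithm of Lagrange (Gauss' algorithm) that the greedy algorithm generalizes can be stated in a few lines: repeatedly subtract the nearest-integer multiple of the shorter vector from the longer one, just as Euclid's gcd algorithm subtracts quotient multiples. A sketch of the classical 2D base case (not the paper's d-dimensional greedy algorithm):

```python
def lagrange_gauss(u, v):
    """Reduce a 2D lattice basis (u, v) so that u is a shortest vector.

    Each pass size-reduces the longer vector against the shorter one,
    the lattice analogue of a Euclidean division step in gcd.
    """
    def norm2(w):
        return w[0] * w[0] + w[1] * w[1]

    if norm2(u) > norm2(v):
        u, v = v, u
    while True:
        # Nearest-integer multiple of u to subtract from v.
        m = round((u[0] * v[0] + u[1] * v[1]) / norm2(u))
        v = (v[0] - m * u[0], v[1] - m * u[1])
        if norm2(v) >= norm2(u):
            return u, v
        u, v = v, u
```

For instance, the skewed basis {(1, 0), (3, 1)} reduces to the orthogonal basis {(1, 0), (0, 1)} in one step.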

Techniques in Lattice Basis Reduction

2017

Credit for {\it reduction theory} goes back to the work of Lagrange, Gauss, Hermite, Korkin, Zolotarev, and Minkowski. Modern reduction theory is voluminous and includes the work of A. Lenstra, H. Lenstra and L. Lovasz who created the well known LLL algorithm, and many other researchers such as L. Babai and C. P. Schnorr who created significant new variants of basis reduction algorithms. In this paper, we propose and investigate the efficacy of new optimization techniques to be used along with the LLL algorithm. The techniques we have proposed are: i) {\it hill climbing (HC)}, ii) {\it lattice diffusion-sub lattice fusion (LDSF)}, and iii) {\it multistage hybrid LDSF-HC}. The first technique relies on the sensitivity of LLL to permutations of the input basis $B$, and optimization ideas over the symmetric group $S_m$ viewed as a metric space. The second technique relies on partitioning the lattice into sublattices, performing basis reduction in the partition sublattice blocks, fusing ...
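The hill-climbing idea over the symmetric group can be sketched abstractly: walk between permutations of the basis rows (neighbors differ by a transposition, the metric on $S_m$) and keep the permutation that a caller-supplied quality score rates best. The sketch below is illustrative only; `quality` stands in for the full pipeline of the paper (run LLL on the permuted basis, then score the result, e.g. by orthogonality defect), which is not shown:

```python
import random

def hill_climb_permutations(basis, quality, steps=100, seed=0):
    """Greedy hill climbing over row permutations of `basis`.

    `quality` maps a basis (list of rows) to a score; lower is better.
    In the paper's setting it would be "reduce with LLL, then measure",
    exploiting LLL's sensitivity to the input row order.
    """
    rng = random.Random(seed)
    best = list(basis)
    best_score = quality(best)
    for _ in range(steps):
        # Neighbor under the transposition metric: swap two rows.
        i, j = rng.sample(range(len(best)), 2)
        cand = list(best)
        cand[i], cand[j] = cand[j], cand[i]
        score = quality(cand)
        if score < best_score:
            best, best_score = cand, score
    return best, best_score
```

This is plain greedy ascent; the paper's multistage hybrid additionally interleaves it with sublattice partitioning (LDSF), which is beyond this sketch.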

A Complete Analysis of the BKZ Lattice Reduction Algorithm

IACR Cryptol. ePrint Arch., 2020

We present the first rigorous dynamic analysis of BKZ, the most widely used lattice reduction algorithm besides LLL: previous analyses were either heuristic or only applied to variants of BKZ. Namely, we provide guarantees on the quality of the current lattice basis during execution. Our analysis extends to a generic BKZ algorithm where the SVP-oracle is replaced by an approximate oracle and/or the basis update is not necessarily performed by LLL. Interestingly, it also provides quantitative improvements, such as better and simpler bounds for both the output quality and the running time. As an application, we observe that in certain approximation regimes, it is more efficient to use BKZ with an approximate rather than exact SVP-oracle.

Towards Faster Polynomial-Time Lattice Reduction

Lecture Notes in Computer Science, 2021

The LLL algorithm is a polynomial-time algorithm for reducing a d-dimensional lattice with exponential approximation factor. Currently, the most efficient variant of LLL, by Neumaier and Stehlé, has a theoretical running time in d^4 · B^{1+o(1)} where B is the bitlength of the entries, but has never been implemented. This work introduces new asymptotically fast, parallel, yet heuristic, reduction algorithms with their optimized implementations. Our algorithms are recursive and fully exploit fast matrix multiplication. We experimentally demonstrate that by carefully controlling the floating-point precision during the recursion steps, we can reduce Euclidean lattices of rank d in time Õ(d^ω · C), i.e., almost a constant number of matrix multiplications, where ω is the exponent of matrix multiplication and C is the log of the condition number of the matrix. For cryptographic applications, C is close to B, while it can be up to d times larger in the worst case. This improves the running time of the state-of-the-art implementation fplll by a multiplicative factor of order d^2 · B. Further, we show that we can reduce structured lattices, the so-called knapsack lattices, in time Õ(d^{ω−1} · C) with a progressive reduction strategy. Besides allowing us to reduce huge lattices, our implementation can break several instances of Fully Homomorphic Encryption schemes based on large integers in dimension 2,230 with 4 million bits.

Gradual sub-lattice reduction and a new complexity for factoring polynomials

We present a lattice algorithm specifically designed for some classical applications of lattice reduction. The applications are for lattice bases with a generalized knapsack-type structure, where the target vectors are boundably short. For such applications, the complexity of the algorithm improves traditional lattice reduction by replacing some dependence on the bit-length of the input vectors by some dependence on the bound for the output vectors. If the bit-length of the target vectors is unrelated to the bit-length of the input, then our algorithm is only linear in the bit-length of the input entries, which is an improvement over the quadratic complexity floating-point LLL algorithms. To illustrate the usefulness of this algorithm we show that a direct application to factoring univariate polynomials over the integers leads to the first complexity bound improvement since 1984. A second application is algebraic number reconstruction, where a new complexity bound is obtained as well.

Computing a Lattice Basis Revisited

Proceedings of the 2019 on International Symposium on Symbolic and Algebraic Computation, 2019

Given (a, b) ∈ Z^2, Euclid's algorithm outputs the generator gcd(a, b) of the ideal aZ + bZ. Computing a lattice basis is a high-dimensional generalization: given a_1, …, a_n ∈ Z^m, find a Z-basis of the lattice L = { Σ_{i=1}^{n} x_i a_i : x_i ∈ Z } generated by the a_i's. The fastest algorithms known are HNF algorithms, but are not adapted to all applications, such as when the output should not be much longer than the input. We present an algorithm which extracts such a short basis within the same time as an HNF, by reduction to HNF. We also present an HNF-less algorithm, which reduces to Euclid's extended algorithm and can be generalized to quadratic forms. Both algorithms can extend primitive sets into bases.
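The one-dimensional case the abstract starts from is Euclid's extended algorithm, which returns not only the generator gcd(a, b) of aZ + bZ but also the Bézout coefficients expressing it in terms of the input. A minimal sketch:

```python
def extended_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) and g = a*x + b*y.

    g generates the ideal aZ + bZ, i.e. it is the one-dimensional
    "lattice basis" of the set of integer combinations of a and b.
    """
    old_r, r = a, b
    old_x, x = 1, 0
    old_y, y = 0, 1
    while r != 0:
        q = old_r // r
        # One Euclidean division step, carried through the coefficients.
        old_r, r = r, old_r - q * r
        old_x, x = x, old_x - q * x
        old_y, y = y, old_y - q * y
    return old_r, old_x, old_y
```

For example, extended_gcd(12, 18) yields g = 6 together with x, y such that 12x + 18y = 6; the HNF algorithms in the paper generalize this coefficient bookkeeping to n vectors in Z^m.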