Alexander Vardy - Academia.edu
Papers by Alexander Vardy
We consider hard-decision decoders for product codes over the erasure channel, which iteratively employ rounds of decoding rows and columns alternately. We derive the exact asymptotic probability of decoding failure as a function of the error-correction capabilities of the row and column codes, the number of decoding rounds, and the channel erasure probability. We examine both the case of codes capable of correcting a constant number of errors and the case of codes capable of correcting a constant fraction of their length.
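The row/column decoding schedule described above is easy to make concrete. Below is a minimal sketch (not taken from the paper) that simulates the iterative schedule, assuming the row and column codes can recover any pattern of at most t_row and t_col erasures in a single row or column, respectively; erased positions are marked with None, and the parameter names are introduced for the example.

```python
# Minimal sketch of iterative row/column erasure decoding for a product code.
# Assumption: the row code corrects up to t_row erasures per row and the column
# code corrects up to t_col erasures per column. This illustrates the decoding
# schedule only, not the paper's asymptotic analysis.

def iterative_erasure_decode(grid, t_row, t_col, rounds):
    """grid: list of lists; None marks an erased symbol. Returns the set of
    positions still erased after the given number of decoding rounds."""
    n_rows, n_cols = len(grid), len(grid[0])
    erased = {(i, j) for i in range(n_rows) for j in range(n_cols) if grid[i][j] is None}
    for _ in range(rounds):
        # Row pass: a row decoder recovers all erasures in a row holding at most t_row of them.
        for i in range(n_rows):
            row_erasures = [(i, j) for j in range(n_cols) if (i, j) in erased]
            if 0 < len(row_erasures) <= t_row:
                erased -= set(row_erasures)
        # Column pass: same rule per column, with threshold t_col.
        for j in range(n_cols):
            col_erasures = [(i, j) for i in range(n_rows) if (i, j) in erased]
            if 0 < len(col_erasures) <= t_col:
                erased -= set(col_erasures)
        if not erased:
            break
    return erased  # an empty set means decoding succeeded
```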
1. Introduction. Lexicographic codes, or lexicodes for short, were introduced by Conway and Sloane in [3, 4] as algebraic codes with surprisingly good parameters. Binary lexicodes include, among other famous optimal codes, the Hamming codes, the Golay code, and certain quadratic residue codes [4, 8]. Several authors [2, 4] have proved that lexicodes are always linear. Comparison with optimal linear codes of the same length and dimension [4] shows that lexicodes are usually within one of the optimal minimum distance. Hence, ...
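For reference, the greedy definition of a binary lexicode is short enough to state as code. The sketch below, with illustrative parameters n and d, scans length-n binary vectors in lexicographic order and keeps a vector whenever its Hamming distance to every codeword kept so far is at least d; it illustrates the definition only, not any construction from the paper.

```python
# Greedy construction of a binary lexicode of length n and minimum distance d.
# Integers 0 .. 2^n - 1 in increasing order correspond to binary vectors in
# lexicographic order (most significant bit first).

def lexicode(n, d):
    codewords = []
    for v in range(2 ** n):
        if all(bin(v ^ c).count("1") >= d for c in codewords):
            codewords.append(v)
    return codewords

# Example: lexicode(7, 3) yields 16 codewords -- the [7, 4, 3] Hamming code.
```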
We study bi-infinite sequences x = (x_k)_{k∈Z} over the alphabet {0, 1, . . . , q−1}, for an arbitrary q ≥ 2, that satisfy the following q-ary ghost pulse (qGP) constraint: for all k, l, m ∈ Z such that x_k, x_l, x_m are nonzero and equal, x_{k+l−m} is also nonzero. This constraint arises in the context of coding to combat the formation of spurious "ghost" pulses in high data-rate communication over an optical fiber. We show, using techniques from Ramsey theory, that if x satisfies the qGP constraint, then the support of x is a disjoint union of cosets of a subgroup kZ of Z and a set of zero density.
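The qGP constraint itself can be checked mechanically on a finitely supported sequence. The sketch below is an illustration of the constraint only (a bi-infinite sequence cannot be checked exhaustively); the dictionary representation and the example supports are assumptions made for the example.

```python
# Check the q-ary ghost pulse (qGP) constraint on a finitely supported sequence,
# represented as a dict mapping index k -> nonzero symbol x_k (indices not in
# the dict are zero).

from itertools import product

def satisfies_qgp(support):
    """support: dict {k: x_k} with x_k != 0. Returns True if the qGP constraint holds."""
    for k, l, m in product(support, repeat=3):
        if support[k] == support[l] == support[m]:
            # x_k, x_l, x_m are nonzero and equal, so x_{k+l-m} must also be nonzero.
            if (k + l - m) not in support:
                return False
    return True

# Example: {0: 1, 3: 2, 7: 3} (pairwise distinct symbols) satisfies the constraint
# vacuously, whereas {0: 1, 3: 1} does not: x_0, x_0, x_3 are equal and nonzero,
# but x_{0+0-3} = x_{-3} is zero.
```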
Hirakendu Das and Alexander Vardy, Department of Electrical Engineering, University of California San Diego, La Jolla, CA 92093, USA {hdas@ucsd ... Koetter and Vardy [6] later extended this work to an algebraic soft-decision decoding (ASD) algorithm for Reed-Solomon codes. ...
Flash memory is a non-volatile computer memory composed of blocks of cells, wherein each cell can take on q different levels corresponding to the number of electrons it contains. Increasing the cell level is easy; however, reducing a cell level forces all the other cells in the same block to be erased. This erase operation is undesirable and therefore has to be used as infrequently as possible. We consider the problem of designing codes for this purpose, where k bits are stored using a block of n cells with q levels each. The goal is to maximize the number of bit writes before an erase operation is required. We present an efficient construction of codes that can store an arbitrary number of bits. Our construction can be viewed as an extension to multiple dimensions of the earlier work of Jiang and Bruck, where single-dimensional codes that can store only 2 bits were proposed.
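As a toy illustration of the storage model (not the paper's multi-dimensional construction), a single bit can be stored in a block of n cells with q levels as the parity of the sum of the cell levels; each toggle of the bit costs one level increment, so the block supports n(q−1) toggles before an erase is needed. The class below sketches this scheme; the class name and interface are invented for the example.

```python
# Toy single-bit scheme for the flash model: the stored bit is the parity of the
# sum of the cell levels. Toggling the bit increments one non-full cell, so the
# block guarantees n*(q-1) toggles before an erase is required.

class SingleBitBlock:
    def __init__(self, n, q):
        self.levels = [0] * n
        self.q = q

    def read(self):
        return sum(self.levels) % 2

    def write(self, bit):
        if bit == self.read():
            return True  # nothing to change
        for i, level in enumerate(self.levels):
            if level < self.q - 1:
                self.levels[i] += 1  # one increment flips the parity
                return True
        return False  # every cell is full: an erase is required
```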
Several new applications and a number of new mathematical techniques have increased the research on error-correcting codes in the Lee metric in the last decade. In this work we consider several coding problems and constructions of error-correcting codes in the Lee metric. First, we consider constructions of dense error-correcting codes in relatively small dimensions over small alphabets. The second problem we solve is the construction of diametric perfect codes with minimum distance four; we construct such codes for various lengths and alphabet sizes. The third problem is to transform an n-dimensional Lee sphere with large radius into a shape of the same volume located in a relatively small box. Hadamard matrices play an essential role in the solutions to all three problems. A construction of codes based on Hadamard matrices starts our discussion. These codes approach the sphere-packing bound in the very high-rate range and appear to be the best known codes over some sets of parameters.
A new construction for constant weight codes is presented. The codes are constructed from k-dimensional subspaces of the vector space F_q^n. These subspaces form a constant dimension code in a Grassmannian. Some of the constructed codes are optimal constant weight codes with parameters not known before. An efficient error-correction algorithm is given for these codes. If the constant dimension code has efficient encoding and decoding algorithms, then so does the constructed constant weight code.
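One natural way to turn a k-dimensional subspace into a constant-weight binary word is to take the characteristic vector of its nonzero vectors; for q = 2 this gives a word of length 2^n − 1 and weight 2^k − 1. The sketch below illustrates this idea only and is not claimed to be the paper's exact construction.

```python
# Characteristic-vector illustration: map a k-dimensional subspace of F_2^n
# (given by a basis of n-bit integers) to a 0/1 word of length 2^n - 1 whose
# support is the set of nonzero vectors of the subspace.

from itertools import product

def characteristic_word(basis, n):
    """basis: list of basis vectors as n-bit integers. Returns a 0/1 list of length 2^n - 1."""
    # Enumerate the subspace as all F_2 linear combinations of the basis.
    subspace = set()
    for coeffs in product([0, 1], repeat=len(basis)):
        v = 0
        for c, b in zip(coeffs, basis):
            if c:
                v ^= b
        subspace.add(v)
    # Position i (1 <= i <= 2^n - 1) corresponds to the nonzero vector i.
    return [1 if i in subspace else 0 for i in range(1, 2 ** n)]

# Example: the 2-dimensional subspace of F_2^3 spanned by 0b001 and 0b010
# maps to a word of length 7 and weight 3 = 2^2 - 1.
```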
The q-analogs of basic designs are discussed. It is proved that the existence of any unknown Steiner structures, the q-analogs of Steiner systems, implies the existence of unknown Steiner systems. Optimal q-analog covering designs are presented, and some lower and upper bounds on the sizes of q-analog covering designs are proved.
The product code of two Reed-Solomon codes can be regarded as an evaluation code of bivariate polynomials whose degrees in each variable are bounded. We propose to decode these codes with a generalization of the Guruswami-Sudan interpolation-based list decoding algorithm. A relative decoding radius of 1 − (4R)^{1/6} is found, where R is the rate of the product code.
The capacity-achieving property of polar codes has garnered much recent research attention, resulting in low-complexity and high-throughput hardware and software decoders. It would be desirable to implement flexible hardware for polar encoders and decoders that can handle polar codes of different lengths and rates; however, this topic has not yet been studied in depth. Flexibility is of significant importance, as it enables the communications system to adapt to varying channel conditions and is mandated in most communication standards. In this work, we describe a low-complexity and flexible systematic-encoding algorithm, prove its correctness, and use it as the basis for encoder implementations capable of encoding any polar code up to a maximum length. We also investigate hardware and software implementations of decoders, describing how to implement flexible decoders that can decode any polar code up to a given length with little overhead and minor impact on decoding latency compared to...
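One known low-complexity route to systematic polar encoding is to apply the non-systematic polar transform, zero the frozen positions, and apply the transform again; the information bits then appear directly at the information positions of the codeword. The sketch below illustrates this two-pass approach under the assumption that the information set has the structural (domination) property enjoyed by polar codes; it is an illustration, not necessarily the exact algorithm of the paper.

```python
# Two-pass systematic polar encoding sketch: transform, zero the frozen
# positions, transform again. The transform is an involution over GF(2).

def polar_transform(u):
    """Non-systematic polar transform x = u * F^{(x)n} over GF(2), len(u) a power of 2."""
    x = list(u)
    step = 1
    while step < len(x):
        for i in range(0, len(x), 2 * step):
            for j in range(i, i + step):
                x[j] ^= x[j + step]
        step *= 2
    return x

def systematic_encode(info_bits, info_set, block_length):
    """info_set: sorted list of information positions. Returns a codeword whose
    restriction to info_set equals info_bits."""
    v = [0] * block_length
    for pos, bit in zip(info_set, info_bits):
        v[pos] = bit
    y = polar_transform(v)
    for pos in range(block_length):
        if pos not in info_set:
            y[pos] = 0           # re-impose the frozen (zero) positions
    return polar_transform(y)    # second pass yields the systematic codeword

# Example with a (4, 2) code and information set {2, 3}:
# systematic_encode([1, 0], [2, 3], 4) returns a codeword x with x[2] = 1 and x[3] = 0.
```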
Polar codes asymptotically achieve the symmetric capacity of memoryless channels, yet their error-correcting performance under successive-cancellation (SC) decoding for short and moderate length codes is worse than that of other modern codes such as low-density parity-check (LDPC) codes. Of the many methods to improve the error-correction performance of polar codes, list decoding yields the best results, especially when the polar code is concatenated with a cyclic redundancy check (CRC). List decoding involves exploring several decoding paths with SC decoding, and therefore tends to be slower than SC decoding itself, by an order of magnitude in practical implementations. This is the second in a two-part series of papers on unrolled polar decoders. Part I focuses on hardware SC polar decoders. In this paper (Part II), we present a new algorithm based on unrolling the decoding tree of the code that improves the speed of list decoding by an order of magnitude when implemented in softwa...
Two mappings in a finite field, the Frobenius mapping and the cyclic shift mapping, are applied to lines in PG(n, p) and to codes in the Grassmannian, to form automorphism groups of the Grassmannian and of its codes. These automorphisms are examined with respect to two classical coding problems in the Grassmannian: the first is the existence of a parallelism of lines in the related projective geometry, and the second is the existence of a Steiner structure. A computer search was applied to find parallelisms and codes. A new parallelism of lines in PG(5, 3) was found; a parallelism with these parameters was not known before. A large code which is only slightly short of a Steiner structure was also found.
Journal of Signal Processing Systems
The recently-discovered polar codes are seen as a major breakthrough in coding theory; they provably achieve the theoretical capacity of discrete memoryless channels using the low-complexity successive cancellation (SC) decoding algorithm. Motivated by recent developments in polar coding theory, we propose a family of efficient hardware implementations for SC polar decoders. We show that such decoders can be implemented with O(n) processing elements and O(n) memory elements, and can provide a constant throughput for a given target clock frequency. Furthermore, we show that SC decoding can be implemented in the logarithmic domain, thereby eliminating costly multiplication and division operations and greatly reducing the complexity of each processing element. We also present a detailed architecture for an SC decoder and provide logic synthesis results confirming the linear growth in complexity of the decoder as the code length increases.
2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2011
The recently-discovered polar codes are widely seen as a major breakthrough in coding theory. These codes achieve the capacity of many important channels under successive cancellation decoding. Motivated by the rapid progress in the theory of polar codes, we propose a family of architectures for efficient hardware implementation of successive cancellation decoders. We show that such decoders can be implemented with O(n) processing elements and O(n) memory elements, while providing constant throughput. We also propose a technique for overlapping the decoding of several consecutive codewords, thereby achieving a significant speed-up factor. We furthermore show that successive cancellation decoding can be implemented in the logarithmic domain, thereby eliminating the multiplication and division operations and greatly reducing the complexity of each processing element.
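The log-domain formulation referred to in the abstracts above boils down to two log-likelihood-ratio (LLR) update rules applied recursively over the decoding tree. The sketch below shows the standard min-sum form of the f function and the g function; it is a textbook-style illustration, not code from the papers.

```python
# Log-domain update rules used in successive cancellation (SC) decoding; they
# replace multiplications and divisions with comparisons and additions on LLRs.

import math

def f_minsum(llr_a, llr_b):
    """Check-node-like update: sign(a) * sign(b) * min(|a|, |b|) (min-sum approximation)."""
    return math.copysign(1.0, llr_a) * math.copysign(1.0, llr_b) * min(abs(llr_a), abs(llr_b))

def g_update(llr_a, llr_b, u):
    """Variable-node-like update, conditioned on the already-decided bit u (0 or 1)."""
    return llr_b + (1 - 2 * u) * llr_a

# In an SC decoder, f_minsum and g_update are applied recursively over the
# decoding tree; a bit is decided as 0 when its final LLR is non-negative and
# as 1 otherwise (frozen bits are always decided as 0).
```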
Linear Algebra and its Applications, 2013
The projective space of order n over the finite field F_q, denoted here as P_q(n), is the set of all subspaces of the vector space F_q^n. The projective space can be endowed with the distance function d_S(X, Y) = dim(X) + dim(Y) − 2 dim(X ∩ Y), which turns P_q(n) into a metric space. With this, an (n, M, d) code C in projective space is a subset of P_q(n) of size M such that the distance between any two codewords (subspaces) is at least d. Koetter and Kschischang recently showed that codes in projective space are precisely what is needed for error-correction in networks: an (n, M, d) code can correct t packet errors and ρ packet erasures introduced (adversarially) anywhere in the network as long as 2t + 2ρ < d. This motivates new interest in such codes.
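The subspace distance defined above is straightforward to compute from ranks, since dim(X ∩ Y) = dim(X) + dim(Y) − dim(X + Y), giving d_S(X, Y) = 2 dim(X + Y) − dim(X) − dim(Y). The sketch below does this over F_2, representing basis vectors as bitmask integers; it illustrates the metric only, not any code construction.

```python
# Subspace distance d_S(X, Y) = dim(X) + dim(Y) - 2 dim(X ∩ Y) over F_2,
# computed as 2*rank([X; Y]) - rank(X) - rank(Y). Bases need not be reduced.

def gf2_rank(rows):
    """Rank over GF(2) of the span of the given rows (n-bit integers), via elimination."""
    pivots = {}  # highest set bit -> a row whose leading bit is that position
    for row in rows:
        while row:
            hb = row.bit_length() - 1
            if hb not in pivots:
                pivots[hb] = row
                break
            row ^= pivots[hb]
    return len(pivots)

def subspace_distance(basis_x, basis_y):
    """d_S(X, Y) = 2*dim(X + Y) - dim(X) - dim(Y) for subspaces of F_2^n."""
    return 2 * gf2_rank(basis_x + basis_y) - gf2_rank(basis_x) - gf2_rank(basis_y)

# Example: X = <100, 010> and Y = <100, 001> in F_2^3 intersect in a line,
# so subspace_distance([0b100, 0b010], [0b100, 0b001]) == 2.
```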
Lecture Notes in Computer Science, 1993
A construction of perfect binary codes is presented. It is shown that this construction gives rise to perfect codes that are nonequivalent to any of the previously known perfect codes. Furthermore, perfect codes C_1 and C_2 are constructed such that their intersection C_1 ∩ C_2 has the maximum possible cardinality. The latter result is then employed to explicitly construct 2^{2^{cn}} nonequivalent perfect codes of length n, for sufficiently large n and some constant c slightly less than 0.5.