Efficient use of a hybrid decoding technique for LDPC codes
Related papers
Improving BER Performance of LDPC Codes based on Intermediate Decoding Results
2007 IEEE International Conference on Signal Processing and Communications, 2007
The paper presents a novel approach to reducing the bit error rate (BER) in iterative belief propagation (BP) decoding of low-density parity-check (LDPC) codes. The behavior of the BP algorithm is first investigated as a function of the number of decoder iterations, and it is shown that typical uncorrected error patterns fall into three categories: oscillating, nearly constant, or random-like, with a predominance of oscillating patterns at high signal-to-noise ratio (SNR) values. A decoder modification is then introduced that tracks the number of failed parity-check equations across the intermediate decoding iterations, rather than relying on the final decoder output (after reaching the maximum number of iterations). Simulation results with a rate-1/2 (1024,512) progressive edge-growth (PEG) LDPC code show that the proposed modification can decrease the BER by as much as 10-40%, particularly at high SNR values.
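A minimal sketch of the idea of exploiting intermediate decoding results, assuming a generic BP decoder: after each iteration the hard decision with the fewest unsatisfied parity checks is remembered and returned instead of the final-iteration output. The `bp_iteration` callable is a hypothetical stand-in for one message-passing sweep (a real decoder would also carry per-edge message state), not the paper's exact procedure.

```python
import numpy as np

def bp_decode_best_iteration(H, llr_channel, bp_iteration, max_iters=50):
    """Run iterative BP, but return the hard decision from the iteration
    with the fewest failed parity checks, not necessarily the last one.

    H            : (m, n) binary parity-check matrix (0/1 integer array)
    llr_channel  : (n,) channel LLRs
    bp_iteration : hypothetical callable performing one BP sweep and
                   returning the current a-posteriori LLRs
    """
    best_bits, best_failed = None, H.shape[0] + 1
    llr_post = llr_channel.copy()
    for _ in range(max_iters):
        llr_post = bp_iteration(llr_post)             # one message-passing sweep
        bits = (llr_post < 0).astype(int)             # hard decision
        failed = int(np.count_nonzero(H @ bits % 2))  # unsatisfied checks
        if failed < best_failed:                      # remember best snapshot
            best_bits, best_failed = bits.copy(), failed
        if failed == 0:                               # valid codeword: stop early
            break
    return best_bits, best_failed
```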
Augmented Belief-Propagation Decoding of Low-Density Parity-Check Codes
IEEE Transactions on Communications, 2000
We propose an augmented belief propagation (BP) decoder for low-density parity check (LDPC) codes which can be utilized on memoryless or intersymbol interference channels. The proposed method is a heuristic algorithm that eliminates a large number of pseudocodewords that can cause nonconvergence in the BP decoder. The augmented decoder is a multistage iterative decoder, where, at each stage, the original channel messages on select symbol nodes are replaced by saturated messages. The key element of the proposed method is the symbol selection process, which is based on the appropriately defined subgraphs of the code graph and/or the reliability of the information received from the channel. We demonstrate by examples that this decoder can be implemented to achieve substantial gains (compared to the standard locally-operating BP decoder) for short LDPC codes decoded on both memoryless and intersymbol interference Gaussian channels. Using the Margulis code example, we also show that the augmented decoder reduces the error floors. Finally, we discuss types of BP decoding errors and relate them to the augmented BP decoder.
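As an illustration of the message-saturation step the abstract describes, here is a minimal sketch (not the authors' exact multistage algorithm): a set of symbol nodes chosen by some selection heuristic has its channel LLRs replaced by saturated values for every sign combination, and the base BP decoder is rerun on each branch. The `bp_decode` callable and the saturation magnitude `sat` are assumptions.

```python
import itertools
import numpy as np

def augmented_bp(llr_channel, selected, bp_decode, sat=25.0):
    """One stage of an augmented-BP-style search (sketch): for each +/-
    assignment of the selected symbol nodes, pin their channel LLRs to
    saturated values and rerun the standard BP decoder.

    selected  : indices of symbol nodes chosen by a selection heuristic
    bp_decode : hypothetical callable(llrs) -> (bits, success_flag)
    sat       : saturation magnitude used to pin a symbol hard to 0 or 1
    """
    for signs in itertools.product((+1.0, -1.0), repeat=len(selected)):
        llrs = llr_channel.copy()
        llrs[list(selected)] = sat * np.asarray(signs)  # pin selected symbols
        bits, ok = bp_decode(llrs)
        if ok:                                          # converged to a codeword
            return bits
    return None                                         # all branches failed
```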
Iterative reliability-based decoding of low-density parity check codes
IEEE Journal on Selected Areas in Communications, 2001
In this paper, reliability-based decoding is combined with belief propagation (BP) decoding for low-density parity-check (LDPC) codes. At each iteration, the soft output values delivered by the BP algorithm are used as reliability values to perform reduced-complexity soft-decision decoding of the code considered. This approach bridges the error-performance gap between belief propagation decoding, which remains suboptimum, and maximum-likelihood decoding, which is too complex to implement for the codes considered. Trade-offs between decoding complexity and error performance are also investigated. In particular, a stopping criterion which reduces the average number of iterations at the expense of very little performance degradation is proposed for this combined decoding approach. Simulation results for several Gallager LDPC codes and difference-set cyclic codes with hundreds of information bits are given and discussed.
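A sketch of how such a hybrid might be wired together, under stated assumptions: after each BP iteration the hard decision is syndrome-checked (a simple stopping criterion; the paper's criterion may differ), and the BP soft outputs are passed as reliabilities to a reduced-complexity soft-decision stage. `osd_decode` is a hypothetical ordered-statistics-style decoder, not the paper's exact component.

```python
import numpy as np

def hybrid_bp_reliability(H, llr_channel, bp_iteration, osd_decode, max_iters=30):
    """Combine BP with reliability-based decoding (illustrative sketch).
    Stops as soon as either decoder yields a valid codeword.

    bp_iteration : hypothetical callable(llrs) -> updated a-posteriori LLRs
    osd_decode   : hypothetical callable(hard_bits, reliabilities) -> candidate or None
    """
    llr = llr_channel.copy()
    bits = (llr < 0).astype(int)
    for _ in range(max_iters):
        llr = bp_iteration(llr)
        bits = (llr < 0).astype(int)
        if not np.any(H @ bits % 2):          # BP already found a codeword
            return bits
        cand = osd_decode(bits, np.abs(llr))  # reliability-based reprocessing
        if cand is not None and not np.any(H @ cand % 2):
            return cand
    return bits                               # give up: return last BP decision
```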
Reduced-Complexity Decoding of LDPC Codes
IEEE Transactions on Communications, 2005
Various log-likelihood-ratio-based belief-propagation (LLR-BP) decoding algorithms and their reduced-complexity derivatives for LDPC codes are presented. Numerically accurate representations of the check-node update computation used in LLR-BP decoding are described. Furthermore, approximate representations of the decoding computations are shown to achieve a reduction in complexity by simplifying the check-node update, the symbol-node update, or both. In particular, two main approaches for simplified check-node updates are presented that are based on the so-called min-sum approximation coupled with either a normalization term or an additive offset term. Density evolution (DE) is used to analyze the performance of these decoding algorithms, to determine the optimum values of the key parameters, and to evaluate finite quantization effects. Simulation results show that these reduced-complexity decoding algorithms for LDPC codes achieve a performance very close to that of the BP algorithm. The unified treatment of decoding techniques for LDPC codes presented here provides flexibility in selecting the appropriate scheme from performance, latency, computational-complexity, and memory-requirement perspectives.
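A minimal sketch of the simplified check-node update the abstract describes: for each outgoing edge, the exact LLR-BP computation is replaced by the min-sum approximation (sign product times the minimum incoming magnitude over the other edges), corrected by a normalization factor or an additive offset. The values of `alpha` and `beta` below are illustrative; the paper selects them via density evolution.

```python
import numpy as np

def check_node_update_minsum(msgs, alpha=0.8, beta=0.0):
    """Min-sum check-node update with normalization/offset correction (sketch).

    msgs  : incoming variable-to-check LLRs at one check node
    alpha : normalization factor (< 1); beta : additive offset (>= 0)
    """
    msgs = np.asarray(msgs, dtype=float)
    out = np.empty_like(msgs)
    for i in range(len(msgs)):
        others = np.delete(msgs, i)          # all incoming edges except edge i
        sign = np.prod(np.sign(others))      # parity of the other signs
        mag = np.min(np.abs(others))         # min-sum magnitude approximation
        out[i] = sign * max(alpha * mag - beta, 0.0)  # normalize and/or offset
    return out
```

Setting beta=0 gives the normalized min-sum variant; alpha=1 with beta>0 gives the offset variant.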
A decoding algorithm for LDPC codes over erasure channels with sporadic errors
2010
An efficient decoding algorithm for low-density parity-check (LDPC) codes on erasure channels with sporadic errors (i.e., binary error-and-erasure channels with error probability much smaller than the erasure probability) is proposed and its performance is analyzed. A general single-error multiple-erasure (SEME) decoding algorithm is first described, which may in principle be used with any binary linear block code. The algorithm is optimum whenever the non-erased part of the received word is affected by at most one error, and it is capable of detecting multiple errors. An upper bound on the average block error probability under SEME decoding is derived for the linear random code ensemble. The bound is tight and easy to compute. The algorithm is then adapted to LDPC codes, resulting in a simple modification to a previously proposed efficient maximum-likelihood LDPC erasure decoder which exploits the sparseness of the parity-check matrix. Numerical results reveal that LDPC codes under efficient SEME decoding can closely approach the average performance of random codes.
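To ground the erasure-decoding step, here is a sketch of maximum-likelihood erasure recovery by solving the linear system over GF(2) formed by the erased columns of the parity-check matrix. A SEME decoder would additionally try flipping one non-erased bit when this system is inconsistent; plain Gaussian elimination is used here, whereas the paper's decoder exploits the sparseness of H for efficiency.

```python
import numpy as np

def solve_erasures_gf2(H, y, erased):
    """Solve H_E * x_E = s over GF(2), where H_E holds the columns of H at
    the erased positions and s is the syndrome of the received word with
    erasures set to 0. Returns the completed word, or None if the erasures
    are not resolvable or the non-erased part is inconsistent (errors).
    """
    m, n = H.shape
    known = np.ones(n, dtype=bool)
    known[erased] = False
    y0 = np.where(known, y, 0) % 2
    s = (H @ y0) % 2                                 # syndrome from known bits
    A = np.concatenate([H[:, erased], s[:, None]], axis=1) % 2
    e, row = len(erased), 0
    for col in range(e):                             # elimination over GF(2)
        piv = next((r for r in range(row, m) if A[r, col]), None)
        if piv is None:
            return None                              # rank deficient: ambiguous
        A[[row, piv]] = A[[piv, row]]
        for r in range(m):
            if r != row and A[r, col]:
                A[r] ^= A[row]                       # XOR row reduction
        row += 1
    if np.any(A[row:, -1]):
        return None                                  # inconsistent: errors present
    x = y0.copy()
    x[erased] = A[:e, -1]                            # recovered erasure values
    return x
```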
2013
The decoding of low-density parity-check (LDPC) codes operates over a redundant structure known as the bipartite graph, meaning that the full set of bit nodes is not absolutely necessary for decoder convergence. In 2008, Soyjaudah and Catherine designed a recovery algorithm for LDPC codes based on this assumption and showed that the error-correcting performance of their codes outperformed conventional LDPC codes. In this work, the use of the recovery algorithm is further explored to test the performance of LDPC codes as the number of iterations is progressively increased. For experiments conducted with small block lengths of up to 800 bits and up to 2000 iterations, the results interestingly demonstrate that, contrary to conventional wisdom, the error-correcting performance keeps improving as the number of iterations increases.
IET Communications, 2012
In this paper, we propose an improved version of the min-sum algorithm for low-density parity-check (LDPC) code decoding, which we call the "adaptive normalized BP-based" algorithm. Our decoder provides a compromise between the belief propagation and min-sum algorithms by adding an exponent offset to each variable node's intrinsic information in the check-node update equation. The extrinsic information from the min-sum decoder is then adjusted by applying a negative-power-of-two scale factor, which can be easily implemented by right-shifting the min-sum extrinsic information. The difference between our approach and other adaptive normalized min-sum decoders is that we select the normalization scale factor using a clear analytical approach based on underlying principles. Simulation results show that the proposed decoder outperforms the min-sum decoder and performs very close to the BP decoder, but with lower complexity.
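The right-shift trick the abstract mentions is simple to illustrate: multiplying a quantized extrinsic LLR by a scale factor 2^-shift reduces to an arithmetic right shift. The `shift` value below is illustrative; in the paper it would follow from the analytical selection of the normalization factor.

```python
def scale_extrinsic_right_shift(extrinsic_q, shift=1):
    """Scale a fixed-point min-sum extrinsic message by 2**-shift via an
    arithmetic right shift. Note that in Python (and in typical hardware),
    negative values round toward negative infinity under this shift.
    """
    return extrinsic_q >> shift
```

With shift=1, for example, an extrinsic value of 13 becomes 6, i.e. a scale factor of 0.5 with truncation.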
Decoding Techniques of Error Control Codes called LDPC
This paper deals with the design and decoding of an extremely powerful and flexible family of codes called low-density parity-check (LDPC) codes. LDPC codes can be designed to perform close to the capacity of many different types of channels with practical decoding complexity. It is conjectured that they can achieve the capacity of many channels, and indeed they have been shown to achieve the capacity of the binary erasure channel (BEC) under iterative decoding. This paper explains LDPC codes and their decoding techniques and provides an overview of LDPC.
Finite Alphabet Iterative Decoder for Low Density Parity Check Codes
Low-density parity-check (LDPC) codes are used in many applications because their error-correcting performance approaches the Shannon limit. Iterative belief propagation (BP) algorithms, as well as approximations of the BP algorithm, are used for decoding LDPC codes, but BP-based decoding suffers from the error floor problem: for finite-length codes, the error-rate curve can flatten out in the low-error-rate region due to the presence of cycles in the corresponding Tanner graph. This happens because the decoder converges to trapping sets and cannot correct all errors even when many more decoding iterations are carried out. Performance in the error floor region is important for applications that require very low error rates, such as flash memory and optical communications. To overcome this problem, a new type of decoder, the finite alphabet iterative decoder (FAID), was developed for LDPC codes. In this decoder, messages are represented by alphabets with a very small number of levels, and the variable-to-check messages are derived from the check-to-variable messages and the channel information through a predefined Boolean map designed to optimize the error-correcting capability in the error floor region. FAIDs can outperform floating-point BP decoders in the error floor region over the binary symmetric channel (BSC). In addition, multiple FAIDs with different map functions can be combined to further improve performance at the cost of higher complexity.
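A toy sketch of a FAID-style variable-node update over a small message alphabet (illustrative only: real FAID maps are lookup tables optimized for error-floor performance, not this simple quantized sum). The 7-level alphabet and thresholds below are assumptions.

```python
# Small message alphabet, e.g. 7 levels: {-3, -2, -1, 0, +1, +2, +3}.
LEVELS = (-3, -2, -1, 0, 1, 2, 3)

def faid_variable_update(channel_val, c2v_msgs, thresholds=(1, 3, 5)):
    """Toy FAID variable-node update: combine the channel value with the
    incoming check-to-variable messages, then re-quantize the result onto
    the small alphabet using illustrative thresholds.
    """
    total = channel_val + sum(c2v_msgs)
    sign = 1 if total >= 0 else -1
    mag = abs(total)
    level = sum(mag >= t for t in thresholds)  # map |total| onto {0, 1, 2, 3}
    return sign * level
```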