Lowered-Complexity Soft Decoding of Generalized LDPC Codes over AWGN Channels

Improving the Decoding Performance of High-Rate GLDPC Codes in Low Error-Rate Applications

This paper presents a new evaluation and performance comparison of a reliability-based iterative decoder for Generalized LDPC (GLDPC) codes with BCH subcodes, using the Soft-Input Soft-Output (SISO) Chase algorithm, against the same algorithm employing Hamming subcodes. Although a single-error-correcting code of length n is usually preferred over a double-error-correcting one of length 2n in terms of error performance over AWGN channels, this article shows that the GLDPC code with BCH subcodes surpasses the Hamming-based one at nearly the same overall code rate. It also converges in less than half the number of iterations required by the corresponding Hamming-based soft-decision decoding (SDD) systems. As a case study, a Gallager-based GLDPC code is used and the decoder is evaluated over the AWGN channel. The simulation results show the performance as a function of code block length, number of iterations, and the Chase algorithm parameters.
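To make the Chase mechanics concrete, here is a minimal Python sketch of a Chase-2 SISO pass over one subcode word. It assumes `hdd` is any hard-decision subcode decoder (Hamming or BCH) returning a codeword or None, and uses a Pyndiah-style fallback scaling `beta`; the metric is the usual correlation discrepancy, and all parameter values are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from itertools import product

def chase2_siso(llr, hdd, t=3, beta=0.5):
    """Minimal Chase-2 SISO sketch with Pyndiah-style soft output.

    llr  : channel LLRs for one subcode word (positive favours bit 0)
    hdd  : hard-decision subcode decoder (Hamming/BCH), returns a 0/1
           codeword array or None on failure -- assumed interface
    t    : number of least-reliable positions to perturb
    beta : fallback reliability when no competing codeword exists
    """
    hard = (llr < 0).astype(int)               # hard decision from LLR signs
    lrp = np.argsort(np.abs(llr))[:t]          # t least-reliable positions
    cands = []
    for flips in product((0, 1), repeat=t):    # the 2^t Chase test patterns
        test = hard.copy()
        test[lrp] ^= np.array(flips)
        cw = hdd(test)
        if cw is not None:
            # correlation discrepancy of the candidate w.r.t. the channel
            metric = np.sum(np.abs(llr)[cw != hard])
            cands.append((metric, cw))
    if not cands:
        return llr                             # all patterns failed: pass through
    cands.sort(key=lambda c: c[0])
    m_best, best = cands[0]
    soft = np.empty_like(llr, dtype=float)
    for j in range(len(llr)):
        # cheapest competitor that disagrees with the decision in bit j
        comp = next((m for m, c in cands if c[j] != best[j]), None)
        sign = 1 - 2 * best[j]                 # bit 0 -> +1, bit 1 -> -1
        soft[j] = sign * (comp - m_best) if comp is not None \
            else sign * (np.abs(llr[j]) + beta)
    return soft
```

Raising t enlarges the candidate list at a cost of 2^t hard decodings per subcode word, which is the usual complexity/performance knob of the Chase algorithm.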

Improved Decoding Algorithms of LDPC Codes Based on Reliability Metrics of Variable Nodes

IEEE Access, 2019

Informed dynamic scheduling (IDS) strategies for decoding low-density parity-check codes achieve superior error-correction performance and convergence speed. However, two problems remain in current IDS algorithms. The first is that they preferentially update the selected unreliable messages but do not guarantee that the updating is performed with reliable information. In this paper, a two-step message-selection strategy is introduced. On the basis of two reliability metrics and two types of variable-node residuals, a residual BP decoding algorithm, abbreviated TRM-TVRBP, is proposed; with this algorithm, the reliability of the updated messages can be improved. The second is the greediness problem prevalent in IDS-like algorithms, which arises mainly because the major computing resources are allocated to, or concentrated on, a few nodes and edges. To overcome this problem, the reliability-metric-based RBP algorithm (RM-RBP) is proposed, which forces every variable node to contribute its intrinsic information to the iterative decoding. At the same time, the algorithm forces the related variable nodes to be updated and gives every edge an equal opportunity of being updated. Simulation results show that both TRM-TVRBP and RM-RBP offer appealing convergence rates and error-correcting performance compared to previous IDS decoders over the additive white Gaussian noise (AWGN) channel.

INDEX TERMS: Low-density parity-check (LDPC) codes, dynamic selection strategies, dynamic updating strategies, residuals of variable nodes.
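The abstract does not spell out the two metrics, but the flavor of reliability-gated residual scheduling can be sketched in a few lines of Python; the residual definition and the threshold below are generic placeholders, not the paper's exact TRM-TVRBP rules.

```python
import numpy as np

def vn_residuals(llr_old, llr_new):
    """One simple variable-node residual: the magnitude of change of
    each node's aggregated LLR between two updates (a placeholder for
    the paper's two residual definitions)."""
    return np.abs(llr_new - llr_old)

def select_next_vn(llr_old, llr_new, rel_threshold=2.0):
    """Two-step selection sketch: keep only nodes whose current LLR
    magnitude clears a reliability threshold (assumed value), then take
    the largest residual among them; fall back to the global maximum."""
    res = vn_residuals(llr_old, llr_new)
    reliable = np.abs(llr_new) >= rel_threshold
    if reliable.any():
        idx = np.flatnonzero(reliable)
        return int(idx[np.argmax(res[idx])])
    return int(np.argmax(res))
```

The point of the gating step is that a large residual alone (the classic RBP criterion) may schedule an update carrying unreliable information; filtering by reliability first addresses exactly the first problem the abstract identifies.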

Iterative reliability-based decoding of low-density parity check codes

IEEE Journal on Selected Areas in Communications, 2001

In this paper, reliability-based decoding is combined with belief propagation (BP) decoding for low-density parity-check (LDPC) codes. At each iteration, the soft output values delivered by the BP algorithm are used as reliability values to perform reduced-complexity soft-decision decoding of the code considered. This approach bridges the error-performance gap between belief propagation decoding, which remains suboptimum, and maximum-likelihood decoding, which is too complex to implement for the codes considered. Trade-offs between decoding complexity and error performance are also investigated. In particular, a stopping criterion that reduces the average number of iterations at the expense of very little performance degradation is proposed for this combined decoding approach. Simulation results for several Gallager LDPC codes and difference-set cyclic codes with hundreds of information bits are given and elaborated.
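The overall structure of such a combined decoder can be sketched as a short glue loop; `bp_iteration` and `rb_decode` below are assumed interfaces standing in for any BP implementation and any reduced-complexity reliability-based decoder (Chase- or order-statistics-style), and the syndrome check is the usual stopping rule.

```python
import numpy as np

def bp_with_reliability_pass(llr_ch, bp_iteration, rb_decode, H, max_iter=50):
    """Glue-loop sketch of the combined scheme: run BP with a
    syndrome-based stopping criterion; if BP fails to converge, hand its
    posterior LLRs, as reliability values, to a reduced-complexity
    reliability-based decoder `rb_decode`. Both callables are assumed."""
    post, state = llr_ch.copy(), None
    for _ in range(max_iter):
        post, state = bp_iteration(llr_ch, state)  # one BP iteration
        hard = (post < 0).astype(int)
        if not (H @ hard % 2).any():               # valid codeword: stop early
            return hard
    return rb_decode(post)                         # reliability-based stage
```

The early exit is what keeps the average iteration count low: the expensive reliability-based stage only runs on the frames BP could not resolve.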

Bootstrapped Iterative Decoding Algorithms for Low Density Parity Check (LDPC) Codes

2010

The reliability-ratio-based weighted bit-flipping (RRWBF) algorithm is one of the best-performing hard-decision decoding algorithms. Several modifications have recently been made to this technique, either to improve performance or to lower complexity; the implementation-efficient reliability-ratio-based weighted bit-flipping variant was developed to decrease the processing time of the decoding process. In this paper we target improving the performance of a recently developed algorithm, the low-complexity implementation-efficient reliability-ratio-based weighted bit-flipping, by adding a bootstrap step to the decoding technique. The bootstrap step increases the reliability of the received bits, so that more bits are decoded correctly and performance improves. A modification of the bootstrap step is also presented to increase the performance further.
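The abstract leaves the bootstrap step implicit; the sketch below follows the classic bootstrapping idea that this line of work builds on, re-estimating weak bits from checks whose other bits are all reliable, before bit-flipping starts. The threshold `delta` and the combining rule are assumptions for illustration.

```python
import numpy as np

def bootstrap(y, H, delta=0.3):
    """Bootstrap-step sketch: re-estimate bits whose channel magnitude
    is below `delta` (assumed threshold) from checks all of whose other
    bits are reliable, before bit-flipping begins."""
    y = y.astype(float).copy()
    for j in np.flatnonzero(np.abs(y) < delta):    # weak bits only
        est = 0.0
        for i in np.flatnonzero(H[:, j]):          # checks touching bit j
            others = [k for k in np.flatnonzero(H[i]) if k != j]
            if others and all(np.abs(y[others]) >= delta):
                # parity-consistent estimate: sign product, min magnitude
                est += np.prod(np.sign(y[others])) * np.min(np.abs(y[others]))
        if est != 0.0:
            y[j] = est                             # boosted soft value
    return y
```

Since bit-flipping decisions are driven by these magnitudes, raising the reliability of the weakest bits before the first flip is what translates into the performance gain the abstract describes.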

Improved BP-Based Decoding Algorithms Integrated with GA for LDPC Codes

Advances in Intelligent and Soft Computing, 2009

We propose two improved decoding algorithms for Low-Density Parity-Check (LDPC) codes based on the Belief-Propagation (BP) algorithm combined with a Genetic Algorithm (GA). After giving a genetic interpretation of the Tanner graph, the GA is adopted to use the information passed from the variable nodes to the check nodes efficiently. Simulation results assert the superiority of the proposed algorithms over the BP algorithm in both BER (Bit Error Rate) and FER (Frame Error Rate). Finally, the key parameter of the developed algorithms is optimized.
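The abstract does not describe its genetic operators, so the following is only a generic GA-assisted refinement sketch, not the paper's scheme: candidate hard-decision words are sampled from BP posterior LLRs and evolved with the number of satisfied parity checks as fitness. Population size, generation count, and mutation rate are arbitrary illustrative values.

```python
import numpy as np

def ga_refine(post_llr, H, pop=20, gens=50, pmut=0.02, seed=0):
    """Generic GA refinement sketch (NOT the paper's exact operators):
    hard-decision candidates are sampled from BP posterior LLRs and
    evolved; fitness counts satisfied parity checks."""
    rng = np.random.default_rng(seed)
    n = len(post_llr)
    p1 = 1.0 / (1.0 + np.exp(post_llr))            # P(bit = 1) from LLR
    population = (rng.random((pop, n)) < p1).astype(int)

    def fitness(c):                                 # 0 means codeword found
        return -np.count_nonzero(H @ c % 2)

    for _ in range(gens):
        scores = np.array([fitness(c) for c in population])
        if scores.max() == 0:
            break                                   # valid codeword reached
        parents = population[np.argsort(-scores)[:pop // 2]]
        kids = parents.copy()
        for k in range(len(kids)):                  # one-point crossover
            cut = rng.integers(1, n)
            mate = parents[rng.integers(len(parents))]
            kids[k, cut:] = mate[cut:]
        kids ^= (rng.random(kids.shape) < pmut).astype(int)  # mutation
        population = np.vstack([parents, kids])
    return population[np.argmax([fitness(c) for c in population])]
```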

Efficient use of a hybrid decoding technique for LDPC codes

EURASIP Journal on Wireless Communications and Networking, 2014

A word error rate (WER) reducing approach for a hybrid iterative error-and-erasure decoding algorithm for low-density parity-check codes is described. A lower WER is achieved when the maximum number of iterations of the min-sum belief propagation decoder stage is set to certain specific values, which are code dependent. With a proper choice of decoder parameters, this approach reduces the WER by about two orders of magnitude for an equivalent decoding complexity. Computer simulation results are given for the efficient use of this hybrid decoding technique in the presence of additive white Gaussian noise.
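The two-stage structure can be sketched as follows; `minsum` and `erasure_decode` are assumed callables, the iteration cap of 12 is only a placeholder (the paper's point is precisely that this cap must be tuned per code), and the erasure count is likewise an assumption.

```python
import numpy as np

def hybrid_decode(llr, minsum, erasure_decode, H, max_iter=12):
    """Two-stage sketch of a hybrid error-and-erasure decoder: a capped
    min-sum stage, then an erasure stage on the least reliable bits of
    the min-sum output. All interfaces and counts are assumptions."""
    post = minsum(llr, max_iter)                    # stage 1: min-sum BP
    hard = (post < 0).astype(int)
    if not (H @ hard % 2).any():
        return hard                                 # stage 1 succeeded
    erasures = np.argsort(np.abs(post))[:H.shape[0] // 4]
    return erasure_decode(hard, erasures)           # stage 2: erasure decoding
```

Capping the min-sum stage early matters because an over-iterated, failed min-sum pass can harden wrong decisions, leaving the erasure stage with less useful reliability information.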

Generalized Low-Density Parity-Check Codes: Construction and Decoding Algorithms

Error Detection and Correction [Working Title]

Researchers have long sought codes that can be decoded with near-optimal decoding algorithms, and generalized LDPC (GLDPC) codes compare well with such codes. LDPC codes are well covered in the literature for both types of decoding, hard-decision (HDD) and soft-decision (SDD); iterative decoding of GLDPC codes, on both AWGN and BSC channels, has not been investigated as thoroughly. This chapter first describes the construction of GLDPC codes and then surveys the iterative decoding algorithms proposed for them on both channels so far. SISO decoders for the GLDPC component codes show excellent error performance at moderate and high code rates, but the complexity of such decoding algorithms is very high. When the hard-decision bit-flipping (BF) algorithm was applied to LDPC codes for its simplicity and speed, its performance remained far from the BSC capacity, so using LDPC codes with such algorithms in optical systems is a poor choice. GLDPC codes can be introduced as a good alternative to LDPC codes, since their performance under the BF algorithm can be improved, making them a competitive choice for optical communications. This chapter discusses the iterative HDD algorithms that improve the decoding error performance of GLDPC codes; SDD algorithms that maintain this performance at reduced decoding complexity are also described.
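As a reference point for the hard-decision discussion, here is the plain Gallager bit-flipping loop in Python. The flipping policy shown (flip the bits failing the most checks) is one common variant, assumed here for illustration; in a GLDPC decoder the rows of H would correspond to subcode constraints rather than single parity checks, which is where the chapter's improvements enter.

```python
import numpy as np

def gallager_bf(y_hard, H, max_iter=50):
    """Plain Gallager bit-flipping sketch (the baseline HDD the chapter
    starts from): flip the bits involved in the largest number of
    failed checks, repeat until the syndrome clears or iterations
    run out."""
    c = y_hard.copy()
    for _ in range(max_iter):
        syn = H @ c % 2                            # failed-check indicators
        if not syn.any():
            break                                  # all checks satisfied
        fails = H.T @ syn                          # failed checks per bit
        c[fails == fails.max()] ^= 1               # flip the worst offenders
    return c
```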

New low-density-parity-check decoding approach based on the hard and soft decisions algorithms

International Journal of Electrical and Computer Engineering (IJECE), 2023

Hard-decision algorithms are known to be more practical than soft-decision ones for low-density parity-check (LDPC) decoding, since they are less complex at the decoding level. On the other hand, soft-decision algorithms outperform hard-decision ones in terms of the bit error rate (BER). In order to minimize the BER and the gap between these two families of LDPC decoders, a new LDPC decoding algorithm is suggested in this paper, based on both the normalized min-sum (NMS) and modified weighted bit-flipping (MWBF) algorithms. The proposed algorithm is named normalized min-sum modified weighted bit-flipping (NMSMWBF); the MWBF is executed after the NMS algorithm. Simulations show that the proposed algorithm outperforms the NMS by 0.25 dB at a BER of 10^-8 over the additive white Gaussian noise (AWGN) channel, while the proposed NMSMWBF and the NMS remain at the same level of decoding complexity.
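For reference, here is a minimal Python sketch of the MWBF stage as usually defined in the bit-flipping literature. The paper executes it after NMS; this standalone sketch runs it directly on channel values, and the weighting factor alpha = 0.2 is an assumed value, not the paper's.

```python
import numpy as np

def mwbf(y, H, alpha=0.2, max_iter=50):
    """Modified weighted bit-flipping sketch: each failed check votes
    against its bits with a weight equal to the check's smallest channel
    magnitude, discounted by the bit's own reliability."""
    c = (y < 0).astype(int)
    # per-check weight: smallest channel magnitude among the check's bits
    wmin = np.array([np.min(np.abs(y)[H[i] == 1]) for i in range(H.shape[0])])
    for _ in range(max_iter):
        s = H @ c % 2
        if not s.any():
            break                                  # valid codeword
        # flipping metric: weighted failed checks minus own reliability
        E = H.T @ ((2 * s - 1) * wmin) - alpha * np.abs(y)
        c[np.argmax(E)] ^= 1                       # flip the most suspect bit
    return c
```

Running such a stage only on frames where NMS fails is what keeps the combined scheme at roughly the same decoding complexity as NMS alone.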

Decoding Techniques of Error Control Codes called LDPC

This paper deals with the design and decoding of an extremely powerful and flexible family of codes called low-density parity-check (LDPC) codes. LDPC codes can be designed to perform close to the capacity of many different types of channels with practical decoding complexity. It is conjectured that they can achieve the capacity of many channels and, indeed, they have been shown to achieve the capacity of the binary erasure channel (BEC) under iterative decoding. This paper explains LDPC codes and their decoding techniques, together with an overview of LDPC code design.
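The BEC result mentioned above is usually demonstrated with the peeling decoder, which the following minimal Python sketch illustrates; the -1 erasure marker and the dense parity-check matrix H are conventions assumed for this example.

```python
import numpy as np

def bec_peeling(received, H):
    """Peeling-decoder sketch for the BEC: repeatedly find a parity
    check with exactly one erased bit and solve for it. `received`
    holds 0/1 for known bits and -1 for erasures (assumed marker)."""
    c = received.copy()
    progress = True
    while progress:
        progress = False
        for i in range(H.shape[0]):
            bits = np.flatnonzero(H[i])
            erased = bits[c[bits] == -1]
            if len(erased) == 1:                   # exactly one unknown bit
                known = bits[c[bits] != -1]
                c[erased[0]] = int(c[known].sum() % 2)  # solve the parity
                progress = True
    return c                                       # remaining -1s: stuck set
```

Decoding fails exactly when every unsatisfied check still contains two or more erasures (a stopping set), which is the combinatorial object governing iterative BEC performance.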

Enhancing the Error-Correcting Performance of LDPC Codes through an Efficient Use of Decoding Iterations

2013

The decoding of Low-Density Parity-Check (LDPC) codes operates over a redundant structure known as the bipartite graph, meaning that the full set of bit nodes is not absolutely necessary for decoder convergence. In 2008, Soyjaudah and Catherine designed a recovery algorithm for LDPC codes based on this assumption and showed that the error-correcting performance of their codes outperformed that of conventional LDPC codes. In this work, the use of the recovery algorithm is explored further to test the performance of LDPC codes as the number of iterations is progressively increased. For experiments conducted with small block lengths of up to 800 bits and up to 2000 iterations, the results interestingly demonstrate that, contrary to conventional wisdom, the error-correcting performance keeps improving as the number of iterations increases.