Finite Alphabet Iterative Decoding of LDPC Codes with Coarsely Quantized Neural Networks

Designing Finite Alphabet Iterative Decoders of LDPC Codes Via Recurrent Quantized Neural Networks

IEEE Transactions on Communications

In this paper, we propose a new approach to designing finite alphabet iterative decoders (FAIDs) for Low-Density Parity-Check (LDPC) codes over the binary symmetric channel (BSC) via recurrent quantized neural networks (RQNNs). We focus on the linear FAID class and use RQNNs to optimize the message update look-up tables by jointly training their message levels and RQNN parameters. Existing neural networks for channel coding work well over the Additive White Gaussian Noise Channel (AWGNC) but are inefficient over the BSC, because the BSC feeds only a finite set of channel values into the neural network. We propose the bit error rate (BER) as the loss function to train the RQNNs over the BSC. The low-precision activations in the RQNN and the quantization in the BER loss cause a critical issue: their gradients vanish almost everywhere, making it difficult to apply classical backpropagation. We leverage straight-through estimators as surrogate gradients to tackle this issue and provide a joint training scheme. We show that the framework is flexible with respect to code length and column weight. Specifically, in the high column weight case, it automatically designs low-precision linear FAIDs with superior performance, lower complexity, and faster convergence than floating-point belief propagation algorithms in the waterfall region.

Index Terms: Binary symmetric channel, finite alphabet iterative decoders, low-density parity-check codes, quantized neural network, straight-through estimator.

I. INTRODUCTION

With the great potential of solving problems related to optimization, function approximation, inference, etc., deep neural networks (DNNs) have drawn intensive attention in the communication, signal processing, and channel coding communities over the past three years. One popular way to use neural networks (NNs) in these areas is to combine the model knowledge (or the prototype algorithms) with NNs, and to use the optimization techniques of NNs to improve the model.
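The straight-through estimator mentioned in the abstract can be sketched as follows. This is an illustrative NumPy fragment, not the authors' implementation: the function names, the three-level message alphabet, and the clipping threshold are assumptions for illustration. The forward pass snaps each value to the nearest allowed message level (a step function with zero gradient almost everywhere); the backward pass replaces that zero gradient with the identity, optionally masked to the active region.

```python
import numpy as np

def quantize_ste_forward(x, levels):
    # Forward pass: snap each value to the nearest allowed message level.
    # (levels is the finite message alphabet, e.g. [-1, 0, 1].)
    idx = np.argmin(np.abs(x[..., None] - levels), axis=-1)
    return levels[idx]

def quantize_ste_backward(grad_out, x, clip=1.0):
    # Backward pass (straight-through estimator): pretend the quantizer
    # was the identity, passing the upstream gradient through unchanged,
    # masked to the region |x| <= clip where the identity approximation
    # is reasonable. The clip value here is a hypothetical choice.
    return grad_out * (np.abs(x) <= clip)

# Example: quantize to a 3-level alphabet, then propagate a gradient.
levels = np.array([-1.0, 0.0, 1.0])
x = np.array([0.8, -0.2, 1.4])
q = quantize_ste_forward(x, levels)       # -> [ 1.  0.  1.]
g = quantize_ste_backward(np.ones(3), x)  # -> [ 1.  1.  0.]
```

Note that without the surrogate backward pass, the true gradient of the quantizer is zero almost everywhere, so no learning signal would reach the trainable message levels.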

Novel LDPC Decoder via MLP Neural Networks

In this paper, a new method for decoding Low-Density Parity-Check (LDPC) codes, based on Multi-Layer Perceptron (MLP) neural networks, is proposed. Because all procedures in neural networks are processed in parallel, this method can be considered a viable alternative to the Message Passing Algorithm (MPA), which has high computational complexity. Our proposed algorithm operates on a soft criterion yet does not use probabilistic quantities to decide the estimated codeword. Although the neural decoder's error performance is close to that of the Sum-Product Algorithm (SPA), it is comparatively less complex. Therefore, the proposed decoder emerges as a new infrastructure for decoding LDPC codes.

Finite alphabet iterative decoders for LDPC codes surpassing floating-point iterative decoders

Electronics Letters, 2011

HAL is a multidisciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

On iterative decoding in some existing systems

IEEE Journal on Selected Areas in Communications, 2001

Iterative decoding is used to achieve backward-compatible performance improvement in several existing systems. Concatenated coding and iterative decoding are first set up using composite mappings, so that various applications in digital communication and recording can be described in a concise and uniform manner. An ambiguity zone detection (AZD) based iterative decoder, operating on generalized erasures, is described as an alternative for concatenated systems where turbo decoding cannot be performed. The described iterative decoding techniques are then applied to selected wireless communication and digital recording systems. Simulation results and the utilization of decoding gains are briefly discussed.

Multi Variable-layer Neural Networks for Decoding Linear Codes

2020 Iran Workshop on Communication and Information Theory (IWCIT), 2020

The belief propagation algorithm is a state-of-the-art decoding technique for a variety of linear codes such as LDPC codes. The iterative structure of this algorithm is reminiscent of a neural network with multiple layers. Indeed, this similarity has recently been exploited to improve the decoding performance by tuning the weights of the equivalent neural network. In this paper, we introduce a new network architecture that increases the number of variable-node layers while keeping the check-node layers unchanged. The changes are applied in such a way that the decoding performance of the network becomes independent of the transmitted codeword; hence, training with only the all-zero codeword is sufficient. Simulation results on a number of well-studied linear codes indicate that, besides improving the decoding performance, the new architecture is also simpler than some of the existing decoding networks.
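The correspondence between a BP iteration and a trainable network layer can be illustrated with a minimal variable-node update in the LLR domain. This is a sketch under assumptions: the function name, the single shared weight, and the array layout (one row per variable node, one column per incident edge) are ours, not the paper's exact architecture, which uses per-edge weights and a modified layer structure.

```python
import numpy as np

def variable_node_layer(channel_llr, check_msgs, weight):
    # One variable-node layer of a BP-like neural decoder.
    # channel_llr: shape (n, 1), the channel LLR of each variable node.
    # check_msgs:  shape (n, d), incoming check-to-variable messages.
    # weight:      a trainable scalar scaling the incoming messages
    #              (a simplification; neural BP typically learns one
    #              weight per edge).
    # The extrinsic rule excludes, on each edge, the message that
    # arrived on that same edge.
    total = channel_llr + weight * check_msgs.sum(axis=-1, keepdims=True)
    return total - weight * check_msgs

# Example: one degree-2 variable node.
llr = np.array([[1.0]])
msgs = np.array([[2.0, 3.0]])
out = variable_node_layer(llr, msgs, 0.5)  # -> [[2.5 2. ]]
```

Training then adjusts the weight(s) by gradient descent on a decoding loss, exactly as in an ordinary feed-forward network, while the message-passing schedule fixes the network topology.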
