Mahmoud Ahmadian Attari - Academia.edu
Papers by Mahmoud Ahmadian Attari
Cyclic liftings are proposed to lower the error floor of low-density parity-check (LDPC) codes. The liftings are designed to eliminate dominant trapping sets of the base code by removing the short cycles which form the trapping sets. We derive a necessary and sufficient condition for the cyclic permutations assigned to the edges of a cycle c of length ℓ(c) in the base graph such that the inverse image of c in the lifted graph consists of only cycles of length strictly larger than ℓ(c). The proposed method is universal in the sense that it can be applied to any LDPC code over any channel and for any iterative decoding algorithm. It also preserves important properties of the base code such as degree distributions, encoder and decoder structure, and in some cases, the code rate. The proposed method is applied to both structured and random codes over the binary symmetric channel (BSC). The error floor improves consistently as the lifting degree increases, and the results show significant improvements in the error floor compared to the base code, a random code of the same degree distribution and block length, and a random lifting of the same degree. Similar improvements are also observed when the codes designed for the BSC are applied to the additive white Gaussian noise (AWGN) channel.
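The cycle condition above has a simple computational form: for a cyclic lifting of degree N, the inverse image of a base-graph cycle consists only of strictly longer cycles exactly when the signed sum of the circulant shifts along the cycle is nonzero modulo N. A minimal sketch (the shift values and lifting degree below are invented for illustration):

```python
def lifts_to_longer_cycles(shifts, N):
    """Check whether a base-graph cycle breaks up under cyclic lifting.

    shifts : signed circulant shifts t(e) collected while walking the
             cycle once (edges traversed in one direction count +t(e),
             in the opposite direction -t(e)).
    N      : lifting degree.

    The inverse image of the cycle consists only of strictly longer
    cycles iff the signed shift sum is nonzero modulo N.
    """
    return sum(shifts) % N != 0

# A 4-cycle with shifts +1, -3, +2, -1 under lifting degree N = 5:
# signed sum is -1 (i.e. 4 mod 5), so the cycle breaks up.
print(lifts_to_longer_cycles([1, -3, 2, -1], 5))   # True
# Signed sum 0 mod 5: the cycle lifts to N disjoint copies of itself.
print(lifts_to_longer_cycles([1, -3, 2, 0], 5))    # False
```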
2018 29th Biennial Symposium on Communications (BSC), 2018
In this paper, we propose an efficient coding scheme for the two-link binary Chief Executive Officer (CEO) problem under the logarithmic loss criterion. The exact rate-distortion bound for the two-link binary CEO problem under logarithmic loss has been obtained by Courtade and Weissman. We propose an encoding scheme based on compound LDGM-LDPC codes to achieve the theoretical bounds. In the proposed encoding, a binary quantizer using LDGM codes and a syndrome coding employing LDPC codes are applied. An iterative joint decoder is also designed as a fusion center. The proposed CEO decoder is based on the sum-product algorithm and a soft estimator.
IEEE Transactions on Communications, 2019
The L-link binary Chief Executive Officer (CEO) problem under logarithmic loss is investigated in this paper. A quantization splitting technique is applied to convert the problem under consideration to a (2L − 1)-step successive Wyner-Ziv (WZ) problem, for which a practical coding scheme is proposed. In the proposed scheme, low-density generator-matrix (LDGM) codes are used for binary quantization while low-density parity-check (LDPC) codes are used for syndrome generation; the decoder performs successive decoding based on the received syndromes.
IEEE Communications Letters, 2016
In this work, we introduce an algorithm to enhance the success threshold of node-based verification-based (NB-VB) algorithms in compressed sensing (CS). The NB-VB algorithms have low computational complexity and are generally classified as iterative message-passing algorithms employed for signal recovery. However, similar to standard iterative decoding of low-density parity-check (LDPC) codes over the binary erasure channel (BEC) in the context of channel coding, these algorithms become inefficient on stopping sets. The proposed method, inspired by improved decoding algorithms over the BEC, enhances the performance of NB-VB algorithms by guessing the values of some unverified signal elements. Our simulation results indicate that although the proposed method improves the success threshold significantly, it does not cause any considerable increase in the complexity of standard NB-VB algorithms. Index Terms: compressed sensing (CS), low-density parity-check (LDPC) code, node-based verification-based (NB-VB) algorithms, success threshold, stopping set.
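The guessing idea is easiest to see in the BEC analogy the abstract draws: when standard peeling stalls on a stopping set, guess one erased bit and keep only the guess that satisfies all parity checks. A toy sketch (the parity-check matrix H and erasure pattern are constructed for illustration; this is the BEC analogue, not the NB-VB algorithm itself):

```python
def bec_peel(H, y):
    """Erasure (peeling) decoder with one level of guessing.

    H : parity-check matrix as a list of 0/1 rows.
    y : received word over the BEC, with None marking erasures.

    Standard peeling resolves any check with exactly one erasure; when
    it stalls on a stopping set, we guess the first unresolved bit and
    keep the guess under which peeling completes with all checks even.
    """
    def peel(y):
        y = list(y)
        progress = True
        while progress:
            progress = False
            for row in H:
                idx = [i for i, h in enumerate(row) if h and y[i] is None]
                if len(idx) == 1:
                    # check with a single erasure: solve for that bit
                    y[idx[0]] = sum(y[i] for i, h in enumerate(row)
                                    if h and i != idx[0]) % 2
                    progress = True
        return y

    y = peel(y)
    if None in y:                      # stalled on a stopping set
        i = y.index(None)
        for guess in (0, 1):
            trial = list(y)
            trial[i] = guess
            trial = peel(trial)
            if None not in trial and all(
                    sum(h * b for h, b in zip(row, trial)) % 2 == 0
                    for row in H):
                return trial
    return y

# Bits 0..2 form a stopping set of this code; plain peeling stalls,
# but only the guess x0 = 1 survives the parity checks.
H = [[1, 1, 0, 0, 0],
     [1, 0, 1, 0, 0],
     [0, 1, 1, 1, 1],
     [1, 1, 1, 0, 1]]
print(bec_peel(H, [None, None, None, 1, 1]))   # [1, 1, 1, 1, 1]
```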
IEEE Transactions on Communications, 2012
We propose a technique to design finite-length irregular low-density parity-check (LDPC) codes over the binary-input additive white Gaussian noise (AWGN) channel with good performance in both the waterfall and the error floor region. The design process starts from a protograph which embodies a desirable degree distribution. This protograph is then lifted cyclically to a certain block length of interest. The lift is designed carefully to satisfy a certain approximate cycle extrinsic message degree (ACE) spectrum. The target ACE spectrum is one with extremal properties, implying a good error floor performance for the designed code. The proposed construction results in quasi-cyclic codes which are attractive in practice due to simple encoder and decoder implementation. Simulation results are provided to demonstrate the effectiveness of the proposed construction in comparison with similar existing constructions. Index Terms: low-density parity-check (LDPC) codes, irregular LDPC codes, finite-length LDPC codes, error floor, cyclic lifting, quasi-cyclic LDPC codes, approximate cycle extrinsic message degree (ACE), ACE spectrum, protograph, AWGN channel.
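The ACE value referenced above is, for a given cycle, the sum of (d_v - 2) over the variable nodes on the cycle; the ACE spectrum then collects, for each short cycle length, the minimum ACE over all cycles of that length. A minimal sketch of the per-cycle computation (the example degrees are invented):

```python
def cycle_ace(variable_degrees):
    """ACE value of a cycle: sum of (d_v - 2) over the variable nodes
    on the cycle, where d_v is the degree of variable node v.  A larger
    ACE means more extrinsic message paths leaving the cycle, which
    correlates with better error-floor behaviour."""
    return sum(d - 2 for d in variable_degrees)

# A length-6 cycle passing through variable nodes of degrees 3, 2, 4:
print(cycle_ace([3, 2, 4]))   # 3
# A cycle through degree-2 nodes only contributes nothing extrinsic:
print(cycle_ace([2, 2, 2]))   # 0
```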
IEEE Transactions on Wireless Communications, 2019
In this paper, an adaptive channel estimation algorithm is proposed for the multi-user robust relay beamforming problem. We propose a norm-bounded channel uncertainty model for all of the channels. We employ the Extended Kalman Filter (EKF) and the Unscented Kalman Filter (UKF) for joint estimation of channel coefficients and beamforming weights, and propose a Markov model for the source-relay and relay-destination channels as well as the beamforming weights in the relays. The channel coefficients and beamforming weights are shown to be well estimated so as to minimize the total relay transmission power subject to a worst-case signal-to-interference-and-noise ratio (SINR) criterion at each receiver. As the main contribution of this paper, we propose an adaptive method for simultaneous estimation of the beamforming weights and channel state information, solving the associated optimization problem with estimation tools. Furthermore, we show that our algorithm outperforms interior-point-based methods for non-linear optimization. Compared with our recent work, in which a sub-optimal solution to the non-convex robust relay beamforming problem was provided, the proposed method has superior performance and lower complexity.
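The Markov channel model plus Kalman tracking described above can be sketched in the scalar case. The following toy example tracks a first-order (AR(1)) channel coefficient from noisy observations; the model parameters a, q, r are invented for illustration, and since both the transition and measurement maps are linear here, the EKF Jacobians reduce to constants:

```python
def ekf_track_channel(observations, a=0.95, q=0.01, r=0.1):
    """Scalar (extended) Kalman filter sketch for tracking a Markov
    channel coefficient h[k] = a*h[k-1] + w[k] (process variance q)
    from noisy observations z[k] = h[k] + v[k] (noise variance r).
    Illustrative only; not the paper's exact multi-user system."""
    x, P = 0.0, 1.0              # state estimate and its variance
    estimates = []
    for z in observations:
        # predict step: Jacobian F = a for the AR(1) transition
        x = a * x
        P = a * P * a + q
        # update step: Jacobian H = 1 for the direct observation
        K = P / (P + r)          # Kalman gain
        x = x + K * (z - x)
        P = (1.0 - K) * P
        estimates.append(x)
    return estimates

est = ekf_track_channel([1.0, 0.9, 1.1, 0.95])
```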
Wireless Personal Communications, 2015
In this paper, a robust relay beamforming problem is dealt with by using a stochastic approach with imperfect knowledge of channel state information. Specifically, we model the channel estimation error as a Gaussian random variable. Our objective is to minimize the total relay transmit power subject to an outage SINR constraint at each receiver. We aim to establish approximate probabilistic SINR-constrained formulations in the form of a convex conic optimization problem. It is shown that the original robust problem is non-convex, but it can be relaxed to a convex SDP problem using the semi-definite relaxation technique. In this paper, we compare our stochastic method with the worst-case robust method in terms of power-consumption efficiency and complexity. Simulation results verify that our robust method outperforms the worst-case approach.
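The semi-definite relaxation step mentioned above can be written generically as follows; this is a hedged sketch in which the matrices A and B_k merely stand in for the paper's actual power and SINR expressions:

```latex
% Original (non-convex) problem in the beamforming weight vector w:
\min_{\mathbf{w}} \; \mathbf{w}^{H}\mathbf{A}\,\mathbf{w}
\quad \text{s.t.} \quad
\mathbf{w}^{H}\mathbf{B}_{k}\,\mathbf{w} \;\ge\; \gamma_{k},
\qquad k = 1,\dots,K.

% Substituting X = w w^H gives a rank-constrained SDP:
\min_{\mathbf{X}\succeq 0} \; \operatorname{Tr}(\mathbf{A}\mathbf{X})
\quad \text{s.t.} \quad
\operatorname{Tr}(\mathbf{B}_{k}\mathbf{X}) \;\ge\; \gamma_{k},
\qquad \operatorname{rank}(\mathbf{X}) = 1.

% Dropping the rank-one constraint yields the convex SDP relaxation;
% a rank-one solution (or a randomization step) recovers w from X.
```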
IET Information Security, 2016
One of the approaches to modifying the McEliece cryptosystem to overcome its large key size is replacing binary Goppa codes with a new structured code. However, this modification makes such cryptosystems encounter some new attacks. There are a few modified McEliece cryptosystem variants which are known to be secure. One of them is the cryptosystem introduced by Baldi et al., which uses quasi-cyclic low-density parity-check (QC-LDPC) codes. This cryptosystem is still unbroken, as no efficient attack has been reported against it since 2008. In this study, an attack is applied to this cryptosystem which is feasible when the code length is a multiple of a power of 2. An important weakness of this kind of cryptosystem is also pointed out, namely the use of a too-low-weight intentional error vector. The authors establish a new security level for this cryptosystem which is applicable to other McEliece-like cryptosystems using QC-LDPC codes. This security level, for instance, is 29.18 times lower than previous ones in the case of n = 4 × 4096 when only one ciphertext is available. The gain of the attack in this study can be increased if more than one ciphertext is available.
Journal of Information Security and Applications
IET Communications
In this paper, a practical coding scheme is designed for the binary Wyner-Ziv (WZ) problem by using nested low-density generator-matrix (LDGM) and low-density parity-check (LDPC) codes. This scheme contains two steps in the encoding procedure. The first step applies binary quantization by employing LDGM codes, and the second uses the syndrome-coding technique by utilizing LDPC codes. The decoding algorithm of the proposed scheme is based on the Sum-Product (SP) algorithm with the help of side information available at the decoder. It is theoretically shown that the compound structure has the capability of achieving the WZ bound, and the proposed method approaches this bound by utilizing iterative message-passing algorithms in both encoding and decoding.
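The syndrome-coding step can be illustrated at toy scale: the encoder sends only the syndrome s = Hx over GF(2), and the decoder recovers x from s and the correlated side information y. Below, the small parity-check matrix, words, and brute-force decoder (standing in for sum-product) are all invented for the example:

```python
from itertools import product

# Toy parity-check matrix H (3 checks, 6 bits) -- illustrative only,
# standing in for the large sparse LDPC matrices used in practice.
H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [0, 0, 1, 1, 0, 1],
]

def syndrome(H, x):
    """Syndrome s = H x over GF(2): this is all the encoder transmits."""
    return tuple(sum(h * b for h, b in zip(row, x)) % 2 for row in H)

def decode(H, s, y):
    """Brute-force stand-in for the sum-product decoder: among all
    words with syndrome s, pick the one closest (in Hamming distance)
    to the side information y available only at the decoder."""
    best = None
    for x in product((0, 1), repeat=len(H[0])):
        if syndrome(H, x) == s:
            d = sum(a != b for a, b in zip(x, y))
            if best is None or d < best[0]:
                best = (d, x)
    return best[1]

x = (1, 0, 1, 1, 0, 0)    # source word
y = (1, 0, 1, 0, 0, 0)    # correlated side information (one bit flipped)
s = syndrome(H, x)        # 3 bits transmitted instead of 6
print(decode(H, s, y) == x)   # True
```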
Wireless Personal Communications
In this paper, a new method for decoding Low-Density Parity-Check (LDPC) codes based on Multi-Layer Perceptron (MLP) neural networks is proposed. Because all procedures in neural networks are processed in parallel, this method can be considered a viable alternative to the Message Passing Algorithm (MPA), which has high computational complexity. Our proposed algorithm runs with a soft criterion yet does not use probabilistic quantities to decide what the estimated codeword is. Although the neural decoder's performance is close to the error performance of the Sum-Product Algorithm (SPA), it is comparatively less complex. Therefore, the proposed decoder emerges as a new infrastructure for decoding LDPC codes.
IEEE Transactions on Vehicular Technology, 2016
Eprint Arxiv 1111 2430, Nov 10, 2011
We consider a relay network with two relays and two feedback links from the relays to the sender. To obtain the achievability results, we use the compress-and-forward and decode-and-forward strategies to superimpose facilitation and cooperation, analogous to the scheme proposed by Cover and El Gamal for the relay channel. In addition to random binning, we use deterministic binning to perform restricted decoding. We show how to use the feedback links for cooperation between the sender and the relays to transmit the information which is compressed at the sender and the relays.
We consider a relay network with two relays and a feedback link from the receiver to the sender. To obtain the achievability result, we use compress-and-forward and random binning techniques combined with deterministic binning and restricted decoding. Moreover, we use a joint decoding technique to decode the relays' compressed information to achieve a higher rate at the receiver.
IEEE Transactions on Information Forensics and Security, 2016
IEEE Transactions on Communications, 2016