Nikos Kanistras | University of Patras
Papers by Nikos Kanistras
2016 5th International Conference on Modern Circuits and Systems Technologies (MOCAST), 2016
2012 International Conference on Embedded Computer Systems (SAMOS), 2012
ABSTRACT This paper introduces a methodology for prototyping forward error correction (FEC) architectures, oriented to system verification and characterization. A complete design flow is described, which satisfies the requirements for error-free hardware design and acceleration of FEC simulations. FPGA devices give the designer the ability to observe rare events, thanks to the tremendous speed-up of FEC operations. A Matlab-based system assists the investigation of the impact of very rare decoding failure events on FEC system performance, and the search for solutions aimed at parameter optimization and BER performance improvement of LDPC codes in the error floor region. Furthermore, the development of an embedded system, which offers remote access to the system under test and automates the verification process, is explored. The prototyping approach presented here exploits the high processing speed of FPGA-based emulators together with the observability and usability of software-based models.
2008 IEEE Workshop on Signal Processing Systems, 2008
In this paper the impact of the roundoff error on the decisions taken by the Log Sum-Product LDPC decoding algorithm is studied. The mechanism by means of which roundoff alters the decisions of a finite-word-length implementation of the algorithm, compared to the infinite-precision case, is analyzed, and a corresponding theoretical model is developed. Experimental results confirm the validity of the proposed model.
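The effect studied here can be illustrated with a small sketch: the Log Sum-Product check-node update computed in double precision versus the same update fed with fixed-point LLRs. The word-length parameters and inputs below are illustrative, not taken from the paper:

```python
import numpy as np

def checknode_logsp(llrs):
    """Log Sum-Product check-node update via the tanh rule:
    L_out = 2 * atanh( prod_j tanh(L_j / 2) )."""
    return 2.0 * np.arctanh(np.prod(np.tanh(np.asarray(llrs) / 2.0)))

def quantize(x, frac_bits=3, total_bits=6):
    """Emulate a signed fixed-point word: round to steps of 2^-frac_bits
    and saturate to the representable range."""
    step = 2.0 ** -frac_bits
    max_val = 2.0 ** (total_bits - 1 - frac_bits) - step
    return np.clip(np.round(x / step) * step, -max_val, max_val)

llrs = [1.7, -0.4, 2.9]
exact = checknode_logsp(llrs)                      # "infinite precision" reference
rounded = checknode_logsp(quantize(np.array(llrs)))  # finite-word-length input
# When |L_out| is small, the roundoff perturbation can flip the hard decision.
```

Here the sign (the hard decision) survives quantization, but the magnitude is perturbed; the paper's model characterizes when such perturbations change decisions.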
2008 3rd International Symposium on Wireless Pervasive Computing, 2008
2012 19th IEEE International Conference on Electronics, Circuits, and Systems (ICECS 2012), 2012
ABSTRACT Contemporary and next-generation wireless, wired and optical telecommunication systems rely on sophisticated forward error-correction (FEC) schemes to facilitate operation at particularly low Bit Error Rate (BER). The ever-increasing demand for high information throughput, combined with requirements for moderate cost and low-power operation, renders the design of FEC systems a challenging task. The definition of the parity-check matrix of an LDPC code is a crucial task, as it determines both the computational complexity of the decoder and the error-correction capabilities. However, the characterization of the corresponding code at low BER is a computationally intensive task that cannot be carried out with software simulation. We here demonstrate procedures that involve hardware acceleration to facilitate code design. In addition to code design, verification of operation at low BER requires strategies to prove correct operation of hardware, thus rendering FPGA prototyping a necessity. This paper demonstrates design techniques and verification strategies that allow proof of operation of a gigabit-rate FEC system at low BER, exploiting state-of-the-art Virtex-7 technology. It is shown that by occupying up to 70% to 80% of the slices of a Virtex-7 XC7V485T device, iterative decoding at gigabit rate can be verified.
2012 IEEE Workshop on Signal Processing Systems, 2012
ABSTRACT In this paper we investigate how the error due to the finite-word-length representation of the LLRs propagates through the decoding procedure, for the case of LDPC codes. A model is developed that quantifies the impact of the quantization error of the LLRs on the decoding performance, in the case of iterative decoding using the Min-Sum algorithm. An earlier model, also developed by the authors, exploits the new one in order to estimate the performance of various LLR quantization schemes. The proposed model's estimates are compared with experimental BER results for validation.
2010 IEEE Workshop On Signal Processing Systems, 2010
In this paper we quantify the power of the noise due to quantization and saturation of the LLRs. Subsequently, a model is constructed using the obtained noise power expressions that can be used to estimate the performance of various LLR quantization schemes. The model is validated by comparing its estimates with experimental BER results for an LDPC-based system that uses the ...
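The noise source being quantified can be sketched empirically: a uniform quantizer with saturation applied to Gaussian-distributed LLRs produces granular noise (classically approximated as Δ²/12 in power) plus an overload term from clipping. The step size, saturation level and LLR statistics below are illustrative assumptions, not the paper's expressions:

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize_llr(llr, step=0.25, sat=6.0):
    """Uniform quantizer with saturation at +/-sat."""
    return np.clip(np.round(llr / step) * step, -sat, sat)

# Gaussian-distributed LLRs, as produced by a BPSK/AWGN channel model
# (mean and variance here are arbitrary illustrative values).
llr = rng.normal(loc=2.0, scale=1.5, size=100_000)
noise = quantize_llr(llr) - llr

measured_power = np.mean(noise ** 2)   # total quantization + saturation noise
granular_power = 0.25 ** 2 / 12        # classic Delta^2/12 granular approximation
```

The measured power exceeds the granular Δ²/12 term because of the additional overload (clipping) contribution, which is the split the paper's expressions capture analytically.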
2011 17th International Conference on Digital Signal Processing (DSP), 2011
This work improves the performance of LDPC decoders that implement iterative algorithms dominated by oscillatory behavior, such as the offset Min-Sum algorithm, in cases of unsuccessful decoding of received words. The proposed LDPC decoder is applied to the decoding procedure of an LDPC algorithm by selecting one of the N different estimated codewords (each produced at an iteration ...
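One way to read the selection step: among the tentative hard-decision words produced across iterations of an oscillating decoder, keep the one that violates the fewest parity checks (lowest syndrome weight). The toy parity-check matrix and candidate words below are illustrative, not from the paper:

```python
import numpy as np

# Toy parity-check matrix of a length-6 code (illustrative only).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]], dtype=np.uint8)

def unsatisfied_checks(H, word):
    """Number of parity checks a hard-decision word violates (syndrome weight)."""
    return int(np.sum((H @ word) % 2))

# Tentative hard decisions from successive iterations of an oscillating decoder.
candidates = [np.array([1, 0, 1, 1, 1, 0], dtype=np.uint8),  # satisfies all checks
              np.array([1, 1, 0, 0, 1, 0], dtype=np.uint8),  # violates one check
              np.array([0, 1, 1, 1, 0, 0], dtype=np.uint8)]  # violates one check

best = min(candidates, key=lambda w: unsatisfied_checks(H, w))
```

In a real decoder the candidates would be the per-iteration estimates kept in a small buffer, and the syndrome check is already computed as the stopping criterion.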
2011 17th International Conference on Digital Signal Processing (DSP), 2011
The error-correcting capability of LDPC-based systems at low noise levels is often dominated by the so-called error floor, a region of the BER vs. noise-level plot where BER reduction slows down as the noise level decreases. The error-floor behavior is commonly attributed to the sub-optimality of iterative decoding algorithms on graphs with cycles, which become trapped in local-minimum solutions. Trapping of the decoder depends on several factors, including the decoding algorithm: a particular received word that is not decoded by a certain algorithm may be decoded successfully by a different one. The proposed Multiple Decoder exploits this diverse behavior by decoding a particular received word with N different algorithms, composing an LDPC decoder that achieves very low BER in the error-floor region of operation, fewer iterations and higher throughput than the equivalent single-decoder system.
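The diversity idea can be sketched as a fallback chain: run each algorithm in turn and stop at the first that yields a valid codeword (zero syndrome). The decoder stubs below are stand-ins for real algorithms such as Sum-Product or offset Min-Sum; everything here is illustrative, not the paper's implementation:

```python
import numpy as np

# Toy parity-check matrix of a length-6 code (illustrative only).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]], dtype=np.uint8)

def is_codeword(H, word):
    """A word is valid iff its syndrome H*w (mod 2) is all-zero."""
    return not np.any((H @ word) % 2)

# Stand-ins for N distinct decoding algorithms; each would normally run
# iterative decoding on the received LLRs. Here they return fixed words.
def decoder_a(llr):
    return np.array([1, 1, 0, 0, 1, 0], dtype=np.uint8)  # fails one parity check

def decoder_b(llr):
    return np.array([1, 0, 1, 1, 1, 0], dtype=np.uint8)  # valid codeword

def multiple_decode(llr, decoders):
    """Try each algorithm in turn; return the first valid codeword, else None."""
    for dec in decoders:
        word = dec(llr)
        if is_codeword(H, word):
            return word
    return None

result = multiple_decode(np.zeros(6), [decoder_a, decoder_b])
```

Since most received words are decoded by the first algorithm, the later stages run only on the rare trapped words, which is where the iteration and throughput savings come from.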
2011 18th IEEE International Conference on Electronics, Circuits, and Systems, 2011
ABSTRACT This paper presents the algorithms and corresponding hardware architectures developed in the context of the nexgen miliwave project, which compose the digital baseband processor of a 60 GHz point-to-point link. The nexgen baseband processor provides all basic functionality required of a digital transmitter and receiver, including filtering, synchronization, equalization, and error correction. The selected techniques are capable of compensating for impairments due to the millimeter-wave front-end while supporting a throughput of more than one Gbps, at moderate hardware cost. As the nexgen link targets backhauling applications, a particularly low bit-error-rate specification of 10⁻¹² has been adopted. Meeting this specification, as well as the performance and complexity constraints, requires the adoption of sophisticated FEC techniques. Furthermore, extensive verification tasks need to be carried out, including hardware prototyping.
SiPS 2013 Proceedings, 2013
In this paper, a new theoretical model that describes the impact of the approximation error on the decisions taken by LDPC decoders is discussed. In particular, the theoretical model extends previous results and reconstructs the mechanism by means of which the approximation error alters the decisions of the decoding algorithm with respect to those taken by the optimal decoding algorithm, namely Log Sum-Product. We focus on the most popular algorithm for LDPC decoding, namely Min-Sum, and its popular modifications, normalized and offset Min-Sum. The model is applied to all of these decoding algorithms, which are in fact approximations of the Log Sum-Product. Moreover, a method that exploits the output of the proposed model in order to estimate the decoding performance is also proposed. Finally, experimental results confirm the validity of both the model and the method, demonstrating the usefulness of this contribution towards accurate prediction of decoding behavior without relying on time-consuming simulations.
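The approximation under study can be made concrete for one check-node output: Min-Sum replaces the tanh-rule magnitude with the minimum incoming magnitude, which overestimates reliability; normalized Min-Sum scales it by α < 1 and offset Min-Sum subtracts β. A sketch of the three updates (the α and β values are illustrative, not tuned):

```python
import numpy as np

def checknode_minsum(in_llrs, alpha=1.0, beta=0.0):
    """Min-Sum check-node output for one edge, given the other edges' LLRs.
    alpha < 1 gives normalized Min-Sum; beta > 0 gives offset Min-Sum."""
    in_llrs = np.asarray(in_llrs, dtype=float)
    sign = np.prod(np.sign(in_llrs))
    mag = np.min(np.abs(in_llrs))
    return sign * max(alpha * mag - beta, 0.0)

def checknode_logsp(in_llrs):
    """Exact Log Sum-Product (tanh rule), for comparison."""
    in_llrs = np.asarray(in_llrs, dtype=float)
    return 2.0 * np.arctanh(np.prod(np.tanh(in_llrs / 2.0)))

edges = [1.2, -3.0, 0.8]
exact = checknode_logsp(edges)                    # optimal reference
plain = checknode_minsum(edges)                   # overestimates |exact|
norm = checknode_minsum(edges, alpha=0.8)         # scaled correction
offs = checknode_minsum(edges, beta=0.15)         # additive correction
```

Both corrections shrink the Min-Sum magnitude back towards the exact value; the approximation error the model tracks is precisely the residual gap between these outputs and the Log Sum-Product one.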
Journal of Signal Processing Systems, 2011
In this paper the impact of the approximation error on the decisions taken by LDPC decoders is studied. In particular, we analyze the mechanism by means of which the approximation error alters the decisions of a finite-word-length implementation of the decoding algorithm, with respect to the decisions taken in the infinite-precision case, approximated here by double-precision floating-point. We focus on ...
2011 IEEE Workshop on Signal Processing Systems (SiPS), 2011
Abstract We consider the problem of rate-compatible (RC) encoding and RC puncturing of LDPC codes. The proposed encoder is based on a modification of the MacKay encoding scheme. The introduced modification enables the application of the MacKay scheme to quasi ...