Computing the Minimum Distance of Linear Codes by the Error Impulse Method

Optimizing the free distance of Error-Correcting Variable-Length Codes

2010 IEEE International Workshop on Multimedia Signal Processing, 2010

This paper considers the optimization of Error-Correcting Variable-Length Codes (EC-VLC), a class of joint source-channel codes. The aim is to find a prefix-free codebook with the largest possible free distance for a given set of codeword lengths ℓ = (ℓ1, ℓ2, ..., ℓM). The proposed approach orders all possible codebooks associated with ℓ on a tree, and then applies an efficient branch-and-prune algorithm to find a codebook with maximal free distance. Three methods for building the tree of codebooks are presented and their efficiency is compared.
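The free distance of a VLC can also be estimated by plain brute force, which clarifies what the branch-and-prune search is optimizing. The sketch below (an illustration, not the paper's algorithm) enumerates all codeword concatenations up to a bit-length budget, assuming a prefix-free codebook, and returns the minimum Hamming distance between distinct equal-length sequences; the codebook and `max_len` are illustrative choices.

```python
from itertools import product

def free_distance_bound(codebook, max_len=12):
    """Estimate the free distance of a prefix-free VLC codebook by
    enumerating all codeword concatenations of up to max_len bits and
    taking the minimum Hamming distance between distinct sequences of
    equal length. This is an upper bound from bounded-length search."""
    seqs = {""}
    all_seqs = set()
    while seqs:
        nxt = set()
        for s in seqs:
            for w in codebook:
                t = s + w
                if len(t) <= max_len:
                    nxt.add(t)
                    all_seqs.add(t)
        seqs = nxt
    # Group sequences by length and compare all pairs within a group.
    by_len = {}
    for s in all_seqs:
        by_len.setdefault(len(s), []).append(s)
    best = None
    for group in by_len.values():
        for i in range(len(group)):
            for j in range(i + 1, len(group)):
                d = sum(a != b for a, b in zip(group[i], group[j]))
                if best is None or d < best:
                    best = d
    return best
```

For the toy codebook {"0", "11"}, for example, the closest equal-length concatenations differ in two positions.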

Unique and Minimum Distance Decoding of Linear Codes with Reduced Complexity

Communications in Computer and Information Science, 2011

Abstract-We show that for (systematic) linear codes the time complexity of unique decoding is O(n²·q^(nR·H(δ/(2R)))) and the time complexity of minimum distance decoding is O(n²·q^(nR·H(δ/R))), where R is the code rate, δ = d/n, and H is the q-ary entropy function. The proposed algorithm inspects all error patterns of weight less than d/2 or d, respectively, in the information set of the received message. Index Terms-nearest neighbor decoding, unique decoding, bounded distance decoding, minimum distance decoding.
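The information-set inspection strategy can be sketched for a small binary example. The code below uses a systematic [7,4] Hamming code (minimum distance 3) with an illustrative parity part P, not the paper's implementation: every error pattern up to a weight budget is applied to the information positions, the result is re-encoded, and the candidate closest to the received word is kept. For unique decoding the budget is ⌊(d-1)/2⌋; for minimum distance decoding it would be d-1.

```python
from itertools import combinations

# Illustrative systematic [7,4] Hamming code (d = 3), G = [I4 | P].
P = [(1, 1, 0), (1, 0, 1), (0, 1, 1), (1, 1, 1)]
k = 4

def encode(msg):
    # Codeword = message followed by parity bits msg * P (mod 2).
    return list(msg) + [sum(msg[i] * P[i][j] for i in range(k)) % 2
                        for j in range(3)]

def decode(received, max_weight):
    """Flip every error pattern of weight <= max_weight in the
    information set (first k positions), re-encode, and keep the
    candidate codeword closest to the received word."""
    best, best_d = None, None
    for w in range(max_weight + 1):
        for pos in combinations(range(k), w):
            info = list(received[:k])
            for p in pos:
                info[p] ^= 1
            cand = encode(info)
            d = sum(a != b for a, b in zip(cand, received))
            if best_d is None or d < best_d:
                best, best_d = cand, d
    return best
```

A single channel error, whether in an information or a parity position, is corrected with `max_weight = 1`, since the error-free information set re-encodes to the transmitted codeword.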

On the Computing of the Minimum Distance of Linear Block Codes by Heuristic Methods

The evaluation of the minimum distance of linear block codes remains an open problem in coding theory, and its true value is hard to determine by classical methods; for this reason, the problem has been attacked in the literature with heuristic techniques such as genetic algorithms and local search algorithms. In this paper we propose two approaches to this hard problem. The first is based on genetic algorithms and yields good results compared with previous work also based on genetic algorithms. The second is a new randomized algorithm which we call the "Multiple Impulse Method (MIM)": the principle is to search for codewords locally around the all-zero codeword perturbed by a minimum level of noise, anticipating that the resulting nearest nonzero codewords will most likely include a minimum-weight codeword, whose Hamming weight equals the minimum distance of the linear code.
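The impulse idea can be illustrated on a toy code with an exhaustive soft-decision decoder standing in for a practical one. The sketch below (illustrative parameters throughout; a [7,4] Hamming codebook, not a code from the paper) transmits the all-zero codeword under BPSK, injects a strong error impulse at a random position plus light Gaussian noise, and records the Hamming weight of any nonzero codeword the decoder falls into; the smallest weight seen upper-bounds the minimum distance.

```python
import itertools
import random

# Toy codebook (illustrative): all 16 codewords of a systematic [7,4]
# Hamming code, G = [I4 | P]; its minimum distance is 3.
P = [(1, 1, 0), (1, 0, 1), (0, 1, 1), (1, 1, 1)]

def encode(msg):
    return list(msg) + [sum(msg[i] * P[i][j] for i in range(4)) % 2
                        for j in range(3)]

CODEBOOK = [encode(list(m)) for m in itertools.product([0, 1], repeat=4)]

def ml_decode(soft):
    # Nearest codeword in Euclidean distance; BPSK mapping 0 -> +1, 1 -> -1.
    return min(CODEBOOK,
               key=lambda c: sum((s - (1 - 2 * b)) ** 2
                                 for s, b in zip(soft, c)))

def multiple_impulse_estimate(trials=300, amplitude=4.0, sigma=0.2, seed=1):
    """Perturb the all-zero codeword with a strong error impulse at a
    random position (plus light Gaussian noise), decode, and record the
    Hamming weight of any nonzero codeword the decoder falls into."""
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        soft = [1.0 + rng.gauss(0, sigma) for _ in range(7)]  # all-zero word
        soft[rng.randrange(7)] -= amplitude                   # error impulse
        w = sum(ml_decode(soft))
        if w > 0 and (best is None or w < best):
            best = w
    return best
```

With a sufficiently large impulse amplitude the decoder is pushed across the decision boundary into a low-weight neighbor of the all-zero codeword, here one of the weight-3 codewords.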

Error Correction in Linear Codes with Computer

fbe.dumlupinar.edu.tr, 2008

In this study, we have developed a computer program implementing the syndrome decoding method for correcting incorrectly received codewords. The program generates the code from a given generator matrix, calculates the Hamming distance of this code, and finds the Slepian (1960) standard array. It outputs the decoding table of the code and the error pattern associated with each incorrectly received codeword. We used the Maple computer algebra system for the calculations [5]. The algorithm produces different results for different generator matrices. Here, we have chosen a test problem with codeword length 4, but the algorithm can also handle longer codes.

Error-Correction Capability of Binary Linear Codes

IEEE Transactions on Information Theory, 2005

The monotone structure of correctable and uncorrectable errors given by the complete decoding for a binary linear code is investigated. New bounds on the error-correction capability of linear codes beyond half the minimum distance are presented, both for the best codes and for arbitrary codes under some restrictions on their parameters. It is proved that some known codes of low rate are as good as the best codes in an asymptotic sense.

Blockwise Repeated Burst Error Correcting Linear Codes

eiris.it

This paper presents a lower and an upper bound on the number of parity check digits required for a linear code that corrects a single subblock containing errors which are in the form of 2-repeated bursts of length b or less. An illustration of such kind of codes has been provided. Further, the codes that correct m-repeated bursts of length b or less have also been studied.

Error-correcting codes over an alphabet of four elements

2000 IEEE International Symposium on Information Theory (Cat. No.00CH37060), 2000

The problem of finding the values of A_q(n, d), the maximum size of a code of length n and minimum distance d over an alphabet of q elements, is considered. Upper and lower bounds on A_4(n, d) are presented and some values of this function are settled. A table of best known bounds on A_4(n, d) is given for n ≤ 12. When q ≤ M < 2q, all parameters for which A_q(n, d) = M are determined.
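Classical bounds bracket A_q(n, d) from both sides and are easy to compute; the sketch below gives the Singleton and sphere-packing upper bounds and a Gilbert-style lower bound (these are textbook bounds, not the sharper bounds of this paper).

```python
from math import comb

def singleton_upper(q, n, d):
    # Singleton bound: A_q(n, d) <= q^(n - d + 1).
    return q ** (n - d + 1)

def hamming_upper(q, n, d):
    # Sphere-packing bound with packing radius t = floor((d - 1) / 2):
    # A_q(n, d) <= q^n / V_q(n, t).
    t = (d - 1) // 2
    vol = sum(comb(n, i) * (q - 1) ** i for i in range(t + 1))
    return q ** n // vol

def gilbert_lower(q, n, d):
    # Gilbert bound: A_q(n, d) >= q^n / V_q(n, d - 1), rounded up.
    vol = sum(comb(n, i) * (q - 1) ** i for i in range(d))
    return -(-(q ** n) // vol)
```

Sanity checks: the Singleton bound gives A_4(n, n) ≤ 4, the sphere-packing bound is met with equality by the perfect binary [7,4] Hamming code, and the Gilbert bound is exact for d = 1.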

Double error Correcting Codes with Improved Code Rates

Journal of Electrical Engineering

In [1] a new family of error-detection codes called Weighted Sum Codes was proposed. In [2] it was noted that these codes are equivalent to lengthened Reed-Solomon codes, and to shortened versions of lengthened Reed-Solomon codes, respectively, constructed over GF(2^(h/2)). It was also shown that these codes can be used to correct one error in each codeword over GF(2^(h/2)). In [3] a class of modified Generalized Weighted Sum Codes for single-error and, conditionally, double-error correction was presented. In this paper we present a new family of double-error-correcting codes with code distance d_m = 5. The weight spectrum of the [59,49,5] code constructed over GF(8), an example of the new codes, was obtained by computer using its dual [4]. The code rate of the new codes is higher than that of ordinary Reed-Solomon codes constructed over the same finite field.
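Computing a weight spectrum via the dual relies on the MacWilliams identity: a high-rate code has a small dual whose codewords can be enumerated directly, and the code's own distribution is then recovered with Krawtchouk polynomials. The sketch below demonstrates this for the binary [7,4] Hamming code and its [7,3] simplex dual (a small stand-in; the paper's [59,49,5] code over GF(8) would use q = 8).

```python
from itertools import product
from math import comb

def weight_distribution(gen, n):
    """Weight distribution of the binary code spanned by the rows of gen."""
    k = len(gen)
    dist = [0] * (n + 1)
    for m in product([0, 1], repeat=k):
        c = [sum(m[i] * gen[i][j] for i in range(k)) % 2 for j in range(n)]
        dist[sum(c)] += 1
    return dist

def macwilliams(dual_dist, n, q=2):
    """Recover a code's weight distribution from its dual's distribution
    via the MacWilliams identity, A_j = (1/|C*|) sum_i A'_i K_j(i),
    with Krawtchouk polynomials K_j(i)."""
    dual_size = sum(dual_dist)
    def kraw(j, i):
        return sum((-1) ** s * (q - 1) ** (j - s)
                   * comb(i, s) * comb(n - i, j - s) for s in range(j + 1))
    return [sum(dual_dist[i] * kraw(j, i) for i in range(n + 1)) // dual_size
            for j in range(n + 1)]

# Dual of the [7,4] Hamming code: the [7,3] simplex code.
SIMPLEX_GEN = [[1, 0, 1, 0, 1, 0, 1],
               [0, 1, 1, 0, 0, 1, 1],
               [0, 0, 0, 1, 1, 1, 1]]
```

Enumerating the 8 simplex codewords (all nonzero ones have weight 4) and transforming yields the familiar Hamming distribution 1, 7, 7, 1 at weights 0, 3, 4, 7.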

Hamming Codes: Error Reducing Techniques

International Journal for Research in Applied Science and Engineering Technology (IJRASET), 2021

Hamming codes are the first nontrivial family of error-correcting codes: they can correct one error in a block of binary symbols. In this paper we extend the notion of error correction to error reduction and present several decoding methods with the goal of improving the error-reducing capabilities of Hamming codes. First, the error-reducing properties of Hamming codes with standard decoding are demonstrated and explored. We show a lower bound on the average number of errors present in a decoded message when two errors are introduced by the channel, for general Hamming codes. Other decoding algorithms are investigated experimentally, and it is found that they improve the error-reduction capabilities of Hamming codes beyond the aforementioned lower bound of standard decoding.
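The error-reduction behavior under standard decoding is easy to observe on the [7,4] Hamming code (a small illustration, not the paper's experiments): one channel error is always corrected, while two channel errors cause the decoder to flip a third bit, leaving exactly three errors on average.

```python
import itertools

# Parity-check matrix of the [7,4] Hamming code; column j is the binary
# representation of j + 1, so the syndrome value names the error position.
H = [[(j + 1) >> i & 1 for j in range(7)] for i in range(3)]

def hamming_decode(r):
    """Standard decoding: compute the syndrome and flip the indicated bit."""
    syn = sum((sum(H[i][j] * r[j] for j in range(7)) % 2) << i
              for i in range(3))
    out = list(r)
    if syn:
        out[syn - 1] ^= 1
    return out

def avg_errors_after_decoding(codeword, num_errors):
    """Average residual Hamming errors after standard decoding, taken
    over all patterns of num_errors channel errors (error *reduction*,
    not correction, once num_errors exceeds 1)."""
    total, count = 0, 0
    for pos in itertools.combinations(range(7), num_errors):
        r = list(codeword)
        for p in pos:
            r[p] ^= 1
        total += sum(a != b for a, b in zip(hamming_decode(r), codeword))
        count += 1
    return total / count
```

For the all-zero codeword, averaging over all single-error patterns gives 0 residual errors, while averaging over all double-error patterns gives exactly 3, since the syndrome of two errors always points at a third position.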