New vector quantization-based techniques for reducing the effect of channel noise in image transmission

An Index Assignment Algorithm for Improving the Transmission of Vector-Quantized Images over a Rayleigh Fading Channel

Anais do XVIII Simpósio Brasileiro de Telecomunicações

A serious problem related to the transmission of vector-quantized images through noisy channels is that whenever errors occur, the nearest neighbor rule is broken. As a consequence, very annoying blocking effects may appear in the reconstructed images. In the present work, a simple and fast method for organizing the vector quantization (VQ) codebooks is presented. The key idea behind the proposed method is to ensure that similar (dissimilar) binary representations of the indexes of the codevectors correspond to similar (dissimilar) codevectors themselves. It is shown that the organized codebooks improve the performance of the transmission system in the sense that they lead to reconstructed images with better quality when compared to the ones obtained by using non-organized codebooks.
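The general flavour of such an index assignment can be illustrated with a minimal sketch (not the authors' specific algorithm): codevectors are chained greedily by nearest neighbour and the chain positions are mapped through a Gray code, so indices that differ in one bit tend to hold similar codevectors. The codebook size is assumed to be a power of two.

```python
import numpy as np

def gray_code(n: int) -> int:
    """Binary-reflected Gray code of n."""
    return n ^ (n >> 1)

def organize_codebook(codebook: np.ndarray) -> np.ndarray:
    """Reorder a VQ codebook so that indices whose binary representations are
    close in Hamming distance tend to point at similar codevectors.
    Sketch only: greedy nearest-neighbour chaining followed by a Gray-code
    index mapping; the paper's own organisation procedure may differ."""
    n = len(codebook)                      # assumed to be a power of two
    remaining = list(range(n))
    chain = [remaining.pop(0)]
    while remaining:
        last = codebook[chain[-1]]
        nxt = min(remaining, key=lambda i: np.sum((codebook[i] - last) ** 2))
        remaining.remove(nxt)
        chain.append(nxt)
    organized = np.empty_like(codebook)
    for pos, original_idx in enumerate(chain):
        # consecutive chain positions (similar codevectors) get indices
        # that differ in exactly one bit
        organized[gray_code(pos)] = codebook[original_idx]
    return organized
```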

Image Vector Quantization codec indexes filtering

Serbian Journal of Electrical Engineering, 2012

Vector Quantisation (VQ) is an efficient coding algorithm that has been widely used in the field of video and image coding, due to its fast decoding. However, the indexes of VQ are sometimes lost because of signal interference during transmission. In this paper, we propose an efficient estimation method to conceal and recover the lost indexes on the decoder side, to avoid re-transmitting the whole image. If the image or video is only valid for a limited period, re-transmitting the data wastes time and network bandwidth. Therefore, using the originally received correct data to estimate and recover the lost data is efficient in time-constrained situations, such as network conferencing or mobile transmissions. In natural images, pixels are correlated with their neighbours; since VQ partitions the image into sub-blocks and quantises them to the transmitted indexes, the correlation between adjacent indexes is also very strong. The proposed method has two parts: pre-processing and an estimation process. In pre-processing, we modify the order of codevectors in the VQ codebook to increase the correlation among the neighbouring vectors. We then use a special filtering method in the estimation process. Using conventional VQ to compress the Lena image and transmit it without any loss of index can achieve a PSNR of 30.429 dB at the decoder. The simulation results demonstrate that our method can estimate the indexes to achieve PSNR values of 29.084 and 28.327 dB when the loss rate is 0.5% and 1%, respectively.
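A minimal sketch of this two-part idea, under the assumption that the pre-processing simply sorts codevectors by mean intensity and the estimation step is a median filter over the neighbouring indices (the paper's actual filter is more elaborate):

```python
import numpy as np

def sort_codebook_by_mean(codebook: np.ndarray) -> np.ndarray:
    """Pre-processing: reorder codevectors by their mean intensity so that
    numerically close indices correspond to visually similar blocks."""
    order = np.argsort(codebook.mean(axis=1))
    return codebook[order]

def conceal_lost_indices(index_map: np.ndarray, lost_mask: np.ndarray) -> np.ndarray:
    """Estimate each lost index from its correctly received 4-neighbours
    (median over indices); index_map holds one index per image block."""
    est = index_map.copy()
    h, w = index_map.shape
    for r in range(h):
        for c in range(w):
            if lost_mask[r, c]:
                neigh = [index_map[rr, cc]
                         for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                         if 0 <= rr < h and 0 <= cc < w and not lost_mask[rr, cc]]
                if neigh:
                    est[r, c] = int(np.median(neigh))
    return est
```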

Codebook organization to enhance maximum a posteriori detection of progressive transmission of vector quantized images over noisy channels

IEEE Transactions on Image Processing, 1996

We describe a new way to organize a full-search vector quantization codebook so that images encoded with it can be sent progressively and have resilience to channel noise. The codebook organization guarantees that the most significant bits (MSB's) of the codeword index are most important to the overall image quality and are highly correlated. Simulations show that the effective channel error rates of the MSB's can be substantially lowered by implementing a maximum a posteriori (MAP) detector similar to one suggested by Phamdo and Farvardin. The performance of the scheme is close to that of pseudo-gray coding at lower bit error rates and outperforms it at higher error rates. No extra bits are used for channel error correction.
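The MAP detection step can be illustrated with a per-bit sketch, assuming a binary symmetric channel with crossover probability epsilon and a prior on the MSB derived from neighbouring, already-decoded indices (the paper's detector, following Phamdo and Farvardin, operates on the index sequence rather than bit by bit):

```python
def map_detect_bit(received_bit: int, prior_one: float, epsilon: float) -> int:
    """Per-bit MAP decision over a binary symmetric channel.
    prior_one is P(b = 1) estimated from the strongly correlated MSBs of
    neighbouring indices; epsilon is the channel crossover probability."""
    like_one = (1 - epsilon) if received_bit == 1 else epsilon   # P(y | b=1)
    like_zero = epsilon if received_bit == 1 else (1 - epsilon)  # P(y | b=0)
    post_one = like_one * prior_one
    post_zero = like_zero * (1 - prior_one)
    return 1 if post_one >= post_zero else 0
```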

An error resilient coding scheme for JPEG image transmission based on data embedding and side-match vector quantization

Journal of Visual Communication and Image Representation, 2006

For an entropy-coded Joint Photographic Experts Group (JPEG) image, a transmission error in a codeword will not only affect the underlying codeword but also may affect subsequent codewords, resulting in a great degradation of the received image. In this study, an error resilient coding scheme for JPEG image transmission based on data embedding and side-match vector quantization (VQ) is proposed. To cope with the synchronization problem, the restart capability of JPEG images is enabled. The objective of the proposed scheme is to recover high-quality JPEG images from the corresponding corrupted images. At the encoder, the important data (the codebook index) for each Y (U or V) block in a JPEG image are extracted and embedded into another "masking" Y (U or V) block in the image by the odd-even data embedding scheme. At the decoder, after all the corrupted blocks within a JPEG image are detected and located, if the codebook index for a corrupted block can be correctly extracted from the corresponding "masking" block, the extracted codebook index will be used to conceal the corrupted block; otherwise, the side-match VQ technique is employed to conceal the corrupted block. Based on the simulation results obtained in this study, the performance of the proposed scheme is better than those of the five existing approaches for comparison. The proposed scheme can recover high-quality JPEG images from the corresponding corrupted images up to a block loss rate (BLR) of 30%.
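The odd-even embedding principle can be sketched as follows, assuming the protected bits (the codebook index of another block) are carried in the parities of integer host values such as quantised coefficients of the "masking" block; the choice of host coefficients here is an assumption, not the paper's exact rule:

```python
def embed_bits_odd_even(values, bits):
    """Force the parity of each host value to carry one embedded bit."""
    out = list(values)
    for i, bit in enumerate(bits):
        v = out[i]
        if (abs(v) % 2) != bit:              # parity mismatch: nudge by one
            out[i] = v + 1 if v >= 0 else v - 1
    return out

def extract_bits_odd_even(values, n_bits):
    """Recover the embedded bits from the parities of the host values."""
    return [abs(v) % 2 for v in values[:n_bits]]
```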

The Noise Reduction over Wireless Channel Using Vector Quantization Compression and Filtering

International Journal of Electrical and Computer Engineering (IJECE), 2016

The transmission of compressed data over wireless channel conditions represents a big challenge, and providing robust transmission has received considerable research attention. In this paper we study the effect of noise over a wireless channel, which we represent with the Gilbert-Elliott model; the model parameters are selected to represent three channel cases. The transmitted data are 512x512 grey-level images. To minimise bandwidth usage, the images are compressed with vector quantization, and within this compression technique we also study the effect of the codebook on the robustness of the transmission by using different algorithms to generate the codebook. Finally, we study the restoration efficiency for the received image using filtering and an index-recovery technique.
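The Gilbert-Elliott channel is a standard two-state burst-error model; a minimal simulation sketch is shown below (the parameter values for the paper's three channel cases are not reproduced here):

```python
import random

def gilbert_elliott_errors(n_bits, p_gb, p_bg, err_good, err_bad, seed=None):
    """Generate a bit-error pattern from a Gilbert-Elliott channel.
    p_gb / p_bg are the Good->Bad and Bad->Good transition probabilities,
    err_good / err_bad the bit-error probabilities within each state."""
    rng = random.Random(seed)
    state_bad = False
    errors = []
    for _ in range(n_bits):
        # state transition at each bit
        if state_bad and rng.random() < p_bg:
            state_bad = False
        elif not state_bad and rng.random() < p_gb:
            state_bad = True
        # error event in the current state
        p_err = err_bad if state_bad else err_good
        errors.append(1 if rng.random() < p_err else 0)
    return errors
```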

Interleaved reception method for restored vector quantization image

TELKOMNIKA (Telecommunication Computing Electronics and Control), 2016

The transmission of images compressed by vector quantization produces wrong blocks in the received image that are completely different from the original ones, which makes the restoration process very difficult because no information about the original block is available. As a solution, we propose a transmission technique that preserves the majority of the pixels of each original block by building new blocks that do not contain neighbouring pixels from the same original block, which increases the probability of successful restoration. Our proposal is based on decomposition and interleaving. For the simulation we use a binary symmetric channel with different BERs, and in the restoration process we use a simple median filter just to check the efficiency of the proposed approach.
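One simple way to build such blocks is polyphase decomposition, sketched below as an illustration of the interleaving idea (the paper's exact block construction may differ): pixels that are neighbours in the original image never end up in the same sub-image, so a lost block leaves its neighbours available for median-filter restoration.

```python
import numpy as np

def polyphase_interleave(image: np.ndarray):
    """Split an image into four sub-images taken from even/odd rows and
    columns; adjacent pixels always land in different sub-images."""
    return [image[r::2, c::2] for r in (0, 1) for c in (0, 1)]

def polyphase_deinterleave(subs, shape):
    """Reassemble the original image from the four sub-images."""
    out = np.zeros(shape, dtype=subs[0].dtype)
    k = 0
    for r in (0, 1):
        for c in (0, 1):
            out[r::2, c::2] = subs[k]
            k += 1
    return out
```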

Modified Vector Quantization Method for Image Compression

Transactions On Engineering, Computing …, 2006

A low bit rate still image compression scheme is proposed that compresses the indices of Vector Quantization (VQ) and generates a residual codebook. The VQ indices are compressed by exploiting the correlation among image blocks, which reduces the bits per index. A residual codebook, similar to the VQ codebook, is generated that represents the distortion produced by VQ. Using this residual codebook, the distortion in the reconstructed image is removed, thereby increasing image quality. Our scheme combines these two methods. Experimental results on the standard image Lena show that our scheme can give a reconstructed image with a PSNR value of 31.6 dB at 0.396 bits per pixel. Our scheme is also faster than the existing VQ variants.
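The two ingredients can be sketched in simplified form; the predictive index flag and the residual refinement shown here are illustrative assumptions, not the paper's exact coding scheme:

```python
import numpy as np

def encode_indices(index_map: np.ndarray):
    """Exploit inter-block correlation: if a block's index equals its left
    neighbour's, emit a 1-bit 'same' flag instead of the full index."""
    h, w = index_map.shape
    bitstream = []
    for r in range(h):
        for c in range(w):
            idx = int(index_map[r, c])
            if c > 0 and idx == int(index_map[r, c - 1]):
                bitstream.append(("same", None))      # 1 bit
            else:
                bitstream.append(("new", idx))        # 1 bit + log2(N) bits
    return bitstream

def refine_with_residual(block, codevector, residual_codebook):
    """Residual-codebook stage: quantise (block - codevector) with a second
    codebook and add the chosen residual back to reduce VQ distortion."""
    residual = block - codevector
    dists = np.sum((residual_codebook - residual) ** 2, axis=1)
    return codevector + residual_codebook[int(np.argmin(dists))]
```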

An improved vector quantization method for image compression

Vector quantization (VQ) is a commonly used image compression method. Due to its uncomplicated decompression process and high compression ratio, it is widely used for network transmission and medical image storage. However, the reconstructed image after decompression suffers from high distortion. To improve the quality of the reconstructed image without increasing computational complexity, a novel VQ method combined with Block Truncation Coding (BTC) is proposed to reduce the distortion of the decompressed image. The compressed code of the proposed method for an image block contains the block mean, the index of the closest residual vector in the codebook, and a bit-plane that records the relationship between each pixel and the mean value. With the bit-plane, the residual vector can consist of positive values only. Because the codebook training method is a clustering approach, smaller variation within the residual vectors makes training more accurate. The proposed method is tested on public images. The experimental results show that the proposed method achieves a better Peak Signal-to-Noise Ratio (PSNR) without increasing the codebook size or the compression complexity.
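A minimal sketch of the described VQ + BTC hybrid, assuming the block is flattened to a vector and the residual codebook is already trained:

```python
import numpy as np

def encode_block_btc_vq(block: np.ndarray, residual_codebook: np.ndarray):
    """Transmit the block mean, a bit-plane marking pixels above the mean,
    and the index of the closest non-negative residual vector."""
    mean = block.mean()
    bitplane = (block >= mean).astype(np.uint8)
    residual = np.abs(block - mean)          # non-negative by construction
    dists = np.sum((residual_codebook - residual) ** 2, axis=1)
    return mean, bitplane, int(np.argmin(dists))

def decode_block_btc_vq(mean, bitplane, index, residual_codebook):
    """Reconstruct: add the residual above the mean, subtract it below."""
    residual = residual_codebook[index]
    return np.where(bitplane == 1, mean + residual, mean - residual)
```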

A new efficient image compression technique with index-matching vector quantization

IEEE Transactions on Consumer Electronics, 1997

A new efficient image compression technique is presented for low-cost applications such as multimedia and videoconferencing. Address vector quantization (A-VQ), proposed by Nasrabadi and Feng for image coding, has the main disadvantage of the high computational complexity of reordering the address codebook at the transmitter and the receiver during the encoding of each block, and we propose a new, efficient approach to overcome this disadvantage. The proposed algorithm is based on tree-search vector quantization via multi-path search and index matching in an index codebook, and achieves better performance with low computational complexity. We theoretically prove that the proposed algorithm is superior to the A-VQ algorithm and experimentally show that a lower bit rate than that of A-VQ is obtained. I. INTRODUCTION. Recently, the topic of data compression (or source coding), especially for image and video, has become attractive due to the demands of applications such as videoconferencing and multimedia. Vector quantization (VQ) has been found to be an efficient coding technique due to its inherent ability to exploit the high correlation between neighbouring pixels; some excellent survey articles and books are given in [1][2]. Essentially, the VQ coding technique can be viewed as a pattern-matching method: VQ is a block coding procedure by which blocks of k samples from a given data source are approximated by vector patterns or templates from a set of code vectors, commonly called a codebook. VQ is widely used in image/video and speech compression applications because simple table look-up encoding and decoding procedures may be used. In this paper a 2-dimensional (2-D) information source is considered, since a 2-D raster scan of the image is adopted. Let A denote the nonempty finite discrete source alphabet, Z = {1, 2, ..., N} the finite index alphabet, and Â the finite reproduction alphabet, with Â ⊂ A. Let x ∈ A^k be an input vector, and let C = {C_1, C_2, ..., C_N} be a finite codebook containing N codevectors, where N = 2^(kR), R > 0, and for each 1 ≤ i ≤ N, C_i is called a codeword or template. A k-dimensional vector quantizer Q with rate R is a mapping Q : A^k → C composed of an encoder E and a decoder D, where R_i is the i-th Voronoi cell with centroid C_i, i.e., the set of input vectors closer to C_i than to any other codevector; clearly ∪_i R_i = A^k and R_i ∩ R_j = ∅ if i ≠ j. Here k = 4 × 4 = 16, since 4 × 4 block coding is assumed. During the encoding of a digital image, the best possible match (minimum distortion, e.g., minimum Euclidean distance) is found to represent the input vector. The codeword index i, i ∈ Z = {1, 2, ..., N}, is then transmitted to the receiver, where it is decoded by a simple table look-up process. The codebook is the key part of the vector quantizer, and there are several different approaches to codebook design; a popular and well-known design procedure, proposed by Linde, Buzo, and Gray (LBG) [3], is a generalized (or vector) version of the Lloyd clustering algorithm for a scalar quantizer. In the standard memoryless VQ technique, the pixel (intrablock) correlation is exploited but the interblock correlation is totally ignored.
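The baseline full-search VQ mapping defined above can be written as a short sketch; the tree-search and index-matching stages that the paper builds on top of it are not shown here.

```python
import numpy as np

def vq_encode(vectors: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Full-search VQ encoder: map each k-dimensional input vector to the
    index of the minimum-Euclidean-distance codevector, i.e. the Voronoi
    cell R_i it falls in."""
    d2 = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return np.argmin(d2, axis=1)

def vq_decode(indices: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """The decoder is a simple table look-up."""
    return codebook[indices]
```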