Image compression (Computer Science) Research Papers
Lossy image compression has been gaining importance in recent years due to the enormous increase in the volume of image data employed for Internet and other applications. In lossy compression, it is essential to ensure that the compression process does not adversely affect the quality of the image. The performance of a lossy compression algorithm is evaluated on two conflicting parameters, namely compression ratio and image quality, the latter usually measured by PSNR. In this paper, a new lossy compression method, denoted the PE-VQ method, is proposed which employs prediction error and vector quantization (VQ) concepts. An optimum codebook is generated using a combination of two algorithms, namely artificial bee colony and genetic algorithms. The performance of the proposed PE-VQ method is evaluated in terms of compression ratio (CR) and PSNR values using three different types of databases, namely CLEF med 2009, Corel 1k and standard images (Lena, Barbara, etc.). Experiments are conducted for different codebook sizes and for different CR values. The results show that for a given CR, the proposed PE-VQ technique yields higher PSNR values compared to existing algorithms. It is also shown that higher PSNR values can be obtained by applying VQ to prediction errors rather than to the original image pixels.
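The abstract does not spell out the predictor or the codebook search, so the following is only a minimal sketch of the PE-VQ idea under simple assumptions: a left-neighbor predictor produces the error image, and blocks of errors are mapped to their nearest codeword in a codebook that is assumed to be given (in the paper it would be trained with the ABC/GA combination). All function names are illustrative.

```python
import numpy as np

def prediction_errors(img):
    """Left-neighbor predictor: e[i, j] = x[i, j] - x[i, j-1]; first column kept as-is."""
    err = img.astype(np.int16).copy()
    err[:, 1:] -= img[:, :-1].astype(np.int16)
    return err

def vq_encode(blocks, codebook):
    """Map each flattened block to the index of the nearest codeword (Euclidean distance).

    blocks: array of shape (N, k); codebook: array of shape (M, k)."""
    d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def vq_decode(indices, codebook):
    """Reconstruct blocks by looking up the chosen codewords."""
    return codebook[indices]
```

Quantizing the low-entropy prediction errors instead of the raw pixels is what lets the same codebook size reach a higher PSNR, which is the abstract's central claim.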
The paper attempts a comparison between JPEG and JPEG2000 (wavelet-based) image compression based on the output obtained for different images. The PSNR values are also compared across the different images. The paper further reviews the advances made in this area since the introduction of JPEG2000, so as to bring out further research prospects in the field of image compression.
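PSNR is the quality metric used throughout these comparisons; for an 8-bit image it follows directly from the mean squared error. A minimal sketch:

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """PSNR in dB between two images of equal shape; higher means less distortion."""
    mse = np.mean((original.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")   # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```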
Fingerprint analysis plays a crucial role in legal matters such as crime investigation and security systems. Due to the large number and size of fingerprint images, data compression has to be applied to reduce the storage and communication bandwidth requirements of these images. For this purpose, many types of wavelets have been used for fingerprint image compression. In this paper we use Coiflet-type wavelets, and our aim is to determine the most appropriate Coiflet-type wavelet for better compression of a digitized fingerprint image. To achieve this goal, Retained Energy (RE) and Number of Zeros (NZ) in percentage are determined for different Coiflet-type wavelets at different threshold values at a fixed decomposition level of 3, using both wavelet and wavelet packet transforms. An 8-bit grayscale left-thumb digitized fingerprint image of size 480×400 is used as the test image.
This paper presents a new hybrid scheme for image data compression using quadtree decomposition and parametric line fitting. In the first phase of encoding, the input image is partitioned into quadrants using quadtree decomposition. To prevent very small quadrants, a constraint on the minimal block size is imposed during quadtree decomposition. Homogeneous quadrants are separated from non-homogeneous quadrants. In the second phase of encoding, the non-homogeneous quadrants are scanned row-wise, and the luminance variation of each scanned row is fitted with a parametric line at a specified level of tolerance. The output data is entropy coded. Experimental results show that the proposed scheme performs better than well-known lossless image compression techniques for several types of synthetic images.
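The paper does not define its homogeneity test, so the sketch below assumes a simple one (intensity range within a tolerance) together with the minimum-block-size constraint mentioned in the abstract; the tolerance and block-size values are illustrative.

```python
import numpy as np

def quadtree(img, x, y, size, min_size=4, tol=10.0, leaves=None):
    """Recursively split a square region until it is homogeneous (intensity range <= tol)
    or the minimum block size is reached; returns (x, y, size, mean, homogeneous) leaves."""
    if leaves is None:
        leaves = []
    block = img[y:y + size, x:x + size]
    homogeneous = (int(block.max()) - int(block.min())) <= tol
    if homogeneous or size <= min_size:
        leaves.append((x, y, size, float(block.mean()), homogeneous))
    else:
        half = size // 2
        for dx, dy in ((0, 0), (half, 0), (0, half), (half, half)):
            quadtree(img, x + dx, y + dy, half, min_size, tol, leaves)
    return leaves
```

Non-homogeneous leaves would then be passed to the row-wise parametric line fitting stage described above.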
Imaging plays a critical role in health-care systems, but it unfortunately consumes large amounts of data, which affects both storage and communication. Medical image compression reduces the amount of image data while keeping the decompressed image identical to the original; such schemes are characterized by low compression ratios, since preserving all the information is prioritized. This paper is concerned with improving the performance of fixed-prediction-based medical image compression by incorporating a hierarchical even/odd scheme along with mixed prediction of quadrants. The test results indicate the superiority of the proposed system compared to traditional fixed coding, while preserving the image identically.
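The hierarchical even/odd and quadrant-mixing details are not given in the abstract, so as a stand-in the sketch below uses the classic MED fixed predictor (the predictor used by JPEG-LS) to show what fixed prediction followed by residual coding looks like; it is not the paper's exact scheme, and border samples are simply treated as zero.

```python
import numpy as np

def med_prediction_errors(img):
    """Median Edge Detector (MED) fixed predictor; returns the residual image to be entropy coded."""
    x = img.astype(np.int32)
    pred = np.zeros_like(x)
    h, w = x.shape
    for i in range(h):
        for j in range(w):
            a = x[i, j - 1] if j > 0 else 0                     # left neighbor
            b = x[i - 1, j] if i > 0 else 0                     # neighbor above
            c = x[i - 1, j - 1] if i > 0 and j > 0 else 0       # upper-left neighbor
            if c >= max(a, b):
                pred[i, j] = min(a, b)
            elif c <= min(a, b):
                pred[i, j] = max(a, b)
            else:
                pred[i, j] = a + b - c
    return x - pred
```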
In this paper we implement contour extraction and compression for digital medical images (X-ray and CT scan) using the most significant bit (MSB), maximum gray level (MGL), discrete cosine transform (DCT), and discrete wavelet transform (DWT). The transforms are combined with different contour-extraction methods, namely Sobel, Canny and SSPCE (single-step parallel contour extraction). To remove noise from the medical image, a pre-processing stage (median filtering and linear contrast-stretch enhancement) is performed. The extracted contour is compressed using the well-known Ramer method. Experimental results and analysis show that the proposed algorithm is trustworthy in establishing ownership. The signal-to-noise ratio (SNR), mean square error (MSE) and compression ratio (CR) values obtained with the MSB, MGL, DCT and DWT methods are compared. Experimental results show that the contours of the original medical image can be extracted easily with few contour points, with compression exceeding 87% in some cases. The simplicity of the method, with an accepted level of reconstruction quality, is the main advantage of the proposed algorithm. The results indicate that this method improves the contrast of medical images and can help with better diagnosis after contour extraction. The proposed method is well suited to real-time applications.
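The contour compression step names the Ramer method, i.e. Ramer-Douglas-Peucker polyline simplification; a minimal recursive sketch follows, where the tolerance epsilon (in pixels) is an assumed parameter.

```python
import numpy as np

def ramer(points, epsilon):
    """Ramer-Douglas-Peucker: drop contour points whose perpendicular distance to the
    start-end chord is below epsilon, keeping only the points that shape the contour."""
    points = np.asarray(points, dtype=float)
    if len(points) < 3:
        return points
    start, end = points[0], points[-1]
    dx, dy = end - start
    norm = np.hypot(dx, dy) or 1.0
    # perpendicular distance of every point to the chord from start to end
    d = np.abs(dx * (points[:, 1] - start[1]) - dy * (points[:, 0] - start[0])) / norm
    idx = int(d.argmax())
    if d[idx] > epsilon:
        left = ramer(points[:idx + 1], epsilon)
        right = ramer(points[idx:], epsilon)
        return np.vstack([left[:-1], right])
    return np.vstack([start, end])
```

Raising epsilon keeps fewer contour points and raises the compression ratio, at the cost of a coarser reconstructed contour.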
The main objective is to design a VLSI architecture to realize a wireless endoscopy system. Such a system is used in the medical field to record images of the digestive system. It transfers the image data over an RF link, avoiding the pain and irritation to the digestive tract that the cables of conventional endoscopes can cause. The proposed system consists of an RF transceiver and a CMOS color image sensor. Images are captured by the color image sensor and sent wirelessly by the RF transceiver. The CMOS color image sensor is interfaced with an FPGA. Real-time images captured by the sensor are compressed and then transmitted wirelessly. Image compression is used because it is the best way to save power in transmission and reception and to decrease the required communication bandwidth. Thus, after the image is captured by the CMOS color image sensor, it is compressed using the JPEG standard. After high-quality lossy compression, the image is transmitted by the wireless RF transceiver, which is chosen so that it operates at low power.
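The abstract only states that JPEG is used; to make the core of that pipeline concrete, here is a minimal 8×8 block DCT with a uniform quantization step standing in for the JPEG quantization tables. The step value and function names are assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    """2-D type-II DCT of an 8x8 block."""
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def idct2(block):
    """Inverse 2-D DCT."""
    return idct(idct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def compress_block(block, step=16):
    """Level-shift, transform and quantize an 8x8 block (uniform step instead of a JPEG table)."""
    return np.round(dct2(block.astype(float) - 128.0) / step).astype(np.int16)

def decompress_block(q, step=16):
    """Dequantize, inverse-transform and undo the level shift."""
    return np.clip(idct2(q * float(step)) + 128.0, 0, 255).astype(np.uint8)
```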
This paper presents a new compression technique and an image watermarking algorithm based on the Contourlet Transform (CT). For image compression, an energy-based quantization is used; scalar quantization is explored for image watermarking. A double filter bank structure is used in the CT: the Laplacian Pyramid (LP) captures the point discontinuities, followed by a Directional Filter Bank (DFB) that links them. The coefficients of the down-sampled low-pass version of the LP-decomposed image are re-ordered in a pre-determined manner, and a prediction algorithm is used to reduce the entropy (bits/pixel). In addition, the CT coefficients are quantized based on the energy in each particular band. The superiority of the proposed algorithm over JPEG is observed in terms of reduced blocking artifacts, and the results are also compared with the wavelet transform (WT); the superiority of CT over WT is observed when the image contains more contours. The watermark image is embedded in the low-pass image of the contourlet decomposition and can be extracted with minimal error. In terms of PSNR, the visual quality of the watermarked image is exceptional. The proposed algorithm is robust to many image attacks and suitable for copyright protection applications.
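The exact band-energy rule is not given in the abstract; the sketch below assumes the common choice of a finer quantization step for higher-energy subbands, with the scaling law and base step as illustrative parameters rather than the paper's values.

```python
import numpy as np

def energy_based_quantize(bands, base_step=8.0):
    """Quantize each subband with a step inversely related to its energy:
    high-energy bands get finer steps, low-energy bands coarser ones (assumed rule)."""
    energies = np.array([np.mean(b.astype(float) ** 2) + 1e-9 for b in bands])
    ref = energies.max()
    steps = base_step * np.sqrt(ref / energies)   # smaller step for higher-energy bands
    quantized = [np.round(b / s).astype(np.int32) for b, s in zip(bands, steps)]
    return quantized, steps
```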
This paper presents a new method for efficient lossless document image compression, based on the Inverse Difference Pyramid (IDP) decomposition. The method is aimed at processing graphic and text grayscale and color images. The IDP decomposition is described briefly, and the method's ability to process different kinds of images efficiently is demonstrated. The paper includes results for the compression of graphics and text, compared with those obtained with JPEG2000 and other widely used lossless compression methods.
Operating on the whole image directly is inefficient in some applications such as image recognition and image compression. Therefore, before recognition or compression, many image segmentation algorithms have been suggested to segment the image. Image segmentation partitions or clusters an image into many regions depending on image features. A hybrid image compression algorithm is suggested in this paper which segments the image into background and foreground parts and then compresses the foreground image using the DCT technique; the foreground image is given more importance than the background region. A color-based segmentation method is used to segment the image. The observed parameters are compression ratio (CR), mean square error (MSE), and peak signal-to-noise ratio (PSNR). The goal is to maximize the CR while preserving the image's information. The foreground image retains good quality in the proposed hybrid method.
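The specific color-based segmentation is not described in the abstract; as a rough stand-in, the sketch below clusters pixel colors into two groups with k-means and treats the cluster farther from the overall mean color as foreground. The foreground rule and all names are assumptions for illustration only.

```python
import numpy as np
from sklearn.cluster import KMeans

def split_foreground(img_rgb):
    """Rough color-based split into two regions via 2-cluster k-means on pixel colors;
    returns a boolean foreground mask for the cluster farther from the mean image color."""
    h, w, _ = img_rgb.shape
    pixels = img_rgb.reshape(-1, 3).astype(float)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pixels)
    mean_color = pixels.mean(axis=0)
    centers = np.array([pixels[labels == k].mean(axis=0) for k in (0, 1)])
    fg_label = int(np.argmax(np.linalg.norm(centers - mean_color, axis=1)))
    return labels.reshape(h, w) == fg_label
```

The foreground region would then be block-DCT coded at higher quality than the background, as described above.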
One field where computer-related image processing technology shows great promise for the future is bionic implants, such as cochlear implants and retinal implants. Retinal implants are being developed around the world in the hope of restoring useful vision for patients suffering from diseases such as Age-related Macular Degeneration (AMD) and Retinitis Pigmentosa (RP). In these diseases the photoreceptor cells slowly degenerate, leading to blindness; however, many of the inner retinal neurons that transmit signals from the photoreceptors to the brain are preserved to a large extent for a prolonged period of time. A retinal prosthesis aims to provide partial vision by electrically activating the remaining cells of the retina. The epiretinal prosthesis system is composed of two units, an extraocular unit and an intraocular implant, connected by a telemetric inductive link. The extraocular unit consists of a CCD camera, an image processor, an encoder, and a transmitter built on the eyeglasses. The high-resolution image from the CCD camera is reduced by the image processor to a lower resolution matching the electrode array and is then encoded into a bit stream; each electrode in the implant corresponds to one pixel in the image. The bit stream is modulated onto a 22 MHz carrier and transmitted wirelessly to the implant inside the eye. This paper mainly discusses two image processing approaches that reduce the size of the image without loss of object-detection rate relative to the original image: the first covers related image processing algorithms, including image resizing, color erasing, edge enhancement and edge detection; the second generates a saliency map for the image.
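To make the resizing and edge-enhancement step concrete, here is a rough sketch that emphasizes edges and then reduces a grayscale frame to an assumed 32×32 electrode grid; the grid size, edge weight and function names are illustrative and not taken from the paper.

```python
import numpy as np
from scipy.ndimage import sobel, zoom

def to_electrode_array(frame_gray, grid=(32, 32)):
    """Reduce a grayscale camera frame to the electrode-array resolution, with edge
    emphasis so object boundaries survive the heavy downsampling (illustrative only)."""
    frame = frame_gray.astype(float)
    edges = np.hypot(sobel(frame, axis=0), sobel(frame, axis=1))
    enhanced = frame + 0.5 * edges               # crude edge enhancement
    fy = grid[0] / frame.shape[0]
    fx = grid[1] / frame.shape[1]
    small = zoom(enhanced, (fy, fx), order=1)    # one value per electrode/pixel
    return np.clip(small, 0, 255).astype(np.uint8)
```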
The objective of an image compression algorithm is to exploit the redundancy in an image so that a smaller number of bits can be used to represent the image while maintaining an "acceptable" visual quality for the decompressed image. The embedded zerotree wavelet (EZW) algorithm is a simple, yet remarkably effective, image compression algorithm with the property that the bits in the bit stream are generated in order of importance, yielding a fully embedded code. EZW is computationally very fast and among the best image compression algorithms known today. This paper proposes a technique for image compression which uses wavelet-based image coding in combination with Huffman and arithmetic encoders for further compression. Applying Huffman coding followed by arithmetic compression gives about 15% additional compression.
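The entropy-coding stage applied after the wavelet coder is standard Huffman coding; a compact sketch of building a Huffman code over quantized coefficient symbols (using Python's heapq) is given below. It is a generic illustration, not the paper's implementation.

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman code (symbol -> bit string) from a sequence of symbols."""
    freq = Counter(symbols)
    heap = [[weight, i, [sym, ""]] for i, (sym, weight) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                       # degenerate single-symbol input
        return {heap[0][2][0]: "0"}
    counter = len(heap)                      # tie-breaker for equal weights
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        for pair in lo[2:]:                  # prepend bits as we merge subtrees
            pair[1] = "0" + pair[1]
        for pair in hi[2:]:
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0], counter] + lo[2:] + hi[2:])
        counter += 1
    return {sym: code for sym, code in heap[0][2:]}
```

Symbols that occur often (for wavelet data, mostly zeros and small coefficients) receive the shortest codewords, which is where the extra compression over the raw bit stream comes from.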
Image compression is a widely recognized, up-to-date tool for decreasing communication bandwidth and saving transmission power. It should reproduce a good-quality image after compression at low bit rates. Set partitioning in hierarchical trees (SPIHT) is a wavelet-based, computationally very fast algorithm and is among the best image compression and transmission algorithms, offering good compression ratios, fast execution time and good image quality. Precise Rate Control (PRC) is a distinct characteristic of SPIHT. Image compression based on Precise Rate Control and fast coding time is principally analyzed in this paper. Experimental results show that, in the low-bit-rate case, the modified algorithm with fast coding time and Precise Rate Control reduces the execution time and improves the quality of the reconstructed image, both in PSNR and perceptually, compared with the original algorithm at the same low bit rate.
Neighborhood coding was proposed to encode binary images. Previously, this coding scheme presented good results on the problem of handwritten character recognition. In this article, we extend the coding scheme so that it can be applied as an image shape descriptor and in a bilevel image compression method. An algorithm to reduce the number of codes needed to reconstruct the image without loss of information is presented. Using exactly the same set of reduced codes, a lossless compression method and a shape recognition system are proposed. The reduced codes are used with Huffman coding and RLE (Run-Length Encoding) to obtain a compression rate comparable to well-known image compression algorithms such as LZW and CCITT Group 4. For the shape recognition task we applied a template-matching algorithm to the set of strings generated by the code reduction procedure. We tested this method on the MPEG-7 Core Experiment Shape-1 part A2 set and the binary image compression challenge database.
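RLE on a row-major scan of the bilevel image is one of the two entropy stages named above; a minimal sketch follows. Representing the runs as plain integer arrays is an assumption made here for clarity; in the paper the runs are further entropy coded with Huffman coding.

```python
import numpy as np

def rle_encode(binary_img):
    """Run-length encode a binary image scanned row by row: returns (first_value, run_lengths)."""
    flat = np.asarray(binary_img, dtype=np.uint8).ravel()
    change = np.flatnonzero(np.diff(flat)) + 1          # indices where the value flips
    boundaries = np.concatenate(([0], change, [flat.size]))
    runs = np.diff(boundaries)
    return int(flat[0]), runs

def rle_decode(first_value, runs, shape):
    """Rebuild the binary image from the first value and the alternating run lengths."""
    values = (np.arange(len(runs)) + first_value) % 2
    return np.repeat(values, runs).reshape(shape).astype(np.uint8)
```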
Image compression (IC) plays an important part in Digital Image Processing (DIP), and it is also essential for the effective transmission and storage of images. Image compression is basically reducing the size of an image without degrading the quality of the picture; it is, in essence, data compression applied to digital images. The objective is to reduce redundancy in the image data so that information can be stored or transmitted efficiently. This paper gives a review of the kinds of images and their compression strategies. An image, in its original form, carries a large amount of data and therefore not only requires a large amount of memory for its storage but also makes transmission over a limited-bandwidth channel difficult. Hence, one of the critical factors for image storage or transmission over any exchange medium is image compression, which brings file sizes down to practicable, storable and communicable dimensions.
The fast growth of digital image applications such as web sites, multimedia and even personal image archives has encouraged researchers to develop advanced techniques to compress images. Many compression techniques have been introduced, whether reversible or not; most of them are based on statistical analysis of repetition or on mathematical transforms to reduce the size of the image. This research concerns applying the genetic programming (GP) technique to image compression. To achieve that goal, a parametric study was carried out to determine the optimum combination of GP parameters that achieves maximum quality and compression ratio. For simplicity, the study considered 256-level grayscale images. Special C++ software was developed to carry out all calculations, and the compressed images were rendered using Microsoft Excel. The study results were compared with JPEG results, as one of the most popular lossy compression techniques. It is concluded that using optimum GP parameters leads to acceptable quality (objectively and subjectively) at compression ratios ranging between 2.5 and 4.5.