Image compression (Computer Science) Research Papers

2025, EAI Endorsed Transactions on AI and Robotics

The focus of this paper is to describe the development and architecture of a new parallel algorithm for image compression in a High Performance Computing (HPC) context. The proposed algorithm applies parallel processing strategies to the image data set in order to minimize the amount of computation while optimizing the compression ratio and speed. By combining new parallelism strategies with newly developed and existing data compression methods, the result is visibly better in both compression ratio and time than comparable existing algorithms. Experimental results obtained on different HPC environments show that the proposed solution is scalable and efficient; it should therefore be considered for applications where real-time image processing is crucial. The novelty of this work lies in the design and implementation of a parallel algorithm that simultaneously integrates bitplane coding, wavelet transforms, and predictive coding within a scalable HPC framework, an approach not yet addressed comprehensively by existing methods. This hybrid strategy provides significant improvements in compression ratio, execution time, and scalability compared to state-of-the-art image compression techniques.
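
The abstract gives no implementation details of the hybrid bitplane/wavelet/predictive coder, but the general idea of parallelising compression across image tiles can be sketched in a few lines of Python. The tile size, worker count, and the per-tile zlib coder below are placeholder assumptions, not the authors' method:

# Minimal sketch of tile-parallel image compression (illustrative only; the
# paper's actual hybrid bitplane/wavelet/predictive pipeline is not shown here).
import zlib
import numpy as np
from multiprocessing import Pool

def compress_tile(tile: np.ndarray) -> bytes:
    # Stand-in for the per-tile coder: plain DEFLATE on the raw bytes.
    return zlib.compress(tile.tobytes(), level=9)

def parallel_compress(image: np.ndarray, tile_size: int = 256, workers: int = 4):
    h, w = image.shape[:2]
    tiles = [image[r:r + tile_size, c:c + tile_size]
             for r in range(0, h, tile_size)
             for c in range(0, w, tile_size)]
    with Pool(workers) as pool:
        return pool.map(compress_tile, tiles)

if __name__ == "__main__":
    img = np.random.randint(0, 256, (1024, 1024), dtype=np.uint8)  # placeholder image
    streams = parallel_compress(img)
    print(sum(len(s) for s in streams), "compressed bytes from", len(streams), "tiles")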

2025, Acta Polytechnica Hungarica

Simultaneous localization and mapping (SLAM) has been highly studied in the last decade. It allows the estimation of the camera pose of a mobile device and the creation of a map of the surrounding environment concurrently. Recently,... more

Simultaneous localization and mapping (SLAM) has been studied intensively over the last decade. It allows a mobile device's camera pose to be estimated while a map of the surrounding environment is built concurrently. Recently, Visual SLAM (VSLAM) has become the most widely used state-of-the-art technique for implementing SLAM tasks due to its reduced cost, smaller size, and affordability. However, the intensive computation of VSLAM systems does not fit a wide range of mobile devices with limited resources and energy. A possible solution is to split its functionality between mobile devices and the edge cloud. This solution highlights the need for efficient visual data compression methods integrated within VSLAM systems. This work proposes a multi-level encoding method for visual data frame compression integrated within the monocular Oriented FAST and Rotated BRIEF SLAM (ORB-SLAM) system. The performance of the proposed system is compared to corresponding ORB-SLAM systems that adopt the most popular classical still-image compression standards, the Joint Photographic Experts Group (JPEG) standard and the more advanced JPEG 2000, in terms of reconstruction quality, robot trajectory estimation, and computational complexity.

2025, 2025 26th International Carpathian Control Conference (ICCC)

Visual Simultaneous Localization and Mapping (VSLAM) is a crucial algorithm used in mobile robots to determine their position relative to the encirclement environment. In multirobot architectures, mobile robots communicate with a base... more

Visual Simultaneous Localization and Mapping (VSLAM) is a crucial algorithm used by mobile robots to determine their position relative to the surrounding environment. In multi-robot architectures, mobile robots communicate with a base station or a centralized edge device. However, the computational complexity of the VSLAM algorithm, due to the processing of massive visual information for tracking and mapping, poses challenges for deploying the algorithm entirely on the mobile robot. To address these limitations, this paper introduces an efficient architecture that partitions the VSLAM framework between mobile robots and a centralized edge. This approach complies with resource limitations and optimizes energy consumption while minimizing the robot's size and weight. In addition, the paper proposes a communication module comprising an encoding and decoding framework to achieve effective data communication between the mobile robots and the edge device. The performance of the proposed system is evaluated and compared with a corresponding architecture that employs the baseline JPEG technique in terms of trajectory accuracy, data quality, and execution time.

2025, WhatIf.Rocks

We propose the Recursive Harmonic Systems (RHS) framework as a unified theoretical paradigm connecting quantum mechanics, gravity, electromagnetism, and a hypothesized fifth force via scalar harmonic resonance. At its core lies a "Divine... more

2025

»The fact that we live in a time when clouds can be calculated in all their randomness thanks to Mandelbrot's fractals and then appear on a screen as calculated, unfilmed images distinguishes the present from any previous time.« (Kittler 2002:37)

2025, International Journal of Creative Research Thoughts (IJCRT)

MRI digital images have multivariate texture complexities, so there is a need for a compression algorithm that compresses such images effectively, with improved quality of the compressed image and reduced blocking artifacts. This paper gives an efficient algorithm that compresses an MRI image by dividing it into 8×8 high-texture blocks and 16×16 low-texture blocks using image-texture-adaptive non-uniform rectangular partitioning, applying transform coding to blocks of different sizes, and finally using a flexible adaptive quantization scheme that takes the human visual system (HVS) into account. The algorithm discussed in this paper is more efficient than the DCT algorithm, is more tolerant of low bit rates, produces a high-quality compressed image, and suppresses the blocking artifacts that arise with DCT-based image compression.

2025, Asian Journal of Applied Science and Technology (AJAST)

Digital image compression is a modern technology with a wide range of uses in fields such as machine learning, medicine, and research. Many techniques exist in image processing. This paper analyses compression using the Discrete Cosine Transform (DCT) together with special coding methods to produce enhanced results. DCT is a technique used to transform the pixels of an image into elementary frequency components; it converts each pixel value of an image into its corresponding frequency value. The formula used during compression must be reversible without losing image quality; formulae for both the lossy and lossless compression techniques used in this project are applied. The research tests Magnetic Resonance Images (MRI) using a set of brain images. During program execution, the original image is loaded, compression algorithms are applied to it, and a decompression algorithm is then run on the compressed file to produce the reconstructed image.
Keywords: Image compression, Discrete cosine transform, Magnetic resonance images, Entropy.
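
As a rough illustration of the block-DCT step such papers describe, the following Python sketch applies an 8×8 2D DCT and discards small coefficients; the thresholding rule is a crude stand-in for the quantisation actually used and is not taken from the paper:

# Illustrative 8x8 block DCT with coefficient thresholding (not the paper's exact coder).
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(block):
    return idct(idct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def compress_block(block, keep=0.1):
    coeffs = dct2(block.astype(float))
    # Zero out small coefficients: a crude stand-in for quantisation.
    thresh = np.quantile(np.abs(coeffs), 1.0 - keep)
    coeffs[np.abs(coeffs) < thresh] = 0.0
    return coeffs

def reconstruct_block(coeffs):
    return np.clip(idct2(coeffs), 0, 255).astype(np.uint8)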

2025, Irish Interdisciplinary Journal of Science & Research (IIJSR)

Compression can be defined as the representation of information in a reduced form compared to the original information. Image compression is extremely important today because of the increased demand for sharing and storing multimedia data. Compression is concerned with removing redundant or superfluous information from a file to reduce its size. Reducing the file size saves both memory and the time required to transmit and store data. Lossless compression techniques are distinguished from lossy ones. This paper surveys the literature on various compression techniques and compares them.
Keywords: Image processing, MRI images, Compression, Transform.

2025, IIJSR

The growing demand for medical image storage and transmission has resulted in a shortage of memory and bandwidth, and compression is used to address these issues. Clinical image compression is used to improve image quality, reduce bit rate, increase compression efficiency for storage and transmission, and reduce cost. MRI images are highly clinical, and storing and transmitting thousands of MRI images requires a great deal of storage space and bandwidth. Thus, high-quality MRI image compression is an active research focus. Many MRI compression strategies with low compression rates cause loss of data on lesions and lead to misdiagnosis. This research proposes several MRI image compression methods. Our main goals are to provide more highly compressed clinical images and to support early detection and diagnosis followed by therapy using multi-resolution compression technology. A two-dimensional (2D) picture sequence is created by first converting 3D MRI scans into 2D images. Range and area blocks are then arranged according to the 3D object's spatio-temporal similarity. In addition, the proposed technique uses the wavelet transform and the MRG algorithm; the wavelet stage comprises transformation, quantization, and entropy coding to compress the most significant part of the ROI using DWT. Non-ROI regions are compressed using DCT and MHE (merging-based Huffman encoding). Finally, residual compensation is used to provide good decompression quality.

2025

Alzheimer's disease (AD) is among the neurological diseases (dementias) that most frequently afflict the elderly. In this research we introduce a novel machine learning-based approach to differentiate individuals in the early stages of AD. Preprocessing, feature selection, training data, and classifiers all affect the outcomes of machine learning-based methods for classifying AD. A novel composite comprehensive MRI-based development for Alzheimer's disease (AD-DCP-MRI) is provided in this chapter. The results were analyzed in terms of accuracy, precision, recall, and F1-score using a data package that included T1-weighted clinical MRI temporal data from OASIS. Our proposed model is effective for AD categorization, as evidenced by its increased accuracy. These methods can also be successfully applied in the medical field to help with the early identification and diagnosis of the disease.

2025, Bionics of Intelligence 1(100): pp 23-35

The methods of image compression for storage in decentralized blockchain repositories, specifically in the IPFS system, are examined. A methodology for evaluating the efficiency of different image compression algorithms is proposed, incorporating both lossless compression (PNG, TIFF, GIF) and lossy compression (JPEG, WEBP). The comparison includes metrics such as file size, image quality (measured by PSNR), upload time to IPFS, and bandwidth for each compression method. Each image is hashed, and the hash is stored in a smart contract within a local blockchain network deployed using Ganache. Such methods enable the optimization of image storage and transfer within blockchain repositories, ensuring a balance between image quality, file size, and access speed.
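
A minimal sketch of the evaluation loop described above might look as follows in Python (the file path, format, and quality are placeholders, and the IPFS upload and Ganache smart-contract steps are omitted): compress an image, measure PSNR against the original, and hash the compressed bytes for later storage in a contract.

# Sketch of the evaluation methodology: compress, measure PSNR, hash the result.
import hashlib
import io
import numpy as np
from PIL import Image

def psnr(original: np.ndarray, restored: np.ndarray) -> float:
    mse = np.mean((original.astype(float) - restored.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

def evaluate(path: str, fmt: str = "WEBP", quality: int = 80):
    img = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    img.save(buf, format=fmt, quality=quality)   # requires a Pillow build with WEBP support
    data = buf.getvalue()
    restored = np.array(Image.open(io.BytesIO(data)).convert("RGB"))
    return {
        "size_bytes": len(data),
        "psnr_db": psnr(np.array(img), restored),
        "sha256": hashlib.sha256(data).hexdigest(),  # hash that would go into the smart contract
    }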

2024

This document presents a novel diffusion-based video compression technique. We leverage the inherent expressiveness, photorealism, and 3D awareness of denoising diffusion generative AI models as a powerful general-purpose prior that requires only a small complement of low-quality guidance data (or "hints") to produce video with high spatio-temporal perceptual quality and optional novel view synthesis. Our use of small, fine-tuned, low-rank adaptations (e.g., LoRA) efficiently compresses a batch of frames and, when combined with the general-purpose base model, allows compression that can significantly exceed the performance of state-of-the-art methods such as H.264 and H.265 in terms of perceptual and technical quality relative to compressed size.

2024, IEEE Access

Amidst the global health crisis, face masks persist as a crucial measure in mitigating contagious diseases. Wearing face masks is advised as a comprehensive strategy to lessen the transmission of infectious and airborne diseases, and face masks are required for entry into many hospitals, offices, and other establishments. The proposed Smart Surveillance System assists in identifying individuals who are not wearing face masks in order to enforce this. To address this issue, we propose a face mask detection system that uses convolutional neural networks (CNN) and is reliable, lightweight, accurate, and efficient. We train and test the custom model, BIDNet, on the 'Face Mask Detection ~12K Images Dataset' (FMD-12KID) and evaluate its performance on two additional datasets: the 'Face Mask Detection Dataset' (FMDD) and the 'Face Mask Lite Dataset' (FMLD). An impressive 98.99% classification accuracy is achieved on FMD-12KID. BIDNet outperforms all other existing models in a comparative analysis evaluating Accuracy, F1-Score (FS), Recall Score (RS), Precision Score (PS), training time, prediction time, and number of parameters. For real-time capability, BIDNet is deployed on a web application, where it performs extraordinarily well, proving its capabilities in real-time scenarios. This research is a significant step toward developing intelligent systems that support public health regulations and safeguard communities in times of emergency, and a major advance in smart surveillance systems that use deep learning to address critical health problems such as face mask detection. The effectiveness of CNNs in practical scenarios is demonstrated by this study.

2024

Copying, storing, and distributing this work, in whole or in part, for commercial purposes is prohibited. Reproduction, storage, and distribution for non-profit, educational, or research purposes are permitted, provided that the source is cited and this notice is retained. Acknowledgements: First of all, I would like to thank my parents for the guidance and moral support they have offered me all these years.

2024, International Journal of Computer Applications

Edge detection is very important in the digital world for finding information about an object, such as range, boundaries, levels, and hidden or missing fields or edges. There are very few approaches and algorithms that show the finer details of an object. The proposed edge detection technique reveals hidden edges in a much simpler manner.

2024

The Versatile Video Coding (VVC) standard is currently under development by the Joint Video Exploration Team (JVET). An Adaptive Multiple Transforms (AMT) approach has been announced; it is based on different transform modules that provide efficient coding. However, the AMT solution raises several issues, especially regarding the complexity of the selected set of transforms. This can be an important issue, particularly for future industrial adoption. This paper proposes an efficient hardware implementation of the most used transform in the AMT approach, the DCT-II. The developed circuit is adapted to different block sizes and can reach a minimum frequency of 192 MHz, allowing an optimized execution time.

2024, Computers, Materials & Continua

The transmission of video content over a network raises various issues relating to copyright authenticity, ethics, legality, and privacy. The protection of copyrighted video content is a significant issue in the video industry, and it is essential to find effective solutions to prevent tampering with and modification of digital video content during its transmission through digital media. However, there are still many unresolved challenges. This paper aims to address those challenges by proposing a new technique for detecting moving objects in digital videos, which can help prove the credibility of video content by detecting any fake objects inserted by hackers. The proposed technique uses two methods, the H.264 method and a color-feature extraction method, to embed and extract watermarks in video frames. The study tested the performance of the system against various attacks and found it to be robust. The evaluation was done using metrics such as Peak Signal-to-Noise Ratio (PSNR), Mean Squared Error (MSE), Structural Similarity Index Measure (SSIM), Bit Correction Ratio (BCR), and Normalized Correlation. The accuracy of identifying moving objects was high, ranging from 96.3% to 98.7%. The system was also able to embed a fragile watermark with a success rate of over 93.65% and had an average hiding capacity of 78.67. The reconstructed video frames had high quality, with a PSNR of at least 65.45 dB and an SSIM of over 0.97, making the embedded watermark imperceptible to the human eye. The system also had an acceptable average time difference (T = 1.227/s) compared with other state-of-the-art methods.

2024, ArXiv

Recent advances in capsule endoscopy systems have introduced new methods and capabilities. The capsule endoscopy system, by observing the entire digestive tract, has significantly improved the diagnosis of gastrointestinal disorders and diseases. The system has challenges that need to be addressed, such as the need to enhance the quality of the transmitted images, low transmission frame rates, and battery lifetime. One of the important parts of a capsule endoscopy system is the image compression unit. Better compression of images increases the frame rate and hence improves the diagnostic process. In this paper, a high-precision compression algorithm with a high compression ratio is proposed. The algorithm uses the similarity between frames to compress the data more efficiently.

2024, 2012 6th International Conference on Sciences of Electronics, Technologies of Information and Telecommunications (SETIT)

Lossless image coding is a very important technique for efficiently and perfectly reducing image storage, with many application fields such as medical images, seismic data, digital archives, and digital documentation. In this paper, we present a new approach to lossless binary image coding that combines two coding processes. The first is based on neighbourhood code reduction, which transforms the binary image into a proper mapping representation. The second uses arithmetic coding, which produces a bitstream with a good compression ratio. The performance of the algorithm is tested on different binary images containing Arabic letters and compared with algorithms such as LZW and CCITT Group 4.

2024, Acta Polytechnica Hungarica

This paper provides an edge detection analysis on images that contain different numbers of details (small, medium, and high) and that are compressed by different compression algorithms: JPEG, JPEG2000, and SPIHT. Images from the BSD (Berkeley Segmentation Database) were used and compressed with different numbers of bits per pixel. The analysis was performed for five edge detectors: Canny, LoG, Sobel, Prewitt, and Roberts. The fidelity of the detected edges was determined using the objective measures Figure of Merit (FOM), F-measure, and Performance Ratio (PR), where the reference value was taken from the ground-truth image. Based on the results presented in the tables, it can be concluded that edge detection behaves differently depending on the number of bits per pixel, the applied compression algorithm, and the number of details in the image. The Roberts operator proved to be the best solution when better edge detection must be performed over compressed images with a small number of details, whereas Canny shows better results for images with a high number of details.

2024, Telfor Journal

The aim of this paper is to examine the impact of successive B frames for different compression techniques (H.264 and H.265) and resolutions (4K, 2K, Full HD and SD) when their quality changes in a television system. The analysis was done for the cases where the number of successive B frames in the Group of Pictures (GOP) was B = 0 and B = 4. The results obtained for the analysis parameters (compression ratio, Peak Signal-to-Noise Ratio (PSNR), bitrate, P frames and B frames) are presented in tables and graphs, from which the conclusions are drawn.

2024, Proceedings of the Estonian Academy of Sciences

The influence of different wavelet transformations and decompositions on edge detection was examined by applying suitable operators to images of various complexities. Berkeley Segmentation Database images with the corresponding ground truth were used. These images were categorized according to their degree of complexity into three groups (small, medium, and large number of details) using the discrete cosine transformation and the discrete wavelet transformation. Three levels of decomposition for eight wavelet transformations and five edge detection operators were applied to these images. As objective measures of edge detection quality, the parameters "performance ratio" and "F-measure" were used. The obtained results showed that edge detection operators behave differently on images with different numbers of details. Decomposition significantly degrades the image, but useful information can still be extracted at the third level of decomposition, because images with different numbers of details behave differently at each level. For an image with a certain number of details, decomposition level 3 in some cases gives better results than level 2. The obtained results can be applied to the compression of images of different complexity. By selecting a certain combination of operators and decomposition levels, a higher compression ratio can be achieved while preserving a larger amount of useful image information. Depending on the image resolution, whereby the number of details varies, operator optimization can be performed according to the decomposition level in order to obtain the best possible edge detection.

2024, Heman Joyekurrun

The JPEG image compression standard is the most widely used standard today. Due to the increasing demand for image transmission in communication, research in the field of image compression has increased significantly. Image compression is also very important for efficient transmission and storage. This project analyses different lossy image compression strategies, using the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT), developed for JPEG image transmission. Both image compression techniques were evaluated on RGB and grayscale images in order to determine which technique achieves the better compression ratio and PSNR value. In the proposed technique, the image was converted to JPEG and DCT image compression was performed to reduce the consumption of expensive resources such as storage space and transmission bandwidth. DWT image compression was performed using a Daubechies wavelet to achieve the best image compression, with a better PSNR value for the reconstructed image. A gain of 4 dB in PSNR was obtained with DWT over DCT image compression. Four image transmission techniques were built and combined to form a complete communication system: flattening the image into a data string, sending the image with overlay text, sending via the TCP/IP protocol, and sending via an e-mail system. This proposed scheme therefore provides an enhanced solution for the transmission of JPEG images, and the 4 dB gain implies a significant improvement in image fidelity. This technique can be applied in real-life applications and communication systems.
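
A hedged sketch of the DWT side of such a comparison, assuming the PyWavelets package is available (the wavelet name and threshold below are illustrative, not the project's settings):

# Rough sketch of DWT-based compression with a Daubechies wavelet and a PSNR check.
import numpy as np
import pywt

def dwt_compress(img: np.ndarray, wavelet: str = "db4", level: int = 2, thresh: float = 10.0):
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    # Keep the approximation band, hard-threshold the detail sub-bands.
    slim = [coeffs[0]] + [tuple(pywt.threshold(d, thresh, mode="hard") for d in band)
                          for band in coeffs[1:]]
    return np.clip(pywt.waverec2(slim, wavelet), 0, 255)

def psnr(a: np.ndarray, b: np.ndarray) -> float:
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)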

2024, Asilomar Conference on Signals, Systems and Computers

The JPEG algorithm for compression of grayscale and color images was implemented in software and tested on nine ISO color test images. Image-optimized Huffman codes were generated for each image, and the performance of these codes was compared with the proposed default Huffman code as well as with a code optimized for the average of the nine images. Statistics were generated for the run lengths and bits of significance required to encode the quantized DC and AC components of the Discrete Cosine Transforms (DCTs) for the luminance and chrominance image arrays. Statistics were also formed to determine the fraction of the code used to locate the positions of the non-zero spectral samples as opposed to the values of those samples.

2024, IRJET

Images are one of the most important visual representations, used in almost every field. They require memory for their storage, which necessitates a lot of space. Image compression plays a pivotal role in reducing the size of an image so that more images can be stored, thereby increasing the transmission speed. Many coding algorithms have been written to compress images and reduce redundancy. This paper analyses and compares the Huffman and Arithmetic coding algorithms with reference to image compression.
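
For orientation, a minimal Huffman coder over a stream of pixel values can be written in a few lines of Python; this is an illustrative sketch, not the specific implementation compared in the paper:

# Minimal Huffman coder over pixel values (illustrative only).
import heapq
from collections import Counter

def huffman_code(data: bytes) -> dict:
    freq = Counter(data)
    heap = [[f, i, sym, None, None] for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        heapq.heappush(heap, [lo[0] + hi[0], counter, None, lo, hi])
        counter += 1
    codes = {}
    def walk(node, prefix=""):
        if node[2] is not None:            # leaf: holds a symbol
            codes[node[2]] = prefix or "0"
        else:
            walk(node[3], prefix + "0")
            walk(node[4], prefix + "1")
    walk(heap[0])
    return codes

data = bytes([10, 10, 10, 42, 42, 7])      # toy "pixel" stream
codes = huffman_code(data)
bits = sum(len(codes[b]) for b in data)
print(codes, bits, "bits vs", len(data) * 8, "raw bits")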

2024

This paper presents two new efficient algorithms for contour approximation. The proposed algorithm is compared with the Ramer (good quality), Triangle (faster), and Trapezoid (fastest) methods, which are briefly described in this work. The Cartesian coordinates of an input contour are processed in such a manner that the contour is finally represented by a set of selected vertices of its edge. The paper presents the main idea of the analyzed procedures for contour compression. For comparison, the mean square error and signal-to-noise ratio criteria are used. The computational time of the analyzed methods is estimated from the number of numerical operations. Experimental results are obtained in terms of image quality, compression ratio, and speed. The main advantage of the analyzed algorithm is the small number of arithmetic operations compared with existing algorithms.

2024

In this paper we implement a digital watermarking technique based on a single-level discrete wavelet transform (DWT). In this technique, watermark bits are embedded into selected high-pass filter coefficients of a cover image by using zonal sampling methods. This scheme requires side information (the high-pass filter coefficients) for watermark recovery. Experimental results show that the watermark is robust to geometric attacks (Gaussian noise is used in this paper). The contours of the original image can be extracted easily from the blurred watermarked image using the SSPCE (single step parallel contour extraction) method. To compare the results, the mean square error, signal-to-noise ratio criteria, and compression ratio (or bits per pixel) were used. Experimental results for contour extraction provide comparative results between these algorithms in terms of normalized correlation (NC). The simplicity of the method, with an accepted level of reconstruction, is its main advantage.

2024

This paper presents two algorithms for contour extraction from grey-level images. The first proposed algorithm is applied in the spectral domain using a single-level wavelet transform (WT). The single step parallel contour extraction (SSPCE) method is used on the binary image after the inverse wavelet transform is applied to the detail images. The contours are then compressed using either the Ramer or the Triangle method in the spatial domain. The second proposed algorithm is applied in the spectral domain using the discrete cosine transform (DCT). The algorithm of contour extraction and image compression using a low-pass filter (LPF) and a high-pass filter (HPF) is presented and compared with the traditional zonal sampling algorithm of low-pass and high-pass filters. The effectiveness of contour extraction and compression for the test image is evaluated. The paper presents the main idea of the analyzed procedures for both contour extraction and image compression. To compare the results, the mean s...

2024

This paper presents a method of contour extraction and compression from a grey-level image. The proposed algorithm is applied in the spectral domain using a single-level wavelet transform (WT). The single step parallel contour extraction (SSPCE) method is used on the binary image after the inverse wavelet transform is applied to the detail images. The contours are then compressed using either the Ramer or the Trapezoid method in the spatial domain. The effectiveness of contour extraction and compression for different classes of images is evaluated. The paper presents the main idea of the analyzed procedure for both contour extraction and image compression. To compare the results, the mean square error, signal-to-noise ratio criteria, and compression ratio (bits per pixel) were used. The simplicity of obtaining the compressed image and extracted contours with an accepted level of reconstruction is the main advantage of the proposed algorithm. Key-Words: Wavelet transform, Image compression, Contour extrac...

2024

This paper presents a method of contour extraction and compression from a grey-level image. The proposed algorithm is applied in the spectral domain using a single-level wavelet transform (WT). The single step parallel contour extraction (SSPCE) method is used on the binary image after the inverse wavelet transform is applied to the detail images. The contours are then compressed using either the Ramer, Triangle, or Trapezoid method in the spatial domain. The effectiveness of contour extraction and compression for different classes of images is evaluated. The paper presents the main idea of the analyzed procedure for both contour extraction and image compression. To compare the results, the mean square error, signal-to-noise ratio criteria, and compression ratio (or bits per pixel) were used. The simplicity of obtaining the compressed image and extracted contours with an accepted level of reconstruction is the main advantage of the proposed algorithm.

2024

This paper presents a method of contour extraction and compression from a grey-level image. The proposed algorithm is applied in the spectral domain using a single-level wavelet transform (WT). The single step parallel contour extraction (SSPCE) method is used on the binary image after the inverse wavelet transform is applied to the detail images. The contours are then compressed using either the Ramer, Tangent, Triangle, or Trapezoid method in the spatial domain. The effectiveness of contour extraction and compression for different classes of images is evaluated. The paper presents the main idea of the analyzed procedure for both contour extraction and image compression. To compare the results, the mean square error, signal-to-noise ratio criteria, and compression ratio (or bits per pixel) were used. The simplicity of obtaining the compressed image and extracted contours with an accepted level of reconstruction is the main advantage of the proposed algorithm.

2024

A new approach to contour data compression is presented in the paper: the Cartesian coordinates of an input contour are processed in such a way that the final contour is represented by a one-dimensional set of distances with a constant, regular angle between them. The selection of vertices and the final algorithm for contour compression are described. A comparison of the proposed method with the Ramer algorithm was also performed, using the mean square error and SNR criteria. Finally, the results of the experiments and the advantages and drawbacks of the proposed method are discussed.
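
The Ramer baseline referred to here is the classic Ramer-Douglas-Peucker recursion; a short Python sketch of that baseline (with an arbitrary tolerance and a toy contour) is shown below, while the paper's own polar-distance representation is not reproduced:

# Sketch of the Ramer (Ramer-Douglas-Peucker) baseline used for comparison.
import math

def _point_line_dist(p, a, b):
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    norm = math.hypot(dx, dy)
    if norm == 0:
        return math.hypot(px - ax, py - ay)
    return abs(dy * (px - ax) - dx * (py - ay)) / norm

def ramer(points, eps):
    if len(points) < 3:
        return list(points)
    dists = [_point_line_dist(p, points[0], points[-1]) for p in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i - 1] > eps:
        # Split at the farthest point and recurse on both halves.
        return ramer(points[:i + 1], eps)[:-1] + ramer(points[i:], eps)
    return [points[0], points[-1]]

contour = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7)]
print(ramer(contour, eps=0.5))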

2024

This paper presents and develops a new Trapezoid-method algorithm for the contour approximation of Arabic letters. The proposed algorithm is compared with existing methods such as the Ramer (good quality), Tangent (faster), and Triangle (much faster) methods, which are briefly described in this work. The Cartesian coordinates of an input contour are processed in such a manner that the contour is finally represented by a set of selected vertices of its edge. The paper presents the main idea of the analyzed procedure for contour compression. For comparison, the mean square error and signal-to-noise ratio criteria are used. The computational time of the analyzed method is estimated from the number of numerical operations. Experimental results are obtained in terms of image quality, compression ratio, and speed. The main advantages of the analyzed algorithm are its simplicity and the small number of arithmetic operations compared with existing algorithms.

2024, International Journal of Computer Science & Engineering Survey

Fingerprint analysis plays a crucial role in legal matters such as crime investigation and security systems. Due to the large number and size of fingerprint images, data compression has to be applied to reduce the storage and communication bandwidth requirements of those images. Many types of wavelet have been used for fingerprint image compression. In this paper we use Coiflet-type wavelets, and our aim is to determine the most appropriate Coiflet-type wavelet for better compression of a digitized fingerprint image. To achieve this goal, the Retain Energy (RE) and the Number of Zeros (NZ), in percent, are determined for different Coiflet-type wavelets at different threshold values, at a fixed decomposition level of 3, using the wavelet and wavelet packet transforms. We used an 8-bit grayscale left-thumb digitized fingerprint image of size 480×400 as a test image.
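
The two reported measures can be computed directly from the thresholded wavelet coefficients. The sketch below assumes PyWavelets; the wavelet name 'coif1' and the threshold are illustrative stand-ins for the values actually swept in the paper:

# Sketch of Retain Energy (RE) and Number of Zeros (NZ) at a given threshold.
import numpy as np
import pywt

def re_nz(image: np.ndarray, wavelet: str = "coif1", level: int = 3, thresh: float = 20.0):
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
    arr, _ = pywt.coeffs_to_array(coeffs)          # flatten all sub-bands into one array
    kept = np.where(np.abs(arr) >= thresh, arr, 0.0)
    retain_energy = 100.0 * np.sum(kept ** 2) / np.sum(arr ** 2)
    num_zeros = 100.0 * np.mean(kept == 0.0)
    return retain_energy, num_zeros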

2024, Journal of Advanced Research in Dynamical and Control Systems

Recent years have seen considerable development in the fields of telemedicine and radiology. Telemedicine is the use of telecommunications and information technology to provide health care services to remote patients. Telemedicine, characterized by the transmission of medical information such as Electronic Patient Records (EPR) and images between users, is one of the emerging fields in medicine. An advantage of telemedicine is the levelling of regional variations. In spite of improvements in the capacity of communication technologies, compression of radiological data remains difficult. This research work proposes a Radiological Image Compression Prototype (RICP) applied in telemedicine. The prototype takes advantage of Efficient XML Interchange (EXI), which delivers the next best approach through XML compression, allowing an increase in data throughput through reductions in file size, and is supported by the observation of the increasing use of XML within the IT world. The work adopted binary XML technology, and improved results were achieved. The work shows that the choice of methodology is application specific. The experimental results obtained on a collection of radiological images demonstrate that the proposed methodology provides competitive performance compared with typical compression methods in terms of Compression Ratio (CR), memory usage, and computation time.

2024

Health has been a topical issue for years, as it affects the development of societies in various ways. As a result, several countries worldwide signed up to the Millennium Development Goals, of which health delivery is a part. However, several diseases have affected families, communities, and nations at large, and although governments are making efforts to control them, they remain a challenge. One such disease is diabetes. Several mechanisms have been put in place to help control as well as monitor diabetes, as it is affecting many people these days. This research was conducted to assess the effectiveness of internet-enabled mobile phones as a tool for self-management as well as a community support platform for those suffering from diabetes. The research was carried out on twenty patients in Harare. Diabetic patients read their blood sugar levels using glucometers at set intervals and fed these readings into their profiles on the...

2024

In recent years, the accelerated growth of digital media collections has established the need for tools for the efficient access and retrieval of visual information. The paper presents innovative content-based image retrieval (CBIR) techniques based on feature vectors formed from the DC coefficients of images transformed using DCT and FFT. Here the feature vector size per image is greatly reduced by taking only the DC coefficient of each R, G, and B component of the transformed image. The proposed CBIR techniques are implemented on a database of 1000 images spread across 10 categories. For each proposed CBIR technique, 50 queries (5 per category) are fired at the database, and the net average precision and recall are computed for all feature sets per transform. Overall, FFT surpasses the DCT transform in performance, with the highest precision and recall values.

2024

Image compression is widely used in everyday life for many purposes. The image compression process reduces the storage size required for an image to be stored on digital devices. Digital devices and computational resources have limited communication and storage capabilities, hence the need to compress high-quality digital images. For example, a single high-quality image may require 10 to 100 million bits to represent. If such images are communicated over a low-bandwidth network, image compression becomes necessary before communication to cope with this limitation. Image compression is therefore an important research area, and researchers improve the compression process by introducing new techniques or methods in this field. This paper discusses an approach that embeds an n-MM method before entropy encoding. It uses repeated sequences of values before entropy encoding (RLE) to obtain a high compression ratio.
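
A run-length encoding pass of the kind mentioned (applied before entropy coding) can be illustrated as follows; the n-MM preprocessing itself is not detailed in the abstract, so only the RLE idea is sketched:

# Illustrative run-length encoding pass applied before entropy coding.
def rle_encode(values):
    runs = []
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[j + 1] == values[i]:
            j += 1
        runs.append((values[i], j - i + 1))   # (value, run length)
        i = j + 1
    return runs

def rle_decode(runs):
    out = []
    for value, count in runs:
        out.extend([value] * count)
    return out

row = [0, 0, 0, 0, 255, 255, 7, 7, 7]
print(rle_encode(row))                        # [(0, 4), (255, 2), (7, 3)]
assert rle_decode(rle_encode(row)) == row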

2024, Proceeding of The First Libyan International Conference on Engineering Sciences & Applications 13-15 March 2023. FLICESA-LA-1315032023-ITE004

The R2 coefficients suggest that there are concerns about accuracy, even though the methodology has been established as a workable approach to 3D compression. Additionally, a cursory examination indicates that such a compression method is suitable for some applications but insufficient for others, such as 3D face recognition. When determining whether the technique is appropriate, factors to take into account include the required polynomial degree, which depends on the features of the data, as well as the fact that, as this example shows, at very high degrees the data becomes unstable. As a result, iterative strategies will be considered in this study.

2024, International Journal of Research in Advent Technology

This paper presents image compression using various transforms. Transforms such as the DCT, DST, Slant, Walsh, and Haar transforms are applied to a colour image. Full transform, column transform, and row transform are the three methods used for image compression. In each method the image is transformed and reconstructed using the inverse transform. The Root Mean Square Error (RMSE) and Peak Signal-to-Noise Ratio (PSNR) between the original and compressed images are calculated. From the implementation of these transforms it has been observed that the DCT, DST, and Slant transforms give better results in terms of RMSE, PSNR, and visual quality of the image. The Haar and Walsh transforms give acceptable results with a good compression ratio. The aim of transform-based compression is to compress the image without heavy losses and to achieve good visual quality after compression.

2024, Procedia Computer Science

The finger knuckle print (FKP) of a person has recently been shown by researchers to be unique and able to serve as a biometric feature. In this research the finger knuckle print is used as a biometric feature. Hybrid Wavelet Type I and Hybrid Wavelet Type II were used for feature extraction from the images in order to process them further. The important role of the hybrid wavelet transform is to combine the key features of two different orthogonal transforms so that the strengths of both transform wavelets are used. In this research different transforms, namely the Discrete Cosine Transform (DCT) and the Haar, Hartley, Walsh, and Kekre transforms, are used in combination to generate 20 different hybrid wavelets. These hybrid wavelets are applied to the database images to generate feature vector coefficients, which are then subjected to intra-class and inter-class testing, and their performance is evaluated and compared. The proposed system has given up to 80% EER for TAR-TRR (PI) for the hybrid wavelet formed using the Discrete Cosine Transform (DCT) and the Kekre transform for the multimodal multi-instance implementation.

2024, Procedia Computer Science

The finger knuckle print (FKP) of a person has recently been shown by researchers to be unique and able to serve as a biometric feature. In this research the finger knuckle print is used as a biometric feature. Hybrid Wavelet Type I and Hybrid Wavelet Type II were used for feature extraction from the images in order to process them further. The important role of the hybrid wavelet transform is to combine the key features of two different orthogonal transforms so that the strengths of both transform wavelets are used. In this research different transforms, namely the Discrete Cosine Transform (DCT) and the Haar, Hartley, Walsh, and Kekre transforms, are used in combination to generate 20 different hybrid wavelets. These hybrid wavelets are applied to the database images to generate feature vector coefficients, which are then subjected to intra-class and inter-class testing, and their performance is evaluated and compared. The proposed system has given up to 77% EER for TAR-TRR (PI) for the multi-algorithmic implementation of HWI+HWII.

2024, International journal of computer applications

The paper presents an iris recognition technique based on the concept of energy compaction. Compaction is done using the partial energies of transformed iris images. Various self-mutated hybrid wavelet transforms (SMHWT), namely "Cosine-Haar", "Cosine-Hartley", and "Cosine-Slant", are used to generate the feature vector of the iris images. The important task is reducing the size of the feature vector so that the performance of the system can be increased. The reduction in feature-vector size is achieved using the concept of partial energies: the feature vector shrinks immensely for 99%, 98%, 97%, and 96% of the energy, whereas it is extremely large when all coefficients of the transformed iris images are considered for 100% of the energy, which leads to more computation. The system gives better performance when partial energies are considered. The proposed technique is tested on the Palacky University Iris Database. The Genuine Acceptance Rate (GAR) is used as the metric to test the performance of the proposed iris recognition technique. The self-mutated hybrid wavelet transform "Cosine-Haar" gives the best GAR value compared with the other transforms considered. The results show that the proposed technique gives better accuracy when partial energies are considered rather than 100% of the energy.
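
The partial-energy idea can be sketched independently of the particular hybrid transform: keep the largest-magnitude coefficients until a chosen fraction of the total energy is retained. The transform producing the coefficients is abstracted away here, and the 97% figure is just one of the levels mentioned above:

# Sketch of partial-energy compaction of a coefficient matrix.
import numpy as np

def partial_energy_vector(coeffs: np.ndarray, fraction: float = 0.97) -> np.ndarray:
    flat = coeffs.ravel()
    order = np.argsort(np.abs(flat))[::-1]          # largest magnitude first
    energy = flat[order] ** 2
    cum = np.cumsum(energy) / np.sum(energy)
    k = int(np.searchsorted(cum, fraction)) + 1     # number of coefficients to keep
    return flat[order[:k]]

rng = np.random.default_rng(0)
c = rng.normal(size=(64, 64))                       # placeholder transformed image
print(len(partial_energy_vector(c, 0.97)), "of", c.size, "coefficients retained")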

2024

In this paper, a novel compression approach for large Arabic textual images is proposed. Initially, the input texts are segmented into patterns which represent sub-words. Then a pattern-matching procedure is used to find similar patterns within the image. Finally, to optimize the performance of this approach, adaptive arithmetic coding is used to encode the resulting data streams. Experimental results show that the average compression ratio of our approach is better than that of other existing algorithms such as LZW and CCITT Group 4. Keywords: Arabic textual image; compression; pattern matching; neighborhood coding; code reduction.

2023, Journal of Al-Qadisiyah for Computer Science and Mathematics

Iris research is focused on developing techniques for identifying and locating relevant biometric features, achieving accurate segmentation and efficient computation, while lending themselves to compression methods. Most iris segmentation methods are based on complex modelling of traits and characteristics, which in turn reduces the effectiveness of the system when used as a real-time system. This paper introduces a novel parameterized technique for iris segmentation. The method is based on a number of steps, starting from converting the grayscale eye image to a bit-plane representation and selecting the most significant bit planes, followed by a parameterization of the iris location, resulting in an accurate segmentation of the iris from the original image. A lossless Hexadata encoding method is then applied to the data, based on reducing each set of six data items to a single encoded value. The tested results achieved acceptable byte-saving performance for the 21 iris square images of si...
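
The bit-plane step described above can be illustrated with a short sketch; the number of planes kept is an arbitrary choice here, and the subsequent parameterization and Hexadata encoding are not reproduced:

# Sketch of bit-plane decomposition of an 8-bit grayscale eye image.
import numpy as np

def bit_planes(gray: np.ndarray):
    # Plane 7 is the most significant bit, plane 0 the least.
    return [((gray >> b) & 1).astype(np.uint8) for b in range(7, -1, -1)]

def significant_planes(gray: np.ndarray, keep: int = 3):
    return bit_planes(gray)[:keep]

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # placeholder eye image
planes = significant_planes(img, keep=3)
print(len(planes), "most significant bit planes, each of shape", planes[0].shape)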

2023, ArXiv

Recent advances in capsule endoscopy systems have introduced new methods and capabilities. The capsule endoscopy system, by observing the entire digestive tract, has significantly improved the diagnosis of gastrointestinal disorders and diseases. The system has challenges that need to be addressed, such as the need to enhance the quality of the transmitted images, low transmission frame rates, and battery lifetime. One of the important parts of a capsule endoscopy system is the image compression unit. Better compression of images increases the frame rate and hence improves the diagnostic process. In this paper, a high-precision compression algorithm with a high compression ratio is proposed. The algorithm uses the similarity between frames to compress the data more efficiently.

2023, International journal of engineering research and technology

The main aim of this project is to reduce noise in a hologram. Noise in a hologram is generally very difficult to remove because an interference pattern is recorded on a digital camera during digital processing. It is also present in the reconstruction process, which is affected by discrete quantization levels. We therefore develop an algorithm using the contourlet transform that changes the pixel values at different scales according to the requirements of each scale in the decomposition of the hologram.

2023, Lecture notes in computational vision and biomechanics

In this paper, a lossless image compression technique using prediction errors is proposed. To achieve better compression performance, a novel classifier which makes use of wavelet and Fourier descriptor features is employed. An artificial neural network (ANN) is used as the predictor, and an optimum ANN configuration is determined for each class of images. In the second stage, entropy encoding is performed on the prediction errors, which improves the compression performance further. The prediction process is made lossless by making the predicted values integers at both the compression and decompression stages. The proposed method is tested using three types of datasets, namely CLEF med 2009, COREL1k, and standard benchmarking images. It is found that the proposed method yields good compression ratio values in all these cases, and for standard images the compression ratio values achieved are higher than those obtained by known algorithms.
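
A minimal sketch of lossless coding via integer prediction errors is shown below; a simple fixed neighbour-average predictor stands in for the paper's per-class ANN predictor, and the entropy-coding stage is omitted:

# Sketch of lossless coding via integer prediction errors (fixed predictor, not the ANN).
import numpy as np

def prediction_errors(img: np.ndarray) -> np.ndarray:
    img = img.astype(np.int32)
    pred = np.zeros_like(img)
    pred[1:, 1:] = (img[:-1, 1:] + img[1:, :-1]) // 2   # mean of top and left neighbours
    pred[0, 1:] = img[0, :-1]
    pred[1:, 0] = img[:-1, 0]
    return img - pred                                    # residuals to be entropy coded

def reconstruct(errors: np.ndarray) -> np.ndarray:
    out = np.zeros_like(errors)
    h, w = errors.shape
    for r in range(h):
        for c in range(w):
            if r == 0 and c == 0:
                pred = 0
            elif r == 0:
                pred = out[0, c - 1]
            elif c == 0:
                pred = out[r - 1, 0]
            else:
                pred = (out[r - 1, c] + out[r, c - 1]) // 2
            out[r, c] = errors[r, c] + pred
    return out

img = np.random.randint(0, 256, (32, 32), dtype=np.uint8)
assert np.array_equal(reconstruct(prediction_errors(img)), img.astype(np.int32))  # exact round trip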