Application of Image Fusion for Enhancing the Quality of an Image

A Study on Eigen Faces for Removing Image Blurredness Through Image Fusion

The International Journal of Multimedia & Its Applications, 2012

Advances in technology have brought about extensive research in the field of image fusion, which remains one of the most studied challenges in face recognition. Face recognition (FR) is the process by which the brain and mind understand, interpret, and identify or verify human faces; as a biometric application, it automatically identifies or recognizes a person from a visual image of that person stored in a database. Image fusion is the combination of relevant information from two or more images into a single fused image, so that the final output image carries more information than any of the input images. The main aim of an image fusion algorithm is therefore to take redundant and complementary information from the source images and generate an output image with better visual quality. In this paper we propose a novel approach to pixel-level image fusion using PCA that removes the blurring present in two images and reconstructs a new de-blurred fused image. The proposed approach is based on the calculation of eigenfaces with Principal Component Analysis (PCA), the most widely used method for dimensionality reduction and feature extraction.
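As a point of reference for the PCA-based fusion rule mentioned above, the following Python sketch implements standard pixel-level PCA fusion of two equally sized grayscale images: the fusion weights come from the dominant eigenvector of the 2x2 covariance matrix of the flattened sources. It illustrates the general technique rather than the authors' exact eigenface pipeline.

```python
import numpy as np

def pca_fuse(img_a, img_b):
    """Pixel-level PCA fusion of two equally sized grayscale images.

    The weights come from the dominant eigenvector of the 2x2 covariance
    matrix of the flattened images (the standard PCA fusion rule); this is
    an illustrative sketch, not the paper's exact eigenface pipeline.
    """
    a = img_a.astype(np.float64).ravel()
    b = img_b.astype(np.float64).ravel()
    cov = np.cov(np.stack([a, b]))          # 2x2 covariance of the two sources
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigh: symmetric matrix
    v = np.abs(eigvecs[:, np.argmax(eigvals)])  # dominant principal component
    w = v / v.sum()                             # normalise weights to sum to 1
    fused = w[0] * img_a.astype(np.float64) + w[1] * img_b.astype(np.float64)
    return np.clip(fused, 0, 255).astype(np.uint8)
```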

Optimizing Image Fusion Using Modified Principal Component Analysis Algorithm and Adaptive Weighting Scheme

Eswar Publications, 2023

Image fusion is an important technique for combining two or more images to produce a single, high-quality image. Principal component analysis (PCA) is a commonly used method for image fusion. However, existing PCA-based image fusion algorithms have limitations such as sensitivity to noise and poor fusion quality. In this paper, we propose a modified PCA algorithm for image fusion that uses an adaptive weighting scheme to improve fusion quality. The proposed algorithm optimizes the fusion process by selecting the principal components that contain the most useful information and weighting them appropriately. Experimental results show that the proposed algorithm outperforms existing PCA-based image fusion algorithms in terms of fusion quality, sharpness, and contrast.
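The abstract does not specify the adaptive weighting scheme, so the following Python sketch is only one plausible interpretation: it assumes that weights are derived from the explained variance of the retained principal components. The function name, the variance_threshold parameter, and the weighting rule are all assumptions for illustration.

```python
import numpy as np

def adaptive_pca_fuse(images, variance_threshold=0.95):
    """Illustrative sketch only: weights are assumed proportional to the
    contribution of each source image to the principal components that
    together explain at least `variance_threshold` of the variance.

    images : list of equally sized grayscale arrays (the sources to fuse).
    """
    shape = images[0].shape
    x = np.stack([img.astype(np.float64).ravel() for img in images])  # (N, H*W)
    cov = np.cov(x)                                   # N x N covariance of sources
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]                 # sort components by variance
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    explained = np.cumsum(eigvals) / eigvals.sum()
    k = int(np.searchsorted(explained, variance_threshold)) + 1  # keep top-k components
    # Source weights: contribution of each image to the retained components,
    # scaled by the variance each component explains (assumed rule).
    w = np.abs(eigvecs[:, :k]) @ (eigvals[:k] / eigvals[:k].sum())
    w /= w.sum()
    return np.tensordot(w, x, axes=1).reshape(shape)
```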

Performance evaluation on image fusion techniques for face recognition

International Journal of Computational Vision and Robotics, 2018

In face recognition, a feature vector usually represents the salient characteristics that best describe a face image. However, these characteristics vary quite substantially when a face image is viewed from different directions. Accumulating these directional features into a single feature vector should therefore lead to superior performance. This paper addresses this issue by means of image fusion and presents a comprehensive performance analysis of different image fusion techniques for face recognition. Image fusion is performed between the original captured image and its true/partial diagonal images, with the images placed in three different ways: 1) one over the other (superimposed); 2) side by side (horizontally); 3) up and down (vertically). The empirical results on the publicly available AT&T, UMIST and FERET face databases collectively demonstrate that the superimposed combination of the original and its true diagonal image provides superior discriminant features for face recognition compared with either the original or its diagonal image alone.
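For the three placement strategies, a minimal NumPy sketch is given below. It assumes the true/partial diagonal image is already available as an input (its construction is not reproduced here), and averaging for the superimposed case is an illustrative choice.

```python
import numpy as np

def fuse_placements(original, diagonal):
    """Three placement strategies from the paper, sketched with NumPy.

    `diagonal` is assumed to be precomputed (the paper's true/partial
    diagonal image of the same size as `original`).
    """
    superimposed = (original.astype(np.float64) + diagonal.astype(np.float64)) / 2.0
    side_by_side = np.hstack([original, diagonal])   # horizontal placement
    up_and_down = np.vstack([original, diagonal])    # vertical placement
    return superimposed.astype(original.dtype), side_by_side, up_and_down
```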

An Approach for Image Fusion using PCA and Genetic Algorithm

International Journal of Computer Applications, 2016

The practice of combining multiple images to obtain a single, well-developed image is well established, and various fusion methods have been proposed in the literature. The present paper addresses image fusion using PCA and a Genetic Algorithm; images of equal size are considered for experimentation. To overcome the problems of conventional techniques, the Genetic Algorithm is used in combination with PCA (Principal Component Analysis). In image fusion, a Genetic Algorithm can be applied whenever parameter optimization is required; here it is used to optimize the weight values. The parameters used to measure the quality of the fusion technique are Mean Square Error, Entropy, Mean, Bit Error Rate, and Peak Signal to Noise Ratio. The experiments show that this method works well and that the quality of the output image is far better than with previous methods.
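As an illustration of GA-based weight optimization, the toy sketch below evolves a single fusion weight w so that the fused image w·A + (1−w)·B maximizes entropy. The population size, selection, crossover, mutation, and fitness choices are assumptions, not the paper's exact algorithm.

```python
import numpy as np

def entropy(img):
    """Shannon entropy of an 8-bit grayscale image (used as fitness)."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256), density=True)
    hist = hist[hist > 0]
    return -np.sum(hist * np.log2(hist))

def ga_fusion_weight(img_a, img_b, pop_size=20, generations=30, rng=None):
    """Toy genetic algorithm searching a fusion weight w in [0, 1] so that
    fused = w*A + (1-w)*B maximises entropy (illustrative sketch only)."""
    rng = np.random.default_rng() if rng is None else rng
    a = img_a.astype(np.float64)
    b = img_b.astype(np.float64)
    pop = rng.random(pop_size)                                   # initial population of weights
    for _ in range(generations):
        fitness = np.array([entropy((w * a + (1 - w) * b).astype(np.uint8)) for w in pop])
        parents = pop[np.argsort(fitness)[-pop_size // 2:]]      # selection: keep best half
        children = (rng.choice(parents, pop_size // 2) +
                    rng.choice(parents, pop_size // 2)) / 2.0    # crossover: averaging
        children += rng.normal(0.0, 0.05, children.size)         # mutation
        pop = np.clip(np.concatenate([parents, children]), 0.0, 1.0)
    scores = [entropy((w * a + (1 - w) * b).astype(np.uint8)) for w in pop]
    return pop[int(np.argmax(scores))]
```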

Image fusion based on principal component analysis and high-pass filter

2009 International Conference on Computer Engineering & Systems, 2009

Several commercial earth observation satellites carry dual-resolution sensors, which provide a high spatial resolution panchromatic image and a low spatial resolution multispectral image. Image fusion techniques are therefore useful for integrating a high spectral resolution image with a high spatial resolution image, to produce a fused image with both high spectral and high spatial resolution. Some image fusion methods, such as IHS, PC and BT, provide visually superior high-resolution multispectral images but ignore the requirement of high-quality synthesis of spectral information. High-quality synthesis of spectral information is very important for most remote sensing applications based on spectral signatures, such as lithology, soil and vegetation analysis. Another family of image fusion techniques, such as HPF, operates by injecting high-frequency components from the high spatial resolution panchromatic image into the multispectral image; this family of methods introduces less spectral distortion. In this paper we propose to integrate the two families, PCA and HPF, to provide a pan-sharpened image with superior spatial resolution and less spectral distortion. The experiments show that the proposed fusion method retains the spectral characteristics of the multispectral image while improving the spatial resolution of the fused image.
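A hedged sketch of PCA + high-pass-filter pan sharpening is shown below: the multispectral bands are transformed with PCA, the high-frequency detail of the panchromatic image (obtained with a simple box filter) is injected into the first principal component, and the transform is inverted. The filter size and the unit detail gain are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def pca_hpf_pansharpen(ms, pan, box=5):
    """Sketch of PCA + high-pass-filter pan sharpening.

    ms  : (H, W, B) multispectral image already resampled to the pan grid.
    pan : (H, W) panchromatic image.
    """
    h, w, bands = ms.shape
    x = ms.reshape(-1, bands).astype(np.float64)
    mean = x.mean(axis=0)
    xc = x - mean
    cov = np.cov(xc, rowvar=False)                        # band covariance
    eigvals, eigvecs = np.linalg.eigh(cov)
    eigvecs = eigvecs[:, np.argsort(eigvals)[::-1]]       # sort by variance
    pcs = xc @ eigvecs                                    # forward PCA
    detail = pan.astype(np.float64) - uniform_filter(pan.astype(np.float64), box)
    pcs[:, 0] += detail.ravel()                           # inject high-pass detail into PC1
    return (pcs @ eigvecs.T + mean).reshape(h, w, bands)  # inverse PCA
```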

An Integrated Approach for Image Fusion using PCA and DCT

International Journal of Engineering Research and Technology (IJERT), 2013

https://www.ijert.org/an-integrated-approach-for-image-fusion-using-pca-and-dct
https://www.ijert.org/research/an-integrated-approach-for-image-fusion-using-pca-and-dct-IJERTV2IS110280.pdf

Image fusion amalgamates the information from several images of one scene to obtain an enlightening image that is more appropriate for human visual perception or for additional vision processing. Image quality is closely related to image focus. In some images it is not possible to obtain a clear focus in all regions simultaneously, so image fusion is used to combine pictures with different focus settings into one image containing all of the best-focused regions. The objective of this paper is to propose a new integrated technique for image fusion. The proposed approach combines PCA and DCT.
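The abstract does not detail how PCA and DCT are combined, so the sketch below is only an assumed pipeline for illustration: each source is transformed with a 2-D DCT, the coefficient sets are fused with PCA-derived weights, and the result is inverted back to the spatial domain.

```python
import numpy as np
from scipy.fft import dctn, idctn

def pca_dct_fuse(img_a, img_b):
    """Assumed PCA + DCT fusion pipeline (not necessarily the paper's):
    fuse the 2-D DCT coefficients of the two sources with weights from
    the dominant eigenvector of their covariance, then invert the DCT."""
    ca = dctn(img_a.astype(np.float64), norm='ortho')
    cb = dctn(img_b.astype(np.float64), norm='ortho')
    cov = np.cov(np.stack([ca.ravel(), cb.ravel()]))
    eigvals, eigvecs = np.linalg.eigh(cov)
    v = np.abs(eigvecs[:, np.argmax(eigvals)])
    w = v / v.sum()                                       # PCA-derived fusion weights
    fused_coeffs = w[0] * ca + w[1] * cb
    return idctn(fused_coeffs, norm='ortho')              # back to the spatial domain
```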

Theoretical Study of Image Fusion Techniques: A Review

The goal of image fusion is to combine relevant information from two or more images of the same view into a single image. The result is a new fused image that is more suitable for human and machine perception and for further image-processing tasks such as segmentation, feature extraction and object recognition. In this paper, image fusion techniques based on PCA and the wavelet family are described. Principal component analysis (PCA) is a well-known scheme for feature extraction and dimensionality reduction. In the DCT, the low-frequency region of the image has large coefficients, so it has very good energy-compaction properties. In the DWT, the image is divided into low and high sub-bands, which are fused using various fusion methods; the final fused image is obtained by applying the inverse wavelet transform to the fused coefficients of the low and high sub-bands. The curvelet transform, in turn, gives smooth detection of curved edges. These techniques operate mainly in two domains, the spatial domain and the transform domain, and fusion can be performed at three different processing levels, pixel level, feature level and decision level, according to the stage at which the fusion takes place; the choice depends on the required application.
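The DWT fusion procedure described above can be sketched with PyWavelets as follows; the wavelet ('db2') and the fusion rules (averaging the low sub-band, maximum absolute value for the high sub-bands) are illustrative choices.

```python
import numpy as np
import pywt

def dwt_fuse(img_a, img_b, wavelet='db2'):
    """Single-level DWT fusion: average the approximation (low) sub-band,
    fuse detail (high) sub-bands by a maximum-absolute-value rule, then
    apply the inverse wavelet transform."""
    ca_a, (ch_a, cv_a, cd_a) = pywt.dwt2(img_a.astype(np.float64), wavelet)
    ca_b, (ch_b, cv_b, cd_b) = pywt.dwt2(img_b.astype(np.float64), wavelet)
    ca = (ca_a + ca_b) / 2.0                                  # average approximation band
    fuse_max = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)
    details = tuple(fuse_max(x, y) for x, y in
                    [(ch_a, ch_b), (cv_a, cv_b), (cd_a, cd_b)])
    return pywt.idwt2((ca, details), wavelet)                 # back to the spatial domain
```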

Image pixel fusion for human face recognition

2010

In this paper we present a technique for the fusion of optical and thermal face images based on a pixel-level image fusion approach. Among the several factors that affect face recognition performance with visual images, illumination change is a significant one that needs to be addressed. Thermal images handle illumination conditions better but are not very consistent in capturing the texture details of faces. Other factors such as sunglasses, beards and moustaches also play an active role in adding complications to the recognition process.

Next Level of Data Fusion for Human Face Recognition

2011

This paper demonstrates two fusion techniques applied at two different levels of a human face recognition process: data fusion at a lower level and decision fusion towards the end of the recognition process. First, data fusion is applied to visual and corresponding thermal images to generate a fused image. The data fusion is implemented in the wavelet domain after decomposing the images with Daubechies wavelet coefficients (db2); during this step the maxima of the approximation and of the three detail coefficients are merged together. Principal Component Analysis (PCA) is then applied to the fused coefficients, and two different artificial neural networks, a Multilayer Perceptron (MLP) and a Radial Basis Function (RBF) network, are used separately to classify the images. Finally, for decision fusion, the decisions from both classifiers are combined using a Bayesian formulation. For the experiments, the IRIS thermal/visible Face Database has been used. Experimental results show that the multiple-classifier system with decision fusion performs better than a single-classifier system.
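For the decision-fusion step, the sketch below combines the per-class posteriors of the two classifiers with a Bayesian product rule under a conditional-independence assumption; the exact Bayesian formulation used in the paper may differ, and the function and argument names are hypothetical.

```python
import numpy as np

def bayesian_decision_fusion(p_mlp, p_rbf, prior=None):
    """Combine per-class posteriors from the MLP and RBF classifiers with a
    Bayesian product rule (conditional independence assumed), renormalise,
    and return the winning class index with the fused distribution."""
    p_mlp = np.asarray(p_mlp, dtype=np.float64)
    p_rbf = np.asarray(p_rbf, dtype=np.float64)
    prior = np.ones_like(p_mlp) / p_mlp.size if prior is None else np.asarray(prior)
    combined = (p_mlp * p_rbf) / np.clip(prior, 1e-12, None)   # product rule
    combined /= combined.sum()                                  # normalise to a distribution
    return int(np.argmax(combined)), combined
```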

Image Fusion using Hybrid Technique (PCA + SWT)

Image fusion aims to reduce uncertainty and minimize redundancy. It is the process of combining the relevant information from a set of images into a single image, where the resultant fused image is more informative and complete than any of the input images. To date, image fusion techniques have largely been DWT-based or pixel-based. These conventional techniques are not very efficient and do not produce the expected results, because edge preservation, spatial resolution and shift invariance are factors that cannot be neglected during image fusion. This paper discusses the implementation of two categories of image fusion: the stationary wavelet transform (SWT) and principal component analysis (PCA). The stationary wavelet transform (SWT) is a wavelet transform algorithm designed to overcome the lack of translation invariance of the discrete wavelet transform (DWT), whereas principal component analysis (PCA) is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components. To overcome the disadvantages of earlier image fusion techniques, a new hybrid technique is proposed that combines the SWT and PCA. This hybrid technique is intended to produce a more efficient, higher-quality fused image with preserved edges and improved spatial resolution and shift invariance, yielding better fusion results than images fused with conventional techniques.
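A sketch of the SWT + PCA hybrid is given below, under the assumption that the approximation bands are fused with PCA-derived weights and the detail bands with a maximum-absolute-value rule (the abstract does not fix these rules); the wavelet choice and single decomposition level are also illustrative.

```python
import numpy as np
import pywt

def pca_weights(a, b):
    """Fusion weights from the dominant eigenvector of the 2x2 covariance
    of the two coefficient sets (standard PCA fusion rule)."""
    cov = np.cov(np.stack([a.ravel(), b.ravel()]))
    eigvals, eigvecs = np.linalg.eigh(cov)
    v = np.abs(eigvecs[:, np.argmax(eigvals)])
    return v / v.sum()

def swt_pca_fuse(img_a, img_b, wavelet='db2'):
    """Hybrid SWT + PCA fusion sketch: single-level stationary wavelet
    transform of each image, PCA-weighted fusion of the approximation
    bands, max-rule fusion of the detail bands, then the inverse SWT.
    Image sides are assumed even (a swt2 requirement for level=1)."""
    (ca_a, (ch_a, cv_a, cd_a)), = pywt.swt2(img_a.astype(np.float64), wavelet, level=1)
    (ca_b, (ch_b, cv_b, cd_b)), = pywt.swt2(img_b.astype(np.float64), wavelet, level=1)
    w = pca_weights(ca_a, ca_b)
    ca = w[0] * ca_a + w[1] * ca_b                            # PCA-weighted low band
    fuse_max = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)
    details = (fuse_max(ch_a, ch_b), fuse_max(cv_a, cv_b), fuse_max(cd_a, cd_b))
    return pywt.iswt2([(ca, details)], wavelet)               # inverse stationary transform
```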