A Study on Eigen Faces for Removing Image Blurredness Through Image Fusion

Application of Image Fusion for Enhancing the Quality of an Image

Computer Science & Information Technology (CS & IT), 2012

Advances in technology have driven extensive research in the field of image fusion, one of the most studied challenges in face recognition. Face Recognition (FR) is the process by which the brain and mind understand, interpret, and identify or verify human faces. Image fusion combines two or more source images that vary in resolution, instrument modality, or capture technique into a single composite representation. The source images are complementary in many ways, with no single input image being an adequate representation of the scene. The goal of an image fusion algorithm is therefore to integrate the redundant and complementary information from the source images into a new image that describes the scene better for human or machine perception. In this paper we propose a novel pixel-level image fusion approach using PCA that removes the blur present in two images and reconstructs a new, de-blurred fused image. The proposed approach is based on the calculation of eigenfaces with Principal Component Analysis (PCA), which has been the most widely used method for dimensionality reduction and feature extraction.
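The abstract does not spell out how PCA yields the fusion weights, but a common pixel-level scheme takes the dominant eigenvector of the 2x2 covariance matrix of the two flattened source images as per-source weights. A minimal sketch of that standard scheme (an assumption, not necessarily the paper's exact algorithm):

```python
import numpy as np

def pca_fuse(img1, img2):
    """Fuse two equal-size grayscale images with PCA-derived weights.

    The 2x2 covariance matrix of the flattened sources is computed,
    and its dominant eigenvector (normalised to sum to 1) gives the
    contribution of each source to the fused image."""
    data = np.vstack([img1.ravel(), img2.ravel()]).astype(float)
    cov = np.cov(data)                    # 2x2 covariance of the sources
    vals, vecs = np.linalg.eigh(cov)      # eigh handles symmetric matrices
    v = np.abs(vecs[:, np.argmax(vals)])  # dominant eigenvector
    w = v / v.sum()                       # weights summing to 1
    return w[0] * img1 + w[1] * img2

# Toy example: two degraded views of the same synthetic scene
rng = np.random.default_rng(0)
scene = rng.random((8, 8))
fused = pca_fuse(scene + 0.05, scene - 0.05)
```

Because the weights sum to one, the fused image is a convex combination of the sources; in the symmetric toy example above the weights come out equal and the fusion cancels the opposite offsets.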

Performance evaluation on image fusion techniques for face recognition

International Journal of Computational Vision and Robotics, 2018

In face recognition, a feature vector usually represents the salient characteristics that best describe a face image. However, these characteristics vary quite substantially when a face image is viewed from different directions. Therefore, accumulating these directional features into a single feature vector can lead to superior performance. This paper addresses this issue by means of image fusion and presents a comprehensive performance analysis of different image fusion techniques for face recognition. Image fusion is performed between the original captured image and its true/partial diagonal images. The fusion is carried out in three different ways, by placing the images: 1) one-over-other (superimposed); 2) side-by-side (horizontally); 3) up-and-down (vertically). The empirical results on the publicly available AT&T, UMIST and FERET face databases collectively demonstrate that the superimposed image of the original and its true diagonal image provides superior discriminant features for face recognition compared to either the original or its diagonal image alone.
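The three layouts above can be sketched directly; note that the paper's "true diagonal image" is a specific construction it defines, so the transpose used below is only a simple stand-in for illustration:

```python
import numpy as np

def fuse_three_ways(img):
    """Build the three fusion layouts from an image and its diagonal view."""
    diag = img.T  # transpose as a simple stand-in for the diagonal image
    return {
        "superimposed": (img + diag) / 2.0,    # one-over-other (pixel average)
        "horizontal": np.hstack([img, diag]),  # side-by-side
        "vertical": np.vstack([img, diag]),    # up-and-down
    }

face = np.arange(16, dtype=float).reshape(4, 4)
layouts = fuse_three_ways(face)
```

Only the superimposed layout keeps the original image size; the other two double one dimension, which changes the length of the feature vector fed to the recogniser.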

Optimizing Image Fusion Using Modified Principal Component Analysis Algorithm and Adaptive Weighting Scheme

Eswar Publications, 2023

Image fusion is an important technique for combining two or more images to produce a single, high-quality image. Principal component analysis (PCA) is a commonly used method for image fusion. However, existing PCA-based image fusion algorithms have some limitations, such as sensitivity to noise and poor fusion quality. In this paper, we propose a modified PCA algorithm for image fusion that uses an adaptive weighting scheme to improve the fusion quality. The proposed algorithm optimizes the fusion process by selecting the principal components that contain the most useful information and weighting them appropriately. Experimental results show that the proposed algorithm outperforms existing PCA-based image fusion algorithms in terms of fusion quality, sharpness, and contrast.
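The abstract does not describe its adaptive weighting scheme in detail. One common way to make fusion weights adaptive is to drive them from local image statistics, so the locally sharper source dominates; the sketch below uses 3x3 neighbourhood variance for this and is a hypothetical illustration, not the paper's algorithm:

```python
import numpy as np

def adaptive_weight_fuse(img1, img2, eps=1e-8):
    """Per-pixel adaptive fusion: weight each source by the variance of
    its 3x3 neighbourhood, so the sharper source dominates locally."""
    def local_var(img):
        pad = np.pad(np.asarray(img, float), 1, mode="edge")
        win = np.lib.stride_tricks.sliding_window_view(pad, (3, 3))
        return win.var(axis=(-1, -2))
    v1, v2 = local_var(img1), local_var(img2)
    w1 = (v1 + eps) / (v1 + v2 + 2 * eps)   # per-pixel weight in (0, 1)
    return w1 * np.asarray(img1, float) + (1.0 - w1) * np.asarray(img2, float)
```

Since the per-pixel weights lie in (0, 1) and sum to one, each fused pixel stays between the corresponding source pixels, which avoids overshoot artifacts.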

Image pixel fusion for human face recognition

2010

In this paper we present a technique for the fusion of optical and thermal face images based on an image pixel fusion approach. Of the several factors that affect face recognition performance for visual images, illumination change is a significant one that needs to be addressed. Thermal images handle illumination conditions better but are not very consistent in capturing the texture details of faces. Other factors, such as sunglasses, beards, and moustaches, also play an active role in adding complications to the recognition process.

An image fusion technique for efficient face recognition

2015 IEEE 2nd International Conference on Recent Trends in Information Systems (ReTIS), 2015

An image-level fusion technique combines different forms of images so that the combined image may contain more relevant information than the individual ones. An image fusion technique for face recognition is presented in this paper. Due to the topological structure of human face images, one can easily identify a face image by looking along the horizontal-vertical and also forward-backward directions. This indicates that there is enough relevant information available along those directions. To exploit this, we generate the diagonal face image from the original face image and then combine them to get the fused image. Finally, we extract discriminant features from the fused images, which integrate the underlying discriminant information along those directions. These extracted feature vectors are applied to Radial Basis Function Neural Networks (RBF-NN) for classification and recognition. Experiments on the AT&T and UMIST face databases show that the proposed method can obtain higher accuracy than some of the subspace-based methods.

Fusion of PCA and LDA Based Face Recognition System

Face recognition is an important biometric because of its potential applications in many fields, such as access control, surveillance, and human-computer interfaces. In this paper, a system that fuses the outputs of a PCA-based and an LDA-based system is presented. The fusion is based on a set of rules. Both the PCA and LDA based systems were trained using the same training data and tuned to give the best equal error rate. It was found that fusing the two systems increases the recognition rate by 8% and 3% over the performance of the PCA and LDA systems respectively.
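The abstract does not specify its rule set. A common baseline for fusing the outputs of two match-score-producing subsystems is the normalised sum rule, sketched below as an assumption rather than the paper's method:

```python
import numpy as np

def fuse_scores(pca_scores, lda_scores):
    """Hypothetical sum-rule fusion of match scores from two subsystems.

    Scores are min-max normalised to [0, 1] before being averaged, so
    neither subsystem dominates purely because of its score scale."""
    def norm(s):
        s = np.asarray(s, dtype=float)
        return (s - s.min()) / (s.max() - s.min() + 1e-12)
    return (norm(pca_scores) + norm(lda_scores)) / 2.0

# Example: per-gallery-identity scores from the two subsystems
fused = fuse_scores([0.2, 0.9, 0.4], [10.0, 80.0, 30.0])
best_match = int(np.argmax(fused))
```

The claimed identity is then the gallery entry with the highest fused score; rule-based variants (max rule, product rule, per-rule thresholds) slot into the same structure.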

An Approach for Image Fusion using PCA and Genetic Algorithm

International Journal of Computer Applications, 2016

The practice of merging multiple images to obtain a single, well-developed image is well established, and various fusion methods have been proposed in the literature. This paper addresses image fusion using PCA and a Genetic Algorithm. Images of equal size are used for experimentation. To overcome the problems of conventional techniques, a Genetic Algorithm can be used in combination with PCA (Principal Component Analysis). In image fusion, a Genetic Algorithm can be applied wherever parameter optimization is required; here it is used to optimize the weight values. The parameters used to measure the quality of an image fusion technique are Mean Square Error, Entropy, Mean, Bit Error Rate, and Peak Signal-to-Noise Ratio. The experiments show that this method works well and that the quality of the output image is far better than that of previous methods.
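Several of the quality metrics listed above have standard definitions that are easy to state in code; a small sketch of three of them (MSE, PSNR, and histogram entropy), under the usual 8-bit peak assumption:

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images."""
    return float(np.mean((np.asarray(a, float) - np.asarray(b, float)) ** 2))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB (infinite for identical images)."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)

def entropy(img, bins=256):
    """Shannon entropy of the image histogram, in bits."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())
```

A GA weight-optimisation loop would evaluate candidate weight vectors with metrics like these as its fitness function, keeping the highest-scoring candidates each generation.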

A REVIEW ON FACE IDENTIFICATIONS SYSTEM USING FUSION METHOD

Biometrics is used in the process of authenticating a person by verifying or identifying that a user requesting a network resource is who they claim to be, and vice versa. It relies on human traits associated with the person, such as fingerprint structure or facial details. By comparing stored data with incoming data, we can verify the identity of a particular person. There are many types of biometric systems, such as fingerprint recognition, face detection and recognition, and iris recognition; these traits are used for human identification in surveillance systems and criminal identification. The face is a complex multidimensional structure and needs good computing techniques for recognition. It is our primary and first focus of attention in social life, playing an important role in the identity of an individual. In this paper, we propose fusion techniques for a face identification system and compare them with a Gaussian model and a 2D histogram. We also analyze the performance of the system using various performance parameters such as recognition rate, accuracy, true positive rate, and false negative rate.

Next Level of Data Fusion for Human Face Recognition

2011

This paper demonstrates two different fusion techniques at two different levels of a human face recognition process. The first is data fusion at the lower level, and the second is decision fusion towards the end of the recognition process. First, data fusion is applied to visual and corresponding thermal images to generate a fused image. Data fusion is implemented in the wavelet domain after decomposing the images with Daubechies wavelet coefficients (db2); during fusion, the maximum of the approximation and the three detail coefficients are merged together. Principal Component Analysis (PCA) is then applied to the fused coefficients, and two different artificial neural networks, a Multilayer Perceptron (MLP) and a Radial Basis Function (RBF) network, are used separately to classify the images. Finally, for decision fusion, the decisions from both classifiers are combined using a Bayesian formulation. The IRIS thermal/visible Face Database has been used for the experiments. Experimental results show that the multiple-classifier system with decision fusion outperforms the single-classifier system.
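The wavelet-domain maximum rule can be sketched compactly. The paper uses db2; the sketch below substitutes a single-level Haar transform (a simpler relative that needs no wavelet library) purely to illustrate the coefficient-maximum fusion step:

```python
import numpy as np

def haar2(img):
    """Single-level 2-D Haar decomposition into (a, h, v, d) sub-bands."""
    p00, p10 = img[0::2, 0::2], img[1::2, 0::2]
    p01, p11 = img[0::2, 1::2], img[1::2, 1::2]
    a = (p00 + p10 + p01 + p11) / 4.0   # approximation
    h = (p00 - p10 + p01 - p11) / 4.0   # horizontal detail
    v = (p00 + p10 - p01 - p11) / 4.0   # vertical detail
    d = (p00 - p10 - p01 + p11) / 4.0   # diagonal detail
    return a, h, v, d

def ihaar2(a, h, v, d):
    """Exact inverse of haar2."""
    out = np.empty((a.shape[0] * 2, a.shape[1] * 2))
    out[0::2, 0::2] = a + h + v + d
    out[1::2, 0::2] = a - h + v - d
    out[0::2, 1::2] = a + h - v - d
    out[1::2, 1::2] = a - h - v + d
    return out

def wavelet_fuse(img1, img2):
    """Keep, per coefficient, whichever source has the larger magnitude."""
    fused = [np.where(np.abs(x) >= np.abs(y), x, y)
             for x, y in zip(haar2(img1), haar2(img2))]
    return ihaar2(*fused)
```

With a real db2 decomposition (e.g. PyWavelets' `pywt.dwt2`/`pywt.idwt2`), the same maximum-selection rule is applied to each sub-band before inverting the transform.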

IRJET- A review on Improved Face Recognition using Data Fusion

IRJET, 2021

This paper reviews how data fusion techniques can be used to enhance the performance of a face recognition system, studying several fusion approaches: a feature fusion approach and a decision fusion approach. In feature fusion, three feature vectors are generated using principal component analysis (PCA), the discrete cosine transform (DCT), and local binary patterns (LBP); the feature vector from each extraction technique is then applied to a similarity-measure classifier. In the decision fusion approach, the feature vectors generated by the three algorithms are fed to classifiers separately and the decisions are combined using a majority voting scheme. The proposed strategy was tested using face images with diverse facial expressions and conditions, obtained from the ORL and FRAV2D databases as well as camera-captured images.
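The majority-voting step in the decision fusion approach is simple enough to sketch directly; with three classifiers, any label predicted by at least two of them wins:

```python
from collections import Counter

def majority_vote(decisions):
    """Return the identity label predicted by the most classifiers.

    `decisions` holds one predicted label per classifier, e.g. from the
    PCA-, DCT- and LBP-based pipelines described above."""
    label, _count = Counter(decisions).most_common(1)[0]
    return label

print(majority_vote(["alice", "bob", "alice"]))  # prints "alice"
```

With three voters a three-way tie is possible; a practical system breaks ties with a fallback rule, such as trusting the classifier with the best validation accuracy.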