Face Verification Research Papers - Academia.edu

An overview of selected topics in face recognition is first presented in this chapter. The BioSecure 2D-face Benchmarking Framework is also described, composed of open-source software, publicly available databases and protocols. Three methods for 2D-face ...

Face recognition plays a major role in biometrics. Feature selection is a major issue in face recognition. This paper presents a survey of face recognition. There are many methods to extract face features. In some advanced methods, features can be extracted quickly, in a single scan through the raw image, and lie in a lower-dimensional space while still retaining facial information efficiently. The methods used to extract features are robust to low-resolution images. The method is a trainable system for selecting face features. After the feature selection procedure, the next step is matching for face recognition. Advanced methods increase the recognition accuracy.

We propose a solution to handle this problem. We extend the paper in which Contextual Generative Adversarial Networks are used to generate a band of aged images from a single image. The images of different age groups will be generated from one single picture but with the same context. We use the resulting images for face validation as a photo verification system. Our algorithm uses the band of generated images of different age groups and matches it against the current face of the person. If the match score exceeds a threshold, the photo identity is validated; otherwise, a warning is triggered. This system will play an important role in places where there is a need for higher security and better ways of validation.
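
The thresholded matching step described above could be sketched as follows; the embed() function, the validate() helper and the 0.5 threshold are hypothetical stand-ins (any pretrained face-embedding model would do) rather than details taken from the paper.

```python
# Hypothetical sketch: verify a probe face against a band of age-progressed images.
# embed() is assumed to map an image to a feature vector; the threshold is illustrative.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def validate(probe_img, aged_band_imgs, embed, threshold=0.5):
    probe = embed(probe_img)
    scores = [cosine(probe, embed(img)) for img in aged_band_imgs]
    best = max(scores)
    # Accept the photo identity if the best match in the age band clears the threshold.
    return ("validated", best) if best >= threshold else ("warning", best)
```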

Recently, visual surveillance systems have gained huge attention from the research community due to their significant impact on monitoring applications. Several techniques have been developed that are based on still images and do not provide an efficient solution for real-time applications. Hence, video-based face recognition is considered a tedious task. Recently, deep learning based schemes have been widely adopted for video face recognition, but these techniques suffer from well-known challenges such as pose and illumination variation. Hence, we present a Convolutional Neural Network (CNN) based approach for video face recognition. In this work, we introduce a CNN based scheme which uses feature extraction and feature embedding modules along with the GoogleNet architecture to improve the learning of the CNN. We incorporate a histogram equalization-based image enhancement technique to improve the quality of video frames. The proposed approach is implemented using Python 3.7. The experimental analysis shows that the proposed approach achieves an accuracy of 98.55% and an AUC of 99.10% on open-source datasets, whereas for real-time scenarios it achieves an accuracy of 99.12% without occlusion and a classification accuracy of 98.87% with occlusion.
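
A minimal sketch of the histogram-equalization enhancement step mentioned above, applied per video frame before face detection and CNN feature extraction; OpenCV is assumed, and the GoogleNet-based feature extraction and embedding modules of the paper are not reproduced.

```python
# Per-frame histogram equalization of the luminance channel (avoids distorting colors).
import cv2

def enhance_frame(frame_bgr):
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    y, cr, cb = cv2.split(ycrcb)
    y = cv2.equalizeHist(y)                       # equalize luminance only
    return cv2.cvtColor(cv2.merge([y, cr, cb]), cv2.COLOR_YCrCb2BGR)

cap = cv2.VideoCapture("input_video.mp4")         # hypothetical input path
while True:
    ok, frame = cap.read()
    if not ok:
        break
    enhanced = enhance_frame(frame)
    # the enhanced frame would then go to the face detector / CNN pipeline
cap.release()
```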

We propose a subspace learning algorithm for face recognition by directly optimizing recognition performance scores. Our approach is motivated by the following observations: 1) Different face recognition tasks (i.e., face identification and verification) have different performance metrics, which implies that there exist distinct subspaces that optimize these scores, respectively. Most prior work focused on optimizing various discriminative or locality criteria and neglected such distinctions. 2) As the gallery (target) and the probe (query) data are collected in different settings in many real-world applications, there can exist consistent appearance incoherences between the gallery and the probe data for the same subject. Knowledge regarding these incoherences can be used to guide the algorithm design, resulting in performance gains. Prior efforts have not focused on these facts. In this paper, we rigorously formulate performance scores for both the face identification and the face verification tasks, provide a theoretical analysis of how the optimal subspaces for the two tasks are related, and derive gradient descent algorithms for optimizing these subspaces. Our extensive experiments on a number of public databases and a real-world face database demonstrate that our algorithm can improve the performance of given subspace-based face recognition algorithms targeted at a specific face recognition task.
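
For reference, common formulations of the two performance scores the abstract distinguishes are given below; the paper's exact definitions may differ, and s(g, p) denotes a similarity score between a gallery item g and a probe p.

```latex
% Verification: true-accept and false-accept rates at threshold \tau over genuine/impostor pairs
\mathrm{TAR}(\tau) = \frac{\lvert\{(g,p)\ \text{genuine} : s(g,p) \ge \tau\}\rvert}{\lvert\{\text{genuine pairs}\}\rvert},
\qquad
\mathrm{FAR}(\tau) = \frac{\lvert\{(g,p)\ \text{impostor} : s(g,p) \ge \tau\}\rvert}{\lvert\{\text{impostor pairs}\}\rvert}

% Identification: rank-1 recognition rate of probes P matched against the gallery G
\mathrm{Rank\text{-}1} = \frac{1}{\lvert P\rvert}\sum_{p \in P}
\mathbf{1}\!\left[\operatorname*{arg\,max}_{g \in G} s(g,p)\ \text{has the same identity as}\ p\right]
```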

The Kinship Face in the Wild data sets, recently published in TPAMI, are currently used as a benchmark for the evaluation of kinship verification algorithms. We recommend that these data sets no longer be used in kinship verification research unless there is a compelling reason that takes into account the nature of the images. We note that most of the image kinship pairs are cropped from the same photographs. Exploiting this cropping information, competitive but biased performance can be obtained using a simple scoring approach that takes into account only the nature of the image pairs rather than any features related to kinship. To illustrate our point, we provide classification results obtained with a simple scoring method based on the image similarity of the two images of a kinship pair. Using only the distance between the chrominance averages of the images in the Lab color space, without any training or any kin-specific features, we achieve performance comparable to state-of-the-art methods. We provide the source code to prove the validity of our claims and ensure the repeatability of our experiments.
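
The simple pair-scoring baseline described above (distance between the chrominance averages of the two images in the Lab color space) could be sketched roughly as follows; OpenCV is assumed here, and the authors' released source code may differ in detail.

```python
# Score a kinship pair by the distance between the mean a*/b* (chrominance) values of the two images.
import cv2
import numpy as np

def chroma_mean(path):
    bgr = cv2.imread(path)
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2Lab).astype(np.float32)
    return lab[:, :, 1:].reshape(-1, 2).mean(axis=0)   # mean of a* and b* channels

def pair_score(path1, path2):
    # Smaller distance -> the two crops more likely come from the same photograph.
    return float(np.linalg.norm(chroma_mean(path1) - chroma_mean(path2)))
```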

In this paper, we present a convolutional neural network (CNN) approach for the face verification task. We propose a "Siamese" architecture of two CNNs, with each CNN reduced to only four layers by fusing convolutional and subsampling layers. Network training is performed using the stochastic gradient descent algorithm with an annealed global learning rate. The generalization ability of the network is investigated via unique pairing of face images, and testing is done on the AT&T face database. Experimental work shows that the proposed CNN system can classify a pair of 46×46 pixel face images in 0.6 milliseconds, which is significantly faster than an equivalent network architecture with a cascade of convolutional and subsampling layers. The verification accuracy achieved is 3.33% EER (equal error rate). Learning converges within 20 epochs, and the proposed technique can verify a test subject unseen in training. This work shows the viability of the "Siamese" CNN for face verification applications, and further improvements to the architecture are under development to enhance its performance.
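
A hedged sketch of a Siamese CNN of this general kind is shown below in PyTorch; the original work predates PyTorch, the layer sizes are illustrative rather than the paper's exact architecture, and the fusing of convolutional and subsampling layers is approximated with strided convolutions.

```python
import torch
import torch.nn as nn

class Branch(nn.Module):
    """Shared feature extractor for one grayscale 46x46 face image (illustrative sizes)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=5, stride=2), nn.Tanh(),   # conv + subsampling fused
            nn.Conv2d(8, 16, kernel_size=5, stride=2), nn.Tanh(),
        )
        self.fc = nn.Linear(16 * 9 * 9, 50)   # 46x46 input -> 9x9 feature maps

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

class Siamese(nn.Module):
    def __init__(self):
        super().__init__()
        self.branch = Branch()                 # weights shared between both inputs

    def forward(self, x1, x2):
        # Dissimilarity score: small for matching pairs, large for non-matching pairs.
        return torch.norm(self.branch(x1) - self.branch(x2), dim=1)
```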

A GMM based audio-visual speaker verification system is described, and an Active Appearance Model with a linear speaker transformation system is used to evaluate the robustness of the verification. An Active Appearance Model (AAM) is used to automatically locate and track a speaker's face in a video recording. A Gaussian Mixture Model (GMM) based classifier (BECARS) is used for face verification. GMM training and testing are performed on DCT-based features extracted from the detected faces. On the audio side, speech features are extracted and used for speaker verification with the GMM based classifier. Fusion of the audio and video modalities for audio-visual speaker verification is compared with the face verification and speaker verification systems. To improve the robustness of the multimodal biometric identity verification system, an audio-visual imposture system is envisioned. It consists of an automatic voice transformation technique that an impostor may use to assume the identity of an authorized client. Features of the transformed voice are then combined with the corresponding appearance features and fed into the GMM based system BECARS for training. An attempt is made to increase the acceptance rate of the impostor and to analyze the robustness of the verification system. Experiments are being conducted on the BANCA database, with the prospect of experimenting on the newly developed PDAtabase created within the scope of the SecurePhone project.
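
Score-level fusion of the face and speech experts could be sketched as below; the log-likelihood ratios are assumed to come from GMM-based classifiers (BECARS in the paper), and the fusion weight and decision threshold are illustrative values, not the paper's.

```python
# Weighted-sum fusion of two expert scores followed by an accept/reject decision.
def fuse_and_decide(face_llr, speech_llr, w_face=0.5, threshold=0.0):
    fused = w_face * face_llr + (1.0 - w_face) * speech_llr
    return fused >= threshold          # True -> accept the claimed identity
```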

The main theme of this paper is to develop a novel eigenvalue optimization framework for learning a Mahalanobis metric. Within this context, we introduce a novel metric learning approach called DML-eig which is shown to be equivalent to a well-known eigenvalue optimization problem called minimizing the maximal eigenvalue of a symmetric matrix (Overton, 1988; Lewis and Overton, 1996). Moreover, we formulate LMNN (Weinberger et al., 2005), one of the state-of-the-art metric learning methods, as a similar eigenvalue optimization problem. This novel framework not only provides new insights into metric learning but also opens new avenues to the design of efficient metric learning algorithms. Indeed, first-order algorithms are developed for DML-eig and LMNN which only need the computation of the largest eigenvector of a matrix per iteration. Their convergence characteristics are rigorously established. Various experiments on benchmark data sets show the competitive performance of our new ...
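
As a reminder of the objects involved, a schematic form of the learned Mahalanobis metric and of a max-eigenvalue objective is given below; the precise construction used by DML-eig is given in the paper, not here.

```latex
% Mahalanobis metric parameterized by a symmetric positive semidefinite matrix M:
d_M(x_i, x_j) = (x_i - x_j)^\top M \,(x_i - x_j)

% Schematic eigenvalue-optimization form (the paper's exact matrix X(M) and constraint set
% \mathcal{C} are not reproduced here):
\min_{M \in \mathcal{C}} \ \lambda_{\max}\big(X(M)\big)
```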

One of the major challenges of face recognition is to design a feature extractor and matcher that reduces the intra-class variations and increases the inter-class variations. The feature extraction algorithm has to be robust enough to extract similar features for a particular subject despite variations in quality, pose, illumination, expression, aging, and disguise. The problem is exacerbated when there are two individuals with low inter-class variation, i.e., look-alikes. In such cases, the inter-class variation for these two individuals is lower than the intra-class variation. This research explores the problem of look-alike faces and their effect on human performance and automatic face recognition algorithms. The contribution of this research is threefold: first, we analyze human recognition capabilities for look-alike appearances; second, we compare human recognition performance with ten existing face recognition algorithms; and finally, we propose an algorithm to i...

This paper presents fusion decision technique comparisons based on the nearest-neighborhood (NN) classifier family for a bimodal biometric verification system that makes use of face images and speech utterances. The system is essentially constructed by a face expert, a speech expert and a fusion decision module. Each individual expert has been optimized to operate in automatic mode and designed ...

Face recognition as a biometric tool for identification and verification of persons has gained momentum and practical vitality in the wake of growing security concerns. A facial recognition and face verification system can be considered a computer application for automatically identifying or verifying a person in a digital image, in as much as the processing is carried out ...

Face verification is different from the face identification task. Some traditional subspace methods that work well in face identification may suffer from a severe over-fitting problem when applied to the verification task. Conventional discriminative methods such as linear discriminant analysis (LDA) and its variants are highly sensitive to the training data, which hinders them from achieving high verification accuracy. This work proposes an eigenspectrum model that alleviates the over-fitting problem by replacing the unreliable small and zero eigenvalues with model values. It also enables discriminant evaluation in the whole space to extract low-dimensional features effectively. The proposed approach is evaluated and compared with eight popular subspace-based methods on a face verification task. Experimental results on three face databases show that the proposed method consistently outperforms the others.
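
One simple way to realize the idea of replacing unreliable small and zero eigenvalues before whitening is sketched below with NumPy; the paper's actual eigenspectrum model (how the replacement values are derived) is not reproduced, and the flat floor used here is only a placeholder.

```python
# Illustrative regularized whitening of a scatter matrix: eigenvalues beyond the reliable
# part of the spectrum are replaced before the 1/sqrt(lambda) scaling is applied.
import numpy as np

def regularized_whitening(scatter, n_reliable):
    vals, vecs = np.linalg.eigh(scatter)          # ascending eigenvalues
    vals, vecs = vals[::-1], vecs[:, ::-1]        # reorder to descending
    model = vals.copy()
    # Placeholder replacement: floor the unreliable tail at the last reliable eigenvalue.
    model[n_reliable:] = vals[n_reliable - 1]
    return vecs / np.sqrt(model)                  # columns form the whitening transform
```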

Ángel Serrano, Cristina Conde, Isaac Martín de Diego, Enrique Cabello. Face Recognition & Artificial Vision Group, Universidad Rey Juan Carlos, C/ Tulipán, s/n, Móstoles (Madrid), E-28933, Spain, http://frav.escet.urjc.es/...

Convolutional neural networks have reached extremely high performance on the face recognition task. These models are commonly trained using high-resolution images, and for this reason their discrimination ability is usually degraded when they are tested against low-resolution images. Thus, low-resolution face recognition remains an open challenge for deep learning models. Such a scenario is of particular interest for surveillance systems, in which it commonly happens that a low-resolution probe has to be matched against higher-resolution galleries. This task can be especially hard to accomplish since the probe can have resolutions as low as 8, 16 and 24 pixels per side, while the typical input of a state-of-the-art neural network is 224 pixels. In this paper, we describe the training campaign we used to fine-tune a ResNet-50 architecture, with Squeeze-and-Excitation blocks, on the tasks of very low and mixed-resolution face recognition. For the training process we used the VGGFace2 dataset, and we then tested the performance of the final model on the IJB-B dataset; in particular, we tested the neural network on the 1:1 verification task. In our experiments we considered two different scenarios: 1) probe and gallery with the same resolution; 2) probe and gallery with mixed resolutions. Experimental results show that with our approach it is possible to improve upon state-of-the-art models' performance on the low and mixed resolution face recognition tasks with a negligible loss at very high resolutions.
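
A resolution-degradation augmentation of the kind such a training campaign relies on could be sketched as follows; the specific side lengths match those quoted in the abstract, but the function itself and its parameters are assumptions rather than the paper's exact pipeline.

```python
# Randomly downsample a training face to a small side length, then resize back to the
# network input size (224), so the model sees both low- and full-resolution versions.
import random
from PIL import Image

def degrade_resolution(img: Image.Image, sides=(8, 16, 24, 224), out_size=224):
    side = random.choice(sides)                    # 224 keeps the original resolution
    low = img.resize((side, side), Image.BILINEAR)
    return low.resize((out_size, out_size), Image.BILINEAR)
```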

We present a novel local-based face verification system whose components are analogous to those of biological systems. In the proposed system, after global registration and normalization, three eye regions are converted from the spatial to the polar frequency domain by a Fourier-Bessel transform. The resulting representations are embedded in a dissimilarity space, where each image is represented by its distance to all the other images. In this dissimilarity space a Pseudo-Fisher discriminator is built. ROC and equal error rate verification test results on the FERET database showed that the system performed at least as well as state-of-the-art methods and better than a system based on polar Fourier features. The local-based system is especially robust to facial expression and age variations, but sensitive to registration errors.
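
The dissimilarity-space embedding described above (each image represented by its distances to all other images) could be sketched as follows; the Fourier-Bessel features and the Pseudo-Fisher discriminator are not reproduced, and generic feature vectors are assumed.

```python
# Build a dissimilarity representation: each row holds the distances from one query image
# to every reference (training) image.
import numpy as np

def dissimilarity_representation(features, reference_features):
    # features: (n, d) query vectors; reference_features: (m, d) training vectors.
    diff = features[:, None, :] - reference_features[None, :, :]
    return np.linalg.norm(diff, axis=2)            # (n, m) matrix of pairwise distances
```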

In this paper a fully automatic face verification system is presented. A face is characterized by a vector (jet) of coefficients determined by applying a bank of Gabor filters at 19 automatically localized facial fiducial points. The identity claimed by a subject is accepted or rejected depending on a similarity measure computed between the jet characterizing the subject and the ones corresponding to the subjects in the gallery. The performance of the system has been quantified according to the Lausanne evaluation protocol for authentication.
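
A jet-based verification decision of this general shape is sketched below; the normalized dot product is one common jet similarity, and both it and the 0.9 threshold are assumptions rather than the paper's exact measure.

```python
# Accept the claimed identity if the probe jet is similar enough to any gallery jet
# of the claimed subject.
import numpy as np

def jet_similarity(jet_a, jet_b):
    a, b = np.abs(jet_a), np.abs(jet_b)            # magnitudes of Gabor coefficients
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe_jet, gallery_jets, threshold=0.9):
    return max(jet_similarity(probe_jet, g) for g in gallery_jets) >= threshold
```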

In this paper, we propose a pseudo two-dimensional discrete HMM (P2D-DHMM) for face verification. Each frontal face image is scanned in two ways, one from top to bottom and one from right to left, by a sliding window, and two feature sets are extracted. 2D-DCT coefficients are extracted as features. K-means clustering is used to generate two codebooks, and vector quantization (VQ) then produces two codeword sequences for each face image. These codewords are used as observation vectors in the training and recognition phases. Two separate discrete HMMs (one for each scanning direction) are trained with the Baum-Welch algorithm for each set of images of the same face (λvc, λhc). A test face image is recognized by finding the best match (likelihood) between the image and all of the HMM face models (λvc + λhc) using the forward algorithm. Experimental results show the advantages of using the P2D-DHMM recognition engine instead of a conventional continuous HMM.
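
The observation-symbol pipeline (sliding-window 2D-DCT features vector-quantized with a k-means codebook) could be sketched as follows; window size, number of retained coefficients and codebook size are illustrative, and the Baum-Welch training and forward scoring of the HMMs are not shown.

```python
# Extract block 2D-DCT features along one scanning direction and quantize them into
# discrete observation symbols for a discrete HMM.
import numpy as np
from scipy.fftpack import dct
from sklearn.cluster import KMeans

def block_dct_features(image, block=16, step=8, keep=10):
    feats = []
    for y in range(0, image.shape[0] - block + 1, step):   # top-to-bottom scan
        window = image[y:y + block, :]
        coeffs = dct(dct(window, axis=0, norm="ortho"), axis=1, norm="ortho")
        feats.append(coeffs.flatten()[:keep])               # crude stand-in for zig-zag selection
    return np.array(feats)

def build_codebook(all_feats, n_codewords=32):
    return KMeans(n_clusters=n_codewords, n_init=10).fit(all_feats)

def observation_sequence(image, codebook):
    return codebook.predict(block_dct_features(image))      # discrete symbols for the HMM
```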