Efficient Face Verification Under Makeup Using Few Salient Facial Regions

FM2u-Net: Face Morphological Multi-Branch Network for Makeup-Invariant Face Verification

2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020

Learning a makeup-invariant face verification model is challenging due to (1) insufficient makeup/non-makeup face training pairs, (2) the lack of diverse makeup faces, and (3) the significant appearance changes caused by cosmetics. To address these challenges, we propose a unified Face Morphological Multi-branch Network (FM2u-Net) for makeup-invariant face verification, which can simultaneously synthesize many diverse makeup faces through a face morphology network (FM-Net) and effectively learn cosmetics-robust face representations using an attention-based multi-branch learning network (AttM-Net). For challenges (1) and (2), FM-Net (two stacked auto-encoders) can synthesize realistic makeup face images by transferring specific regions of cosmetics via a cycle-consistent loss. For challenge (3), AttM-Net, consisting of one global and three local (task-driven, on the two eyes and mouth) branches, can effectively capture complementary holistic and detailed information. Unlike DeepID2, which uses simple concatenation fusion, we introduce a heuristic method, AttM-FM, attached to AttM-Net, to adaptively weight the features of different branches guided by the holistic information. We conduct extensive experiments on makeup face verification benchmarks (M-501, M-203, and FAM) and general face recognition datasets (LFW and IJB-A). Our framework, FM2u-Net, achieves state-of-the-art performance.
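
To make the fusion idea concrete, below is a minimal sketch (assuming PyTorch) of attention-weighted multi-branch fusion in which the global feature predicts one weight per local branch; the module name AttentionFusion and the feature dimensions are illustrative, not the paper's actual implementation.

# Minimal sketch (not the authors' code): attention-weighted fusion of one global
# and three local branch embeddings, with weights predicted from the global feature.
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    def __init__(self, feat_dim=256, num_local=3):
        super().__init__()
        # The global feature drives one scalar weight per local branch.
        self.att = nn.Sequential(
            nn.Linear(feat_dim, num_local),
            nn.Softmax(dim=1),
        )

    def forward(self, global_feat, local_feats):
        # global_feat: (B, D); local_feats: list of (B, D) tensors for eye/mouth branches
        weights = self.att(global_feat)                      # (B, num_local)
        locals_stacked = torch.stack(local_feats, dim=1)     # (B, num_local, D)
        weighted = (weights.unsqueeze(-1) * locals_stacked).sum(dim=1)  # (B, D)
        return torch.cat([global_feat, weighted], dim=1)     # fused (B, 2D) representation

# Example with random features standing in for branch outputs
fusion = AttentionFusion(feat_dim=256)
g = torch.randn(4, 256)
fused = fusion(g, [torch.randn(4, 256) for _ in range(3)])
print(fused.shape)  # torch.Size([4, 512])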

Effective Face Verification Systems Based on the Histogram of Oriented Gradients and Deep Learning Techniques

2019 14th International Joint Symposium on Artificial Intelligence and Natural Language Processing (iSAI-NLP), 2019

In this paper, we propose a face verification method. We experiment with a histogram of oriented gradients descriptor combined with a linear support vector machine (HOG+SVM) for face detection. Subsequently, we apply a deep learning method based on the ResNet-50 architecture for face verification. We evaluate the performance of the face verification system on three well-known face datasets (BioID, FERET, and ColorFERET). The experimental results are divided into two parts: face detection and face verification. First, the results show that HOG+SVM performs very well on the face detection task, with no detection errors. Second, the ResNet-50 and FaceNet architectures perform best and obtain 100% accuracy on the BioID and FERET datasets. They also achieve very high accuracy on the ColorFERET dataset.
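
The pipeline described above can be approximated with off-the-shelf libraries. The sketch below (assuming scikit-image, scikit-learn, and torchvision; dataset loading, pretrained weights, and the decision threshold are placeholders) pairs a HOG+SVM patch classifier for detection with ResNet-50 embeddings compared by cosine similarity for verification.

# Minimal sketch: HOG+SVM stands in for the detection stage and a ResNet-50
# embedding with cosine similarity stands in for the verification stage.
import numpy as np
import torch
from skimage.feature import hog
from sklearn.svm import LinearSVC
from torchvision import models

# --- Detection stage: classify fixed-size patches as face / non-face ---
def hog_features(patches):
    # patches: iterable of HxW grayscale arrays
    return np.array([hog(p, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for p in patches])

patches = np.random.rand(20, 64, 64)           # placeholder patches
labels = np.array([0, 1] * 10)                 # placeholder face / non-face labels
detector = LinearSVC().fit(hog_features(patches), labels)

# --- Verification stage: compare ResNet-50 embeddings of two face crops ---
backbone = models.resnet50(weights=None)       # in practice, pretrained weights would be loaded
backbone.fc = torch.nn.Identity()              # keep the 2048-d penultimate feature
backbone.eval()

def embed(face_batch):
    with torch.no_grad():
        return torch.nn.functional.normalize(backbone(face_batch), dim=1)

a, b = torch.randn(1, 3, 224, 224), torch.randn(1, 3, 224, 224)
similarity = (embed(a) * embed(b)).sum(dim=1)  # cosine similarity after L2 normalization
print("same identity" if similarity.item() > 0.5 else "different identity")  # illustrative threshold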

A Comparison of Face Verification with Facial Landmarks and Deep Features

2018

Face verification is a key task in many application fields, such as security and surveillance. Several approaches and methodologies are currently used to determine whether two faces belong to the same person. Among these, facial landmarks are very important in forensics, since the distances between certain characteristic points of a face can be used as an objective measure in court during trials. However, the accuracy of approaches based on facial landmarks in verifying whether a face belongs to a given person is often limited. Recently, deep learning approaches have been proposed to address the face verification problem, with very good results. In this paper, we compare the accuracy of facial landmark and deep learning approaches in performing the face verification task. Our experiments, conducted on a real case scenario, show that the deep learning approach greatly outperforms the facial landmarks approach in accuracy. Keywords–Face Verification; Facial Landmarks;...
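
To illustrate the landmark-based side of the comparison, the sketch below (a simplification, not the paper's measurement protocol) builds a scale-normalized signature from pairwise distances between 68 facial landmarks, which are assumed to be provided by an external detector such as dlib or MediaPipe; the matching threshold is hypothetical and would need calibration.

# Minimal sketch: landmark-distance signature for face matching.
import numpy as np
from itertools import combinations

def landmark_signature(landmarks):
    # landmarks: (68, 2) array of (x, y) points in the standard 68-point scheme
    pts = np.asarray(landmarks, dtype=float)
    # Normalize by inter-ocular distance (outer eye corners 36 and 45) for scale invariance
    interocular = np.linalg.norm(pts[36] - pts[45])
    dists = [np.linalg.norm(pts[i] - pts[j]) for i, j in combinations(range(len(pts)), 2)]
    return np.array(dists) / interocular

def landmark_match(lm_a, lm_b, threshold=2.0):
    # Smaller distance between signatures -> more likely the same person;
    # the threshold here is illustrative, not calibrated.
    return np.linalg.norm(landmark_signature(lm_a) - landmark_signature(lm_b)) < threshold

# Usage with synthetic landmarks
lm1 = np.random.rand(68, 2) * 100
lm2 = lm1 + np.random.randn(68, 2)
print(landmark_match(lm1, lm2))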

Learning face similarities for face verification using hybrid convolutional neural networks

Indonesian Journal of Electrical Engineering and Computer Science

Face verification focuses on the task of determining whether two face images belong to the same identity or not. For unrestricted faces in the wild, this is a very challenging task. Besides significant degradation due to images with large variations in pose, illumination, expression, aging, and occlusion, it also suffers from the large-scale, ever-expanding data needed to perform one-to-many recognition tasks. In this paper, we propose a face verification method that learns face similarities using a convolutional neural network (ConvNet). Instead of extracting features from each face image separately, our ConvNet model jointly extracts relational visual features from the two face images in comparison. We train four hybrid ConvNet models to learn how to distinguish similarities between face pairs across four different face portions and join them at the top-layer classifier level. We use a binary classifier at the top layer to identify the similarity of face pairs, which includes a conve...
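
The sketch below (assuming PyTorch; layer sizes and names such as PortionNet are illustrative, not the authors' architecture) shows the joint-input idea: each face portion of the pair is stacked channel-wise into a single 6-channel input so relational features are learned directly, and the four portion branches are fused by a top binary classifier.

# Minimal sketch: per-portion pair ConvNets fused by a top-level binary classifier.
import torch
import torch.nn as nn

class PortionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(6, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )

    def forward(self, pair):
        # pair: (B, 6, H, W) -- two RGB face portions stacked along channels
        return self.features(pair)

class HybridVerifier(nn.Module):
    def __init__(self, num_portions=4):
        super().__init__()
        self.branches = nn.ModuleList([PortionNet() for _ in range(num_portions)])
        self.classifier = nn.Linear(64 * num_portions, 2)  # same / not same

    def forward(self, portion_pairs):
        feats = [branch(pair) for branch, pair in zip(self.branches, portion_pairs)]
        return self.classifier(torch.cat(feats, dim=1))

model = HybridVerifier()
pairs = [torch.randn(2, 6, 64, 64) for _ in range(4)]  # four face portions per face pair
print(model(pairs).shape)  # torch.Size([2, 2])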

A Convolutional Neural Network Approach for Face Verification

In this paper, we present a convolutional neural network (CNN) approach for the face verification task. We propose a "Siamese" architecture of two CNNs, with each CNN reduced to only four layers by fusing convolutional and subsampling layers. Network training is performed using the stochastic gradient descent algorithm with an annealed global learning rate. The generalization ability of the network is investigated via unique pairing of face images, and testing is done on the AT&T face database. Experimental work shows that the proposed CNN system can classify a pair of 46×46-pixel face images in 0.6 milliseconds, which is significantly faster than an equivalent network architecture with a cascade of convolutional and subsampling layers. The verification accuracy achieved is 3.33% EER (equal error rate). Learning converges within 20 epochs, and the proposed technique can verify a test subject unseen in training. This work shows the viability of the "Siamese" CNN for face verification applications, and further improvements to the architecture are under development to enhance its performance.
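
As a rough illustration (assuming PyTorch; the layer sizes are not the paper's exact configuration), the sketch below approximates the fused convolution-plus-subsampling layers with strided convolutions and shares the branch weights between the two inputs, as in a Siamese setup; the decision threshold on the embedding distance is illustrative.

# Minimal sketch: small shared-weight Siamese branch with strided convolutions.
import torch
import torch.nn as nn

class SmallBranch(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 5, stride=2), nn.Tanh(),   # convolution + subsampling in one step
            nn.Conv2d(8, 16, 5, stride=2), nn.Tanh(),
            nn.Flatten(),
            nn.Linear(16 * 9 * 9, 64), nn.Tanh(),
            nn.Linear(64, 32),
        )

    def forward(self, x):
        return self.net(x)

branch = SmallBranch()                        # the same weights process both inputs
x1, x2 = torch.randn(1, 1, 46, 46), torch.randn(1, 1, 46, 46)
distance = torch.norm(branch(x1) - branch(x2), dim=1)
print("same person" if distance.item() < 1.0 else "different person")  # illustrative threshold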

Data Augmentation-Assisted Makeup-Invariant Face Recognition

Mathematical Problems in Engineering

Recently, face datasets containing photos of celebrities with facial makeup have been growing at exponential rates, making their recognition very challenging. Existing face recognition methods rely on feature extraction and reference reranking to improve performance. However, face images with facial makeup carry inherent ambiguity due to artificial colors, shading, contouring, and varying skin tones, making the recognition task more difficult. The problem becomes more confounding as the makeup alters the bilateral size and symmetry of certain face components, such as the eyes and lips, affecting the distinctiveness of faces. The ambiguity becomes even worse when different days bring different facial makeup for celebrities, owing to the context of interpersonal situations and current societal makeup trends. To cope with these artificial effects, we propose to use a deep convolutional neural network (dCNN) trained on an augmented face dataset to extract discriminative features from face images containing syn...
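
As a very rough stand-in for the learned synthetic-makeup augmentation described above (this is only an assumption for illustration, using torchvision's standard transforms rather than any makeup synthesis model), a color- and shading-perturbation pipeline could be used to enlarge the training set before fine-tuning the dCNN feature extractor.

# Minimal sketch: generic color/shading augmentation as a crude proxy for makeup variation.
from torchvision import transforms

makeup_like_augmentation = transforms.Compose([
    transforms.ColorJitter(brightness=0.3, contrast=0.3, saturation=0.4, hue=0.05),
    transforms.RandomAffine(degrees=5, translate=(0.02, 0.02)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

# Each training face (a PIL image) would be passed through the pipeline several times
# to enlarge the dataset, e.g.:
# augmented = [makeup_like_augmentation(pil_face) for _ in range(5)]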

UFace: An Unsupervised Deep Learning Face Verification System

Electronics

Deep convolutional neural networks are often used for image verification but require large amounts of labeled training data, which are not always available. To address this problem, an unsupervised deep learning face verification system, called UFace, is proposed here. It starts by selecting, from a large set of unlabeled data, the k most similar and k most dissimilar images to a given face image and uses them for training. UFace is implemented using both an autoencoder and a Siamese network; the latter is used in all comparisons as its performance is better. Unlike in typical deep neural network training, UFace computes the loss function k times for similar images and k times for dissimilar images for each input image. UFace's performance is evaluated using four benchmark face verification datasets: Labeled Faces in the Wild (LFW), YouTube Faces (YTF), Cross-age LFW (CALFW) and Celebrities in Frontal Profile in the Wild (CFP-FP). UFace with the Siamese network achieved accuracies of 99.4...
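
The loss described above, computed k times against similar images and k times against dissimilar images per anchor, can be sketched contrastive-style as below (assuming PyTorch; the function name, margin, and embedding size are illustrative, not the UFace code).

# Minimal sketch: per-anchor loss accumulated over k similar and k dissimilar embeddings.
import torch
import torch.nn.functional as F

def uface_style_loss(anchor_emb, similar_embs, dissimilar_embs, margin=1.0):
    # anchor_emb: (D,); similar_embs / dissimilar_embs: (k, D) tensors
    loss = 0.0
    for s in similar_embs:                       # pull the k similar images closer
        loss = loss + F.pairwise_distance(anchor_emb[None], s[None]).pow(2).squeeze()
    for d in dissimilar_embs:                    # push the k dissimilar images apart
        dist = F.pairwise_distance(anchor_emb[None], d[None]).squeeze()
        loss = loss + torch.clamp(margin - dist, min=0).pow(2)
    return loss / (len(similar_embs) + len(dissimilar_embs))

# Usage with random embeddings (k = 3)
anchor = torch.randn(128)
print(uface_style_loss(anchor, torch.randn(3, 128), torch.randn(3, 128)))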

Evaluating face recognition with different texture descriptions and convolution neural network

Indonesian Journal of Electrical Engineering and Computer Science

Extracting the remarkable attributes of image objects is an issue of ongoing research, especially in the face recognition problem. This paper presents two directions. The first is a comparison between local binary patterns (LBP) and their modified center-symmetric LBP variant, drawn from localized facial regions; for efficiency, K-nearest neighbor (KNN) and support vector machine (SVM) techniques play significant roles in this research and are used to implement the proposed system. The second direction proposes an efficient architecture based on a deep learning convolutional neural network (CNN) to implement face recognition. This design consists of two parts: a convolutional feature learning model and a classification model. The first learns the important features, while the second produces a class score for each input sample. Many experiments are implemented on the known dataset, once for the number of nearest neighbors (K value), and then decrease the nu...
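
The texture-descriptor direction can be sketched as below (assuming scikit-image and scikit-learn; the dataset is a placeholder and the histogram binning is one common choice, not necessarily the paper's): an LBP histogram is computed per face and classified with KNN and SVM.

# Minimal sketch: LBP histogram features classified with KNN and SVM.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def lbp_histogram(gray_face, P=8, R=1):
    lbp = local_binary_pattern(gray_face, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

# Placeholder data: 40 grayscale faces from 4 identities
faces = (np.random.rand(40, 64, 64) * 255).astype(np.uint8)
labels = np.repeat(np.arange(4), 10)
X = np.array([lbp_histogram(f) for f in faces])

knn = KNeighborsClassifier(n_neighbors=3).fit(X, labels)
svm = SVC(kernel="linear").fit(X, labels)
print(knn.score(X, labels), svm.score(X, labels))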

Pose-invariant face recognition with multitask cascade networks

Neural Computing and Applications, 2022

In this work, a face recognition method is proposed for faces under pose variations using a multi-task convolutional neural network (CNN). A pose estimation module followed by a face identification module are combined in a cascaded structure and can also be used separately. In the presence of various facial poses as well as low illumination, datasets that separate face poses can enhance the robustness of face recognition. The proposed method relies on a pose estimation module using a convolutional neural network model trained on three categories of face image capture: left side, frontal, and right side. Then, three CNN models are used for face identification according to the estimated pose: the Left-CNN, Front-CNN, and Right-CNN models identify the face for the left, frontal, and right poses, respectively. Because face images may contain some useless information (e.g., background content), we propose a skin-based face segmentation method using structure decomposition and the Color Invariant Descriptor. The proposed cascade-based face recognition system, consisting of the aforementioned steps (i.e., pose estimation, face segmentation, and face identification), is assessed on four different datasets, and its superiority over related state-of-the-art techniques is shown. Results reveal the contribution of the separate representation, skin segmentation, and pose estimation to recognition robustness.
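
The routing logic of the cascade can be sketched as below (assuming PyTorch; the tiny networks, identity count, and function names are illustrative, not the authors' models): a pose classifier selects one of three pose-specific recognition networks for each input face.

# Minimal sketch: pose-estimation CNN routing to pose-specific identification CNNs.
import torch
import torch.nn as nn

def tiny_cnn(num_outputs):
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, num_outputs),
    )

pose_net = tiny_cnn(3)                                            # left / frontal / right
identity_nets = nn.ModuleList([tiny_cnn(100) for _ in range(3)])  # 100 identities, illustrative

def recognize(face):
    # face: (1, 3, H, W), assumed already skin-segmented and cropped
    pose = pose_net(face).argmax(dim=1).item()
    return identity_nets[pose](face).argmax(dim=1).item()

print(recognize(torch.randn(1, 3, 64, 64)))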

A multimodal biometric database and case study for face recognition based deep learning

Bulletin of Electrical Engineering and Informatics

Recently, multimodal biometric systems have garnered a lot of interest for the identification of human identity. The accessibility of databases is one of the contributing elements that impact biometric recognition systems. In their studies, the majority of researchers concentrate on unimodal databases. Nonetheless, because so few comparable multimodal biometric databases are publicly accessible, there was a need to compile a fresh, realistic multimodal biometric database. This study introduces the MULBv1 multimodal biometric database, which contains homologous biometric traits. The MULBv1 database includes 20 images of each person's face in various poses, facial expressions, and accessories, 20 images of their right hand from various angles, and 20 images of their right iris from various lighting positions. The database contains real multimodal data from 174 people, and all biometrics were accurately collected using the micro camera of an iPhone 14 Pro Max. A face recognition technique is also presented as a case study using the gathered facial data. In the case study, a deep convolutional neural network (CNN) was used, and the findings were positive: across several trials, the accuracy was 97.41%.