Spatially-Invariant Style-Codes Controlled Makeup Transfer

Supervised Learning for Makeup Style Transfer

CSRN

This paper addresses the problem of using deep learning for makeup style transfer. To solve this problem, we propose a new supervised method. Additionally, we present a technique for creating a synthetic dataset for makeup transfer, which is used to train our model. The obtained results were compared with six popular makeup transfer methods using three metrics, with tests carried out on four publicly available datasets. The proposed method is, in many respects, competitive with the methods reported in the literature. Thanks to face images with generated synthetic makeup, the proposed method learns to transfer details more accurately, and the learning process is significantly accelerated.

Makeup Style Transfer on Low-quality Images with Weighted Multi-scale Attention

2020 25th International Conference on Pattern Recognition (ICPR)

Facial makeup style transfer is an extremely challenging sub-field of image-to-image translation. Due to this difficulty, state-of-the-art results mostly rely on face parsing algorithms, which segment a face into parts in order to extract makeup features easily. However, such algorithms only work well on high-resolution images where facial features can be accurately extracted. Faces in many real-world photos, such as those including a large background or multiple people, are typically of low resolution, which considerably hinders state-of-the-art algorithms. In this paper, we propose an end-to-end holistic approach to effectively transfer makeup styles between two low-resolution images. The idea is built upon a novel weighted multi-scale spatial attention module, which identifies salient pixel regions of low-resolution images at multiple scales and uses channel attention to determine the most effective attention map. This design provides two benefits: low-resolution images are usually blurry to different extents, so a multi-scale architecture can select the most effective convolution kernel size to implement spatial attention; and makeup is applied both at a macro level (foundation, fake tan) and a micro level (eyeliner, lipstick), so different scales excel at extracting different makeup features. We develop an Augmented CycleGAN network that embeds our attention modules at selected layers to transfer makeup most effectively. We test our system on the FBD dataset, which consists of many low-resolution facial images, and demonstrate that it outperforms state-of-the-art methods, particularly when transferring makeup for blurry and partially occluded images.
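
As a rough illustration of the attention design described above, the sketch below computes spatial attention maps with several kernel sizes and lets a small channel-attention branch weight them so the most useful map dominates. Module names, dimensions, and kernel sizes are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class WeightedMultiScaleAttention(nn.Module):
    """Illustrative weighted multi-scale spatial attention block (assumed design)."""
    def __init__(self, channels, kernel_sizes=(3, 5, 7)):
        super().__init__()
        # One spatial-attention convolution per scale.
        self.spatial = nn.ModuleList(
            nn.Conv2d(channels, 1, k, padding=k // 2) for k in kernel_sizes
        )
        # Channel attention over pooled features -> one weight per scale.
        self.scale_weights = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(channels, len(kernel_sizes)),
            nn.Softmax(dim=1),
        )

    def forward(self, x):
        maps = [torch.sigmoid(conv(x)) for conv in self.spatial]   # B x 1 x H x W each
        w = self.scale_weights(x)                                  # B x num_scales
        attn = sum(w[:, i].view(-1, 1, 1, 1) * m for i, m in enumerate(maps))
        return x * attn                                            # reweighted features
```

Such a block could then be embedded at selected layers of a CycleGAN-style generator, as the abstract describes.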

Digital Makeup Face Generation

2015

Makeup applications offer Photoshop-like tools that take user input to generate a made-up face. While this is a well-established area of development, automatic transfer of makeup from a reference to a target image presents more challenges. This project applies the makeup from a reference image to a target image on a pixel-by-pixel basis. The reference and target were aligned by warping using 77 detected face landmarks. The images were decomposed into a large-scale layer, a detail layer, and a color layer, through which makeup highlights and color information were transferred by Poisson editing, weighted means, and alpha blending. The test results showed that the techniques work well with reference images of a different age, a different skin color, and even a hand-sketched reference image.
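
A minimal sketch of the layer-based transfer described above, assuming the reference has already been warped onto the target via facial landmarks: luminance is split into large-scale and detail layers with an edge-preserving filter, the detail (highlights) is mixed by a weighted mean, and the color layer is alpha-blended. The decomposition parameters and blend weights are illustrative; the full method additionally uses Poisson editing for seamless highlight transfer, which is omitted here.

```python
import cv2
import numpy as np

def transfer_makeup(target_bgr, reference_bgr, alpha=0.7, detail_weight=0.5):
    """Simplified layer-based makeup transfer (images assumed pre-aligned)."""
    def decompose(img):
        lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB).astype(np.float32)
        L, a, b = cv2.split(lab)
        large = cv2.bilateralFilter(L, d=9, sigmaColor=75, sigmaSpace=75)  # large-scale layer
        detail = L - large                                                 # detail layer
        return large, detail, a, b                                         # color layer = (a, b)

    t_large, t_detail, t_a, t_b = decompose(target_bgr)
    _r_large, r_detail, r_a, r_b = decompose(reference_bgr)

    # Highlights/shading: weighted mean of detail layers; color: alpha blending.
    out_detail = (1 - detail_weight) * t_detail + detail_weight * r_detail
    out_a = (1 - alpha) * t_a + alpha * r_a
    out_b = (1 - alpha) * t_b + alpha * r_b

    out_lab = cv2.merge([t_large + out_detail, out_a, out_b])
    out_lab = np.clip(out_lab, 0, 255).astype(np.uint8)
    return cv2.cvtColor(out_lab, cv2.COLOR_LAB2BGR)
```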

Semi-Supervised Eye Makeup Transfer by Swapping Learned Representation

2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW)

This paper introduces an autoencoder structure to transfer the eye makeup from an arbitrary reference image to a source image realistically and faithfully, using both synthetic paired data and unpaired data in a semi-supervised way. Unlike the image domain transfer problem, our framework needs only one domain entity and follows an "encoding-swap-decoding" process. Makeup transfer is achieved by decoding the base representation from a source image together with the makeup representation from a reference image. Moreover, our method allows users to control the makeup degree by tuning a makeup weight. To the best of our knowledge, there is no public large-scale makeup dataset for evaluating data-driven approaches, so we have collected a dataset of non-makeup images and with-makeup images covering various eye makeup styles. Experiments demonstrate the effectiveness of our method against state-of-the-art methods both qualitatively and quantitatively.
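
A schematic of the "encoding-swap-decoding" idea with placeholder encoder/decoder architectures: the source's base code is decoded together with the reference's makeup code, and interpolating between the source and reference makeup codes gives a tunable makeup weight. This is an assumption-laden sketch, not the paper's network.

```python
import torch
import torch.nn as nn

class EyeMakeupTransfer(nn.Module):
    """Encode base (identity) and makeup representations separately, then swap."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.base_enc = nn.Sequential(nn.Conv2d(3, feat_dim, 4, 2, 1), nn.ReLU())
        self.makeup_enc = nn.Sequential(nn.Conv2d(3, feat_dim, 4, 2, 1), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(2 * feat_dim, 3, 4, 2, 1), nn.Tanh()
        )

    def forward(self, source, reference, makeup_weight=1.0):
        base = self.base_enc(source)
        m_src = self.makeup_enc(source)
        m_ref = self.makeup_enc(reference)
        # Interpolating between the two makeup codes controls the makeup degree.
        makeup = (1 - makeup_weight) * m_src + makeup_weight * m_ref
        return self.decoder(torch.cat([base, makeup], dim=1))
```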

Stylizing face images via multiple exemplars

Computer Vision and Image Understanding

We address the problem of transferring the style of a headshot photo to face images. Existing methods using a single exemplar lead to inaccurate results when the exemplar does not contain sufficient stylized facial components for a given photo. In this work, we propose an algorithm to stylize face images using multiple exemplars containing different subjects in the same style. Patch correspondences between an input photo and multiple exemplars are established using a Markov Random Field (MRF), which enables accurate local energy transfer via Laplacian stacks. As image patches from multiple exemplars are used, the boundaries of facial components on the target image are inevitably inconsistent. The artifacts are removed by a post-processing step using an edge-preserving filter. Experimental results show that the proposed algorithm consistently produces visually pleasing results.
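
A sketch of the Laplacian-stack local energy transfer step, assuming the exemplar has already been aligned to the input; the actual method instead selects exemplar patches via an MRF over multiple exemplars and removes boundary artifacts with an edge-preserving filter. Layer counts, sigmas, and the gain clamp are illustrative choices.

```python
import cv2
import numpy as np

def laplacian_stack(img, levels=4, sigma=2.0):
    """Build a Laplacian stack (band-pass layers at full resolution)."""
    stack, current = [], img.astype(np.float32)
    for i in range(levels):
        blurred = cv2.GaussianBlur(current, (0, 0), sigma * (2 ** i))
        stack.append(current - blurred)
        current = blurred
    stack.append(current)  # residual low-pass layer
    return stack

def transfer_local_energy(input_stack, exemplar_stack, eps=1e-4):
    """Match the input's local band energy to the exemplar's, layer by layer."""
    out = []
    for inp, ex in zip(input_stack[:-1], exemplar_stack[:-1]):
        e_in = cv2.GaussianBlur(inp * inp, (0, 0), 4)
        e_ex = cv2.GaussianBlur(ex * ex, (0, 0), 4)
        gain = np.sqrt((e_ex + eps) / (e_in + eps))
        out.append(inp * np.clip(gain, 0.5, 3.0))   # clamp gain to limit artifacts
    out.append(exemplar_stack[-1])                  # take the exemplar's low-pass base
    return out
```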

Dress-up: deep neural framework for image-based human appearance transfer

Multimedia Tools and Applications

The fashion industry is on the brink of radical transformation. The emergence of Artificial Intelligence (AI) in fashion applications creates many opportunities for this industry and makes fashion a better space for everyone. In this context, we propose a virtual try-on interface to stimulate consumers' purchase intentions and facilitate their online buying decisions. We present, in this paper, a flexible person generation system for virtual try-on that aims to address the task of human appearance transfer across images while preserving texture details and the structural coherence of the generated outfit. This challenging task has drawn increasing attention and driven great progress in intelligent fashion applications. However, it poses several challenges, especially when there are wide divergences between the source and target images. To solve this problem, we propose a flexible person generation framework called Dress-up for the 2D virtual try-on task. Dress-up is an end-to-end generation pipeline with three modules, based on image-to-image translation, that sequentially interchanges garments between images and produces dressing effects not achievable by existing works. The core idea of our solution is to explicitly encode the body pose and the target clothes with a pre-processing module based on semantic segmentation. A conditional adversarial network then generates the target segmentation, which is fed to the alignment and translation networks to produce the final output. The novelty of this work lies in realizing high-quality appearance transfer across images by reconstructing garments on a person in different orders and looks simply from semantic maps and 2D images, without 3D modeling. Our system produces convincing dressing effects and achieves significant improvements over state-of-the-art methods on the widely used DeepFashion dataset. Extensive evaluations show that Dress-up outperforms other recent methods in terms of output quality and handles a wide range of editing functions for which there is no direct supervision. Several types of results were computed to verify the performance of the proposed framework and demonstrate its robustness and effectiveness.

FacialGAN: Style Transfer and Attribute Manipulation on Synthetic Faces

2021

Facial image manipulation is a generation task in which the output face is shifted towards an intended target in terms of facial attributes and style. Recent works have achieved great success in various editing techniques such as style transfer and attribute translation. However, current approaches either focus on pure style transfer or on the translation of predefined sets of attributes with restricted interactivity. To address this issue, we propose FacialGAN, a novel framework enabling simultaneous rich style transfer and interactive facial attribute manipulation. While preserving the identity of a source image, we transfer the diverse styles of a target image to the source image. We then incorporate the geometry information of a segmentation mask to provide fine-grained manipulation of facial attributes. Finally, a multi-objective learning strategy is introduced to optimize the loss of each specific task. Experiments on the CelebA-HQ dataset, with CelebAMask-...
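
For illustration, a multi-objective learning strategy of this kind is commonly realized as a weighted sum of per-task losses. The sketch below uses generic adversarial, style reconstruction, and mask consistency terms; the term names, tensor layouts, and weights are placeholders, not FacialGAN's actual objectives.

```python
import torch
import torch.nn.functional as F

def multi_objective_loss(outputs, targets, weights=None):
    """Weighted multi-task objective (illustrative terms and weights)."""
    weights = weights or {"adv": 1.0, "style": 1.0, "mask": 10.0}
    losses = {
        # Non-saturating adversarial term on generator outputs.
        "adv": F.softplus(-outputs["fake_logits"]).mean(),
        # Style code reconstruction between extracted and target style codes.
        "style": F.l1_loss(outputs["style_code"], targets["style_code"]),
        # Segmentation-mask consistency for fine-grained attribute control.
        "mask": F.cross_entropy(outputs["mask_logits"], targets["mask"]),
    }
    total = sum(weights[k] * v for k, v in losses.items())
    return total, losses
```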

Human Appearance Transfer

2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition

We propose an automatic person-to-person appearance transfer model based on explicit parametric 3D human representations and learned, constrained deep translation network architectures for photographic image synthesis. Given a single source image and a single target image, each corresponding to a different human subject wearing different clothing and in a different pose, our goal is to photorealistically transfer the appearance from the source image onto the target image while preserving the target shape and clothing segmentation layout. Our solution to this new problem is formulated as a computational pipeline that combines (1) 3D human pose and body shape estimation from monocular images, (2) identification of the 3D surface color elements (mesh triangles) visible in both images, which can be transferred directly using barycentric procedures, and (3) prediction of the surface appearance missing in the first image but visible in the second one using deep learning-based image synthesis techniques. Our model achieves promising results, as supported by a perceptual user study in which participants rated around 65% of our results as good, very good, or perfect, as well as by automated tests (Inception scores and a Faster R-CNN human detector responding very similarly to real and model-generated images). We further show how the proposed architecture can be profiled to automatically generate images of a person dressed in different clothing transferred from a person in another image, opening paths for applications in entertainment and photo editing (e.g. embodying and posing as friends or famous actors), the fashion industry, or affordable online clothing shopping.
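
A toy sketch of step (2), the barycentric transfer of colors for mesh triangles visible in both images: each visible triangle is sampled at a few barycentric coordinates, and the source color at each sample is written to the corresponding target location. Array layouts and function names are assumptions; in the full pipeline the remaining pixels are completed by the learned synthesis network.

```python
import numpy as np

def transfer_visible_triangle_colors(src_img, src_uv, dst_uv, visible, dst_shape):
    # src_uv / dst_uv: (T, 3, 2) per-triangle 2D vertex projections in each image.
    # visible: boolean mask over triangles visible in both views (assumed given).
    out = np.zeros(dst_shape, dtype=np.float32)
    H, W = dst_shape[:2]
    sh, sw = src_img.shape[:2]
    samples = [(1 / 3, 1 / 3), (0.6, 0.2), (0.2, 0.6), (0.2, 0.2)]  # barycentric (a, b)
    for tri in np.flatnonzero(visible):
        for a, b in samples:
            c = 1.0 - a - b
            # The same barycentric coordinates index corresponding points in both images.
            sx, sy = a * src_uv[tri, 0] + b * src_uv[tri, 1] + c * src_uv[tri, 2]
            dx, dy = a * dst_uv[tri, 0] + b * dst_uv[tri, 1] + c * dst_uv[tri, 2]
            xi, yi = int(round(dx)), int(round(dy))
            si = min(max(int(round(sy)), 0), sh - 1)
            sj = min(max(int(round(sx)), 0), sw - 1)
            if 0 <= xi < W and 0 <= yi < H:
                out[yi, xi] = src_img[si, sj]
    return out  # holes are later filled by the deep image synthesis stage
```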

Unified Application of Style Transfer for Face Swapping and Reenactment

2020

Face reenactment and face swapping have gained a lot of attention due to their broad range of applications in computer vision. Although both tasks share similar objectives (e.g. manipulating expression and pose), existing methods do not explore the benefits of combining them. In this paper, we introduce a unified end-to-end pipeline for face swapping and reenactment. We propose a novel approach to isolated disentangled representation learning of specific visual attributes in an unsupervised manner. A combination of the proposed training losses allows us to synthesize results in a one-shot manner. The proposed method does not require subject-specific training. We compare our method against state-of-the-art methods on multiple public datasets of different complexities. The proposed method outperforms other SOTA methods in terms of producing realistic-looking face images.