
Table of Contents
  1. Introduction
  2. Datasets
  3. Getting Started
  4. Training & Evaluation

CPM: Color-Pattern Makeup Transfer

📢 New: We provide "Qualitative Performance Comparisons" online! Check it out!

[Figure: teaser.png] CPM can replicate both colors and patterns from a reference makeup style to another image.

Details of the dataset construction, model architecture, and experimental results can be found in the following paper:

@inproceedings{m_Nguyen-etal-CVPR21,
  author = {Thao Nguyen and Anh Tran and Minh Hoai},
  title = {Lipstick ain't enough: Beyond Color Matching for In-the-Wild Makeup Transfer},
  year = {2021},
  booktitle = {Proceedings of the {IEEE} Conference on Computer Vision and Pattern Recognition (CVPR)}
}

Please CITE our paper whenever our datasets or model implementation is used to help produce published results or is incorporated into other software.

Open In Colab - arXiv - project page


Datasets

We introduce ✨ four new datasets: CPM-Real, CPM-Synt-1, CPM-Synt-2, and Stickers. In addition, we use the published LADN Dataset and Makeup Transfer Dataset.

CPM-Real and Stickers were crawled from Google Image Search, while CPM-Synt-1 and CPM-Synt-2 were built on top of the Makeup Transfer dataset and Stickers. (Click on a dataset name below to download it.)

| Name | #imgs | Description | Preview |
| --- | --- | --- | --- |
| CPM-Real | 3895 | real makeup styles | CPM-Real.png |
| CPM-Synt-1 | 5555 | synthesized makeup images with pattern segmentation masks | ./imgs/CPM-Synt-1.png |
| CPM-Synt-2 | 1625 | synthesized triplets: makeup, non-makeup, ground truth | ./imgs/CPM-Synt-2.png |
| Stickers | 577 | high-quality images with an alpha channel | Stickers.png |

Dataset Folder Structure can be found here.
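
As a toy illustration of how a sticker with an alpha channel can be blended onto a face, similar in spirit to how the CPM-Synt datasets combine Makeup Transfer faces with Stickers, here is a minimal Python sketch. The file names and paste position are hypothetical placeholders; the actual synthesis pipeline is described in the paper.

# Toy sketch: alpha-blend an RGBA sticker onto a face image.
# File names and the paste position are hypothetical placeholders;
# the real CPM-Synt generation pipeline is described in the paper.
from PIL import Image

face = Image.open("face.png").convert("RGBA")        # base face image
sticker = Image.open("sticker.png").convert("RGBA")  # sticker with alpha channel

# Paste the sticker at an arbitrary position, using its alpha channel as the mask.
face.paste(sticker, (120, 180), mask=sticker)
face.convert("RGB").save("synthetic_makeup.png")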

By downloading these datasets, USER agrees:


Getting Started

Requirements
Installation
# clone the repo
git clone https://github.com/VinAIResearch/CPM.git
cd CPM

# install dependencies
conda env create -f environment.yml
Download pre-trained models
Usage

āž”ļø You can now try it in Google Colab Open in Colab

# Color+Pattern: 
CUDA_VISIBLE_DEVICES=0 python main.py --style ./imgs/style-1.png --input ./imgs/non-makeup.png

# Color Only: 
CUDA_VISIBLE_DEVICES=0 python main.py --style ./imgs/style-1.png --input ./imgs/non-makeup.png --color_only

# Pattern Only: 
CUDA_VISIBLE_DEVICES=0 python main.py --style ./imgs/style-1.png --input ./imgs/non-makeup.png --pattern_only

The result image will be saved as result.png.

<div style="align: left; text-align:center;"> <img src="./result.png" alt="result" width="250"/> <div class="caption">From left to right: Style, Input & Output</div> </div>
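
To process several reference styles in one go, you can wrap main.py in a small batch script. This is only a sketch: the styles/ folder and the per-style output names are assumptions, while the --style/--input flags and the result.png output come from the usage above.

# Sketch: batch makeup transfer by invoking main.py once per style image.
# The styles/ folder and the renamed outputs are assumptions; the --style/--input
# flags and the result.png output are taken from the usage commands above.
import os
import shutil
import subprocess
from pathlib import Path

input_image = "./imgs/non-makeup.png"
out_dir = Path("batch_results")
out_dir.mkdir(exist_ok=True)

for style in sorted(Path("./styles").glob("*.png")):
    subprocess.run(
        ["python", "main.py", "--style", str(style), "--input", input_image],
        check=True,
        env={**os.environ, "CUDA_VISIBLE_DEVICES": "0"},
    )
    # main.py writes its output to result.png; keep a copy per style.
    shutil.copy("result.png", out_dir / f"{style.stem}_result.png")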

Training and Evaluation

As stated in the paper, the Color Branch and the Pattern Branch are completely independent, yet they share the same workflow:

  1. Data preparation: generating the texture_map of each face.

  2. Training

Please refer to the Color Branch or the Pattern Branch for further details.
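
For intuition only, the data-preparation step unwraps each face into UV texture space. The sketch below shows the general remapping idea, not the repository's implementation: get_uv_position_map is a hypothetical helper standing in for whatever dense face-alignment model provides per-pixel UV-to-image coordinates.

# Rough illustration of the texture_map idea, not the repository's actual code.
# get_uv_position_map is a hypothetical helper that returns, for every pixel of
# a uv_size x uv_size UV grid, the (x, y) image coordinates of the matching
# face point (e.g. from a dense 3D face-alignment model).
import cv2
import numpy as np

def face_to_texture_map(image: np.ndarray, uv_size: int = 256) -> np.ndarray:
    pos = get_uv_position_map(image, uv_size)          # shape (uv_size, uv_size, 2)
    map_x = pos[..., 0].astype(np.float32)
    map_y = pos[..., 1].astype(np.float32)
    # Sample the original image at those coordinates to unwrap it into UV space.
    return cv2.remap(image, map_x, map_y, cv2.INTER_LINEAR)

# texture = face_to_texture_map(cv2.imread("face.png"))
# cv2.imwrite("texture_map.png", texture)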


🌿 If you have trouble running the code, please read Trouble Shooting before creating an issue. Thank you 🌿

Trouble Shooting
  1. [Solved] ImportError: libGL.so.1: cannot open shared object file: No such file or directory:

    sudo apt update
    sudo apt install libgl1-mesa-glx
    
  2. [Solved] RuntimeError: Expected tensor for argument #1 'input' to have the same device as tensor for argument #2 'weight'; but device 1 does not equal 0 (while checking arguments for cudnn_convolution). Fix: prefix the command with CUDA_VISIBLE_DEVICES (a Python alternative is sketched after this list). Example:

    CUDA_VISIBLE_DEVICES=0 python main.py
    
  3. [Solved] RuntimeError: cuda runtime error (999) : unknown error at /opt/conda/conda-bld/pytorch_1595629403081/work/aten/src/THC/THCGeneral.cpp:47

    sudo rmmod nvidia_uvm
    sudo modprobe nvidia_uvm
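
If you prefer to pin the GPU from inside Python rather than on the command line (see item 2 above), a common pattern is to set the environment variable before torch is imported. This is a generic sketch, not part of the repository:

# Sketch: equivalent of the CUDA_VISIBLE_DEVICES=0 prefix, set from inside Python.
# The variable must be set before torch initialises CUDA, i.e. before importing torch.
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import torch
print(torch.cuda.device_count())  # should now report a single visible device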
    
Dockerfile
Build the Docker image from the provided Dockerfile (replace name with your preferred image tag):
docker build -t name .