General Object Pose Transformation Network from Unpaired Data

Introduction

Object pose transformation is a challenging task. Yet most existing pose transformation networks focus only on synthesizing humans, and they rely either on keypoint information or on manual annotations of paired target pose images for training. However, collecting such paired data is labor-intensive, and keypoint cues do not apply to general objects. In this paper, we address the novel problem of general object pose transformation from unpaired data. Given a source image of an object, which provides appearance information, and a desired pose image as reference, in the absence of paired examples, we produce a depiction of that object in that pose while retaining the appearance of both the object and the background. [paper]

Demo

Download the Bird checkpoint from here, save it in code/checkpoints/bird, and execute the following commands; the results appear in code/output/test/bird.

cd ./code
pip install -r requirements.txt
cd ./models/networks/
git clone https://github.com/vacancy/Synchronized-BatchNorm-PyTorch
cp -rf Synchronized-BatchNorm-PyTorch/sync_batchnorm .
cd ../../
sh ./demo.sh
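The cloned sync_batchnorm module replaces per-GPU BatchNorm with a version that computes statistics across all devices. A minimal numeric sketch (pure Python, no PyTorch; the toy values are illustrative) of why that matters:

```python
# Two "GPUs" each hold half of a batch of activations.
dev0 = [1.0, 2.0, 3.0, 4.0]
dev1 = [10.0, 20.0, 30.0, 40.0]

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Unsynchronized BatchNorm: each device normalizes with its own statistics.
per_device_var = (var(dev0) + var(dev1)) / 2   # (1.25 + 125.0) / 2 = 63.125

# Synchronized BatchNorm: statistics are computed over the whole batch.
sync_var = var(dev0 + dev1)                    # 189.6875
```

With small batch sizes (the training command below uses batchSize 1 per GPU), per-device statistics are noisy, which is why the synchronized variant is copied into the model code.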

Training

Download the VGG checkpoint from here and save it in code/models.

cd ./code
python train.py --name bird --dataset_mode bird --dataroot bird_data/ --niter 100 --niter_decay 100 --use_attention --maskmix --noise_for_mask --mask_epoch 150 --warp_mask_losstype direct --weight_mask 100.0 --PONO --PONO_C --vgg_normal_correct --batchSize 1 --gpu_ids 0
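The flag list above is long; as a reading aid, here is a hedged sketch of how those options could be parsed with argparse. The option names are taken verbatim from the command; the help strings and types are assumptions, not the repository's actual option definitions:

```python
import argparse

# Illustrative parser mirroring the training flags above
# (names from the command; types and help text are assumptions).
parser = argparse.ArgumentParser(description="pose transformation training (sketch)")
parser.add_argument("--name", type=str, help="experiment name, e.g. bird")
parser.add_argument("--dataset_mode", type=str, help="which dataset loader to use")
parser.add_argument("--dataroot", type=str, help="path to the training data")
parser.add_argument("--niter", type=int, help="epochs at the initial learning rate")
parser.add_argument("--niter_decay", type=int, help="epochs with decaying learning rate")
parser.add_argument("--use_attention", action="store_true")
parser.add_argument("--maskmix", action="store_true")
parser.add_argument("--noise_for_mask", action="store_true")
parser.add_argument("--mask_epoch", type=int)
parser.add_argument("--warp_mask_losstype", type=str)
parser.add_argument("--weight_mask", type=float, help="weight of the mask loss term")
parser.add_argument("--PONO", action="store_true")
parser.add_argument("--PONO_C", action="store_true")
parser.add_argument("--vgg_normal_correct", action="store_true")
parser.add_argument("--batchSize", type=int, default=1)
parser.add_argument("--gpu_ids", type=str, default="0")

opt = parser.parse_args(
    "--name bird --dataset_mode bird --dataroot bird_data/ "
    "--niter 100 --niter_decay 100 --use_attention --maskmix "
    "--noise_for_mask --mask_epoch 150 --warp_mask_losstype direct "
    "--weight_mask 100.0 --PONO --PONO_C --vgg_normal_correct "
    "--batchSize 1 --gpu_ids 0".split()
)
```

Read this way, the command trains for 100 epochs at full learning rate plus 100 decay epochs, with the mask loss weighted at 100.0 and attention, mask mixing, and PONO normalization enabled.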

Result

<img src="./asset/horse.gif" width="50%"/><img src='./asset/sheep.gif' width="50%">

<img src="./asset/animal.png">

Acknowledgement

Our code borrows heavily from CoCosNet. We also thank the authors of VTON. Many thanks for their hard work.