SieveNet

Python 3.6 | License: MIT

This is an unofficial implementation of 'SieveNet: A Unified Framework for Robust Image-Based Virtual Try-On'. The paper can be found here.

Dataset downloading and processing

Dataset download instructions and the dataset link can be found in the official repos of CP-VTON and VITON. Put the dataset in the data folder.
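If you want to verify the layout before training, the snippet below is a minimal sanity check. It assumes CP-VTON-style folder names (`cloth`, `cloth-mask`, `image`, `image-parse`, `pose`) under `data/train` and `data/test`; the names this repo's data loader actually expects may differ slightly.

```python
# Minimal sanity check for the dataset layout.
# The folder names are an assumption based on the CP-VTON convention;
# adjust them if this repo's loader expects something different.
import os

DATA_ROOT = 'data'
for split in ('train', 'test'):
    for sub in ('cloth', 'cloth-mask', 'image', 'image-parse', 'pose'):
        path = os.path.join(DATA_ROOT, split, sub)
        status = 'OK' if os.path.isdir(path) else 'MISSING'
        print(f'{path}: {status}')
```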

Usage

Clone the repo and install the requirements with `pip install -r requirements.txt`.

Training

Coarse-to-Fine Warping module

In `config.py`, set `self.datamode='train'` and `self.stage='GMM'`, then run `python train.py`. You can observe results in TensorBoard while training, as in the screenshot below.

(TensorBoard screenshot from GMM training)
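For reference, here is a minimal sketch of what the relevant fields in `config.py` might look like for this step. Only `self.datamode` and `self.stage` come from the instructions above; the other attribute names are assumptions and may not match the actual file. The SEG and TOM training steps below differ only in the value of `self.stage`.

```python
# Sketch of the training-related fields in config.py.
# Only datamode and stage are confirmed by this README; the remaining
# attribute names are hypothetical placeholders.
class Config:
    def __init__(self):
        self.datamode = 'train'               # 'train' here, 'test' for the testing section
        self.stage = 'GMM'                    # 'GMM' | 'SEG' | 'TOM'
        self.data_root = 'data'               # hypothetical: dataset folder from above
        self.checkpoint_dir = 'checkpoints'   # hypothetical: where checkpoints are saved
        self.tensorboard_dir = 'tensorboard'  # hypothetical: TensorBoard log directory
```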

Conditional Segmentation Mask generation module

In `config.py`, set `self.datamode='train'` and `self.stage='SEG'`, then run `python train.py`.

(TensorBoard screenshot from SEG training)

Segmentation Assisted Texture Translation module

In `config.py`, set `self.datamode='train'` and `self.stage='TOM'`, then run `python train.py`.

(TensorBoard screenshot from TOM training)

Testing on dataset

Please download the checkpoints of all three modules from Google Drive and put them in the checkpoints folder. For testing, set `self.datamode='test'` in `config.py`. To test the Coarse-to-Fine Warping, Conditional Segmentation Mask generation, and Segmentation Assisted Texture Translation modules, set `self.stage='GMM'`, `self.stage='SEG'`, and `self.stage='TOM'` respectively. Here are sample testing results.

For the Coarse-to-Fine Warping module:

(TensorBoard screenshot from GMM testing)

For the Segmentation Assisted Texture Translation module:

(TensorBoard screenshot from TOM testing)
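Before running the tests, you can quickly confirm the downloaded checkpoints are in place. The file names in this sketch are hypothetical; use whatever names the Google Drive download provides.

```python
# Quick check that the three module checkpoints exist before testing.
# The file names below are placeholders, not the actual checkpoint names.
from pathlib import Path

checkpoint_dir = Path('checkpoints')
expected = ['gmm.pth', 'seg.pth', 'tom.pth']  # hypothetical names for GMM / SEG / TOM weights
for name in expected:
    path = checkpoint_dir / name
    print(f'{path}: {"found" if path.is_file() else "missing"}')
```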

Testing on custom image

  1. Please download the checkpoints of all three modules from Google Drive and put them in the checkpoints folder.
  2. Download the Caffe model from here and put it in the pose folder.
  3. Generate the human parsing with the Self-Correction-Human-Parsing repo or with this Colab demo. Select the LIP dataset while generating the human parsing.
  4. Set the paths of the input image, the cloth image, and the human-parsing output in the config file (a sketch of possible fields follows after this list).
  5. Run `python inference.py`. The output will be saved in the outputs folder.
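For step 4, the sketch below shows what those path settings might look like. The attribute names and example paths are hypothetical; match them to the actual fields in this repo's config file.

```python
# Hypothetical inference-path fields for the config file.
# Only the three required inputs (person image, cloth image, human-parsing
# output) and the outputs folder come from this README; names may differ.
class InferencePaths:
    def __init__(self):
        self.input_image = 'inputs/person.jpg'        # photo of the person
        self.cloth_image = 'inputs/cloth.jpg'         # target clothing item
        self.parse_image = 'inputs/person_parse.png'  # human-parsing output from step 3 (LIP)
        self.output_dir = 'outputs'                   # inference.py saves results here
```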

Update: Inference using Colab

Please find the inference code of SieveNet in the second part of this notebook.


Acknowledgements

Some modules of this implementation are based on this repo. For generating pose keypoints, I have used the LearnOpenCV implementation of OpenPose.