# Virtual Try-On with Garment-Pose Keypoints Guided Inpainting
This repository provides the PyTorch implementation of the KGI virtual try-on method proposed in the ICCV 2023 paper *Virtual Try-On with Garment-Pose Keypoints Guided Inpainting*.
## Experimental Environment
Please follow the steps below to build the environment and install the required packages:

```bash
conda create -n kgi python=3.8 -y
conda activate kgi
bash install_pkgs.sh
```
## Data Preparation
- The VITON-HD dataset can be downloaded from VITON-HD. Please place the dataset under the directory `KGI/data/`. The dataset contains the following content:

  | Content | Comment |
  | --- | --- |
  | agnostic-mask | not in use in KGI |
  | agnostic-v3.2 | not in use in KGI |
  | cloth | |
  | cloth-mask | not in use in KGI |
  | image | |
  | image-densepose | not in use in KGI |
  | image-parse-agnostic-v3.2 | not in use in KGI |
  | image-parse-v3 | |
  | openpose_img | not in use in KGI |
  | openpose_json | |
- In addition to the above content, some other preprocessed conditions are used in KGI. They are generated with the data preprocessing codes [WIP]. The preprocessed data can also be downloaded directly:
  | Content | Train | Test |
  | --- | --- | --- |
  | image-landmark-json | Google Drive | Google Drive |
  | cloth-landmark-json | Google Drive | Google Drive |
  | label | Google Drive | Google Drive |
  | parse | Google Drive | Google Drive |
  | parse_ag_full | Google Drive | Google Drive |
  | ag_mask | Google Drive | Google Drive |
  | skin_mask | Google Drive | Google Drive |

- Data Preprocessing [WIP]
- Download demo_paired_pairs.txt and demo_unpaired_pairs.txt and place them under the directory `KGI/data/zalando-hd-resized/` for in-training visualization.
- The structure of the processed dataset should be as below (an optional sanity-check sketch follows the tree):
  ```
  KGI/data/zalando-hd-resized/
  ├── test/
  │   ├── ag_mask/
  │   ├── cloth/
  │   ├── cloth-landmark-json/
  │   ├── image/
  │   ├── image-landmark-json/
  │   ├── image-parse-v3/
  │   ├── openpose_json/
  │   ├── parse/
  │   ├── parse_ag_full/
  │   ├── skin_mask/
  │   └── label.json
  ├── train/
  │   └── ...
  ├── demo_paired_pairs.txt
  ├── demo_unpaired_pairs.txt
  ├── test_pairs.txt
  └── train_pairs.txt
  ```
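To catch missing folders early, a short self-check can help. Below is a minimal, hypothetical sketch (not part of the KGI codebase) that verifies the layout above; it assumes `train/` mirrors `test/`, including `label.json`, and that `KGI/` is the working directory.

```python
# check_dataset.py -- illustrative sanity check, not part of the KGI repo.
import os

ROOT = "data/zalando-hd-resized"  # assumes KGI/ is the working directory

SPLIT_DIRS = ["ag_mask", "cloth", "cloth-landmark-json", "image",
              "image-landmark-json", "image-parse-v3", "openpose_json",
              "parse", "parse_ag_full", "skin_mask"]
ROOT_FILES = ["demo_paired_pairs.txt", "demo_unpaired_pairs.txt",
              "test_pairs.txt", "train_pairs.txt"]

missing = []
for split in ("train", "test"):
    for d in SPLIT_DIRS:
        path = os.path.join(ROOT, split, d)
        if not os.path.isdir(path):
            missing.append(path)
    # label.json is shown under test/; assumed to exist for train/ as well
    label = os.path.join(ROOT, split, "label.json")
    if not os.path.isfile(label):
        missing.append(label)
for f in ROOT_FILES:
    path = os.path.join(ROOT, f)
    if not os.path.isfile(path):
        missing.append(path)

print("Missing:\n" + "\n".join(missing) if missing else "Layout looks complete.")
```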
## Model Training
The model training of KGI consists of three steps: training of the Keypoints Generator, training of the Parse Generator, and training of the Semantic Conditioned Inpainting Model.
### Keypoints Generator
- The Keypoints Generator is trained with the following scripts:

  ```bash
  cd codes_kg
  python3 train_kg.py
  ```

  During the training, the visualization of some validation samples will be saved under the directory `KGI/visualizations/two_graph_cs/`. Below is an example of the visualization results. The pretrained checkpoints of the Keypoints Generator can be downloaded from Google Drive and put under the directory `KGI/checkpoints_pretrained/kg/`.
- Since the parse generation is based on the estimated keypoints conditions, please generate the keypoints conditions with the following scripts before training the Parse Generator:

  ```bash
  bash generate_kg_demo_paired.sh
  bash generate_kg_demo_unpaired.sh
  bash generate_kg_train.sh
  bash generate_kg_test_paired.sh
  bash generate_kg_test_unpaired.sh
  ```

  The generated keypoints conditions will be saved under the directory `KGI/example/generate_kg/` and can also be downloaded from train_kg_conditions and test_kg_conditions. The files should be placed under the directories `KGI/data/zalando-hd-resized/train/` and `KGI/data/zalando-hd-resized/test/`, respectively (a quick way to inspect these keypoints is sketched after this list).
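For a quick look at the generated keypoints conditions, a small overlay script can be handy. The sketch below is illustrative only: the landmark JSON schema is not documented here, so the `"keypoints"` key and the flat `[x, y]` format are assumptions; inspect one file under `KGI/example/generate_kg/` and adapt accordingly.

```python
# visualize_keypoints.py -- illustrative only, not part of the KGI repo.
# ASSUMPTION: each landmark JSON stores a list of [x, y] pairs under the
# key "keypoints". Verify against the actual files before use.
import json

import matplotlib.pyplot as plt
from PIL import Image


def show_keypoints(image_path, json_path):
    image = Image.open(image_path)
    with open(json_path) as f:
        data = json.load(f)
    points = data["keypoints"]  # assumed key; check a real file
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    plt.imshow(image)
    plt.scatter(xs, ys, c="red", s=12)  # overlay keypoints on the image
    plt.axis("off")
    plt.show()


# Example call (file names are placeholders; use real ones from the dataset):
# show_keypoints("data/zalando-hd-resized/test/cloth/00006_00.jpg",
#                "data/zalando-hd-resized/test/cloth-landmark-json/00006_00.json")
```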
### Parse Generator
- The Parse Generator is trained with the following scripts:

  ```bash
  cd codes_pg
  python3 train_pg.py
  ```

  During the training, the visualization of some validation samples will be saved under the directory `KGI/visualizations/parse_full/`. Below is an example of the visualization results. The pretrained checkpoints of the Parse Generator can be downloaded from Google Drive and put under the directory `KGI/checkpoints_pretrained/pg/`.
- After the training of the Parse Generator, the person image parse (estimated segmentation map) can be generated with the following scripts (a quick way to view the output follows this list):

  ```bash
  bash generate_pg_demo_paired.sh
  bash generate_pg_demo_unpaired.sh
  bash generate_pg_test_paired.sh
  bash generate_pg_test_unpaired.sh
  ```

  The generated parse conditions will be saved under the directory `KGI/example/generate_pg/` and can also be downloaded from test_pg_conditions. The files should be placed under the directory `KGI/data/zalando-hd-resized/test/` for TPS conditions and final results generation.
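To eyeball a generated parse, one can map each label index to a color. The following sketch is illustrative (not repo code) and assumes the parse is saved as a single-channel image whose pixel values are class indices; check the actual files under `KGI/example/generate_pg/` before relying on it.

```python
# visualize_parse.py -- illustrative only, not part of the KGI repo.
# ASSUMPTION: the parse is a single-channel PNG of class indices.
import numpy as np
from PIL import Image


def colorize_parse(parse_path, out_path, seed=0):
    parse = np.array(Image.open(parse_path))  # (H, W) label map
    rng = np.random.default_rng(seed)
    # One random color per label index; background (label 0) stays black.
    palette = rng.integers(0, 256, size=(int(parse.max()) + 1, 3), dtype=np.uint8)
    palette[0] = 0
    Image.fromarray(palette[parse]).save(out_path)  # (H, W, 3) color image


# colorize_parse("example/generate_pg/00006_00.png", "parse_vis.png")
```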
### Semantic Conditioned Inpainting Model
- The Semantic Conditioned Inpainting Model is trained with the following scripts:

  ```bash
  cd codes_sdm
  python3 train_sdm.py
  ```

  The pretrained checkpoints of the Semantic Conditioned Inpainting Model can be downloaded from Google Drive and put under the directory `KGI/checkpoints_pretrained/sci/ckpt_1024/`.
- The TPS conditions (recomposed person image and content-keeping mask) can be generated with the following scripts:

  ```bash
  cd codes_tps
  bash generate_tps_demo_paired.sh
  bash generate_tps_demo_unpaired.sh
  bash generate_tps_test_paired.sh
  bash generate_tps_test_unpaired.sh
  ```

  The generated TPS conditions will be saved under the directory `KGI/example/generate_tps/` and can also be downloaded from test_tps_conditions. The files should be placed under the directory `KGI/data/zalando-hd-resized/test/` for final results generation with semantic conditioned inpainting.
- With the trained Semantic Conditioned Inpainting Model and the TPS conditions, the final results can be generated with the following scripts (a sketch of the mask-guided sampling idea follows this list):

  ```bash
  cd codes_sci
  bash generate_sci_demo_paired.sh
  bash generate_sci_demo_unpaired.sh
  bash generate_sci_test_paired.sh
  bash generate_sci_test_unpaired.sh
  ```
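Conceptually, the content-keeping mask plays the same role as the known-region mask in RePaint: at each reverse-diffusion step, pixels that should be kept are re-imposed from a noised version of the recomposed person image, while the masked-out region is filled in by the semantic-conditioned model. The sketch below illustrates that idea only; `model` and `scheduler` are hypothetical stand-ins, not the actual KGI or RePaint API.

```python
# repaint_step.py -- conceptual sketch of mask-guided inpainting, in the
# spirit of RePaint. Not the actual KGI code; names are illustrative.
import torch


def masked_reverse_step(x_t, t, known_image, keep_mask, model, scheduler):
    """One reverse-diffusion step with known pixels re-imposed.

    known_image: the recomposed person image (TPS condition).
    keep_mask:   1 where content should be kept, 0 where inpainting happens.
    model/scheduler: hypothetical diffusion model and noise schedule.
    """
    # Denoise the whole image one step (hypothetical scheduler API).
    eps = model(x_t, t)
    x_prev_generated = scheduler.step(eps, t, x_t)

    # Diffuse the known image forward to the matching noise level t-1.
    noise = torch.randn_like(known_image)
    x_prev_known = scheduler.add_noise(known_image, noise, t - 1)

    # Keep known content where the mask says so; inpaint elsewhere.
    return keep_mask * x_prev_known + (1.0 - keep_mask) * x_prev_generated
```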
## Demo with Pretrained Model
With the pretrained models, the final try-on results and the visualizations of the intermediate results can be generated with the following demo script:

```bash
python3 generate_demo.py
```

The final try-on results will be saved under `KGI/example/generate_demo/final_results/` and the visualizations of the intermediate results will be saved under `KGI/example/generate_demo/vis/`. Below is an example of the demo results.
## Acknowledgement and Citations
- The implementation of the Keypoints Generator is based on the repo SemGCN.
- The implementation of the Semantic Conditioned Inpainting Model is based on semantic-diffusion-model and RePaint.
- The implementation of the datasets and dataloader is based on the repo HR-VITON.
- If you find our work useful, please use the following citation:
```bibtex
@InProceedings{Li_2023_ICCV,
    author    = {Li, Zhi and Wei, Pengfei and Yin, Xiang and Ma, Zejun and Kot, Alex C.},
    title     = {Virtual Try-On with Pose-Garment Keypoints Guided Inpainting},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {22788-22797}
}
```