Monocular 3D Object Reconstruction with GAN Inversion (ECCV 2022)

This paper presents a novel GAN Inversion framework for single-view 3D object reconstruction.

Setup

Install environment:

conda env create -f env.yml

# if conda cannot solve the environment, use the fallback spec:
conda env create -f env_sub.yml

conda activate mesh_inv

Install Kaolin (tested on commit e7e5131).
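
After installation, a quick import check (a trivial sketch, not part of the repository) confirms that Kaolin is visible from the mesh_inv environment:

# Confirm Kaolin is importable from the activated environment.
import kaolin
print("Kaolin imported from", kaolin.__file__)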

Download the pretrained model and place it under checkpoints_gan/pretrained. Download the CUB dataset CUB_200_2011, the cache, the predicted_mask, and the PseudoGT for ConvMesh GAN training, and place them under datasets/cub/. Alternatively, you can obtain your own predicted masks with PointRend, and your own PseudoGT by following ConvMesh. The expected layout is shown below, followed by a short sanity-check sketch.

- datasets
  - cub
    - CUB_200_2011
    - cache
    - predicted_mask
    - pseudogt_512x512
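
Before training, the following sanity check (a minimal sketch, not part of the repository; the folder names follow the layout above) can confirm that the expected dataset folders are in place:

import os
# Expected CUB dataset layout, relative to the repository root (see the list above).
expected_dirs = [
    "datasets/cub/CUB_200_2011",
    "datasets/cub/cache",
    "datasets/cub/predicted_mask",
    "datasets/cub/pseudogt_512x512",
]
missing = [d for d in expected_dirs if not os.path.isdir(d)]
if missing:
    print("Missing dataset folders:", ", ".join(missing))
else:
    print("All expected dataset folders are present.")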

Reconstruction

The reconstruction results on the test split are obtained through GAN inversion.

python run_inversion.py --name author_released --checkpoint_dir pretrained
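
At a high level, inversion optimizes a latent code so that the generator's rendered shape and texture match the observed image and silhouette. The sketch below is only a simplified illustration with placeholder names (generator, render, and their interfaces are assumptions, not the repository's actual API); run_inversion.py implements the full method with its actual loss terms.

import torch
import torch.nn.functional as F

# Simplified single-image GAN inversion loop; `generator` and `render` are
# placeholders for the pretrained mesh generator and a differentiable renderer.
def invert(generator, render, image, mask, steps=400, lr=0.01):
    z = torch.randn(1, 512, requires_grad=True)        # latent code to optimize (512 is a placeholder size)
    optimizer = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        mesh, texture = generator(z)                    # predicted shape and texture
        rendered_image, rendered_mask = render(mesh, texture)
        # Match the rendering to the observation in silhouette and appearance.
        loss = F.mse_loss(rendered_mask, mask) + F.l1_loss(rendered_image, image)
        loss.backward()
        optimizer.step()
    return z.detach()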

Evaluation

Evaluation results can be obtained after running the GAN inversion step above.

python run_evaluation.py --name author_released --eval_option IoU
python run_evaluation.py --name author_released --eval_option FID_1
python run_evaluation.py --name author_released --eval_option FID_12
python run_evaluation.py --name author_released --eval_option FID_10
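
For reference, 2D mask IoU is the ratio of intersection to union between predicted and ground-truth silhouettes. The snippet below is a minimal sketch of that metric, not the repository's evaluation code, whose exact protocol may differ:

import numpy as np

def mask_iou(pred_mask, gt_mask, threshold=0.5):
    # Intersection-over-Union between two silhouette masks with values in [0, 1].
    pred = pred_mask > threshold
    gt = gt_mask > threshold
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union > 0 else 1.0

# Example: mask_iou(rendered_silhouette, gt_silhouette) returns a value in [0, 1].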

Pretraining

You can also pretrain your own GAN from scratch.

python run_pretraining.py --name self_train --gpu_ids 0,1,2,3 --epochs 600
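
For context, adversarial pretraining alternates discriminator and generator updates. The sketch below is a generic non-saturating GAN step with toy modules (all names and shapes are placeholders), not the actual ConvMesh objective or architecture used by run_pretraining.py:

import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins: a real mesh generator and image discriminator replace these.
G = nn.Linear(128, 512)                  # latent code -> fake sample (placeholder)
D = nn.Linear(512, 1)                    # sample -> realism logit (placeholder)
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
real = torch.randn(16, 512)              # stand-in for pseudo-GT renderings
z = torch.randn(16, 128)

# Discriminator step: push real samples toward 1 and generated samples toward 0.
fake = G(z).detach()
d_loss = F.binary_cross_entropy_with_logits(D(real), torch.ones(16, 1)) \
       + F.binary_cross_entropy_with_logits(D(fake), torch.zeros(16, 1))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Generator step: update G so its samples are classified as real.
g_loss = F.binary_cross_entropy_with_logits(D(G(z)), torch.ones(16, 1))
opt_g.zero_grad()
g_loss.backward()
opt_g.step()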

Acknowledgement

The code is built in part on ConvMesh, ShapeInversion, and CMR. In addition, the Chamfer Distance implementation is borrowed from ChamferDistancePytorch and is included in the lib/external folder for convenience.
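
For reference, the symmetric Chamfer Distance measures, for each point in one set, the squared distance to its nearest neighbour in the other set, accumulated over both directions (conventions vary between sums and means). A naive PyTorch sketch is shown below; the CUDA kernel in lib/external is what the code actually uses and is much faster:

import torch

def chamfer_distance(p1, p2):
    # Naive symmetric Chamfer Distance between point sets p1 (N, 3) and p2 (M, 3).
    dist = torch.cdist(p1, p2) ** 2                 # pairwise squared distances, shape (N, M)
    return dist.min(dim=1).values.mean() + dist.min(dim=0).values.mean()

# Tiny example with random point clouds.
a = torch.rand(1024, 3)
b = torch.rand(1024, 3)
print(chamfer_distance(a, b).item())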

Citation

@inproceedings{zhang2022monocular,
    title = {Monocular 3D Object Reconstruction with GAN Inversion},
    author = {Zhang, Junzhe and Ren, Daxuan and Cai, Zhongang and Yeo, Chai Kiat and Dai, Bo and Loy, Chen Change},
    booktitle = {ECCV},
    year = {2022}}