# UNICORN :unicorn:
<div align="center"> <h2> Share With Thy Neighbors:<br> Single-View Reconstruction by Cross-Instance Consistency <p></p><a href="https://www.tmonnier.com">Tom Monnier</a> <a href="https://techmatt.github.io/">Matthew Fisher</a> <a href="https://people.eecs.berkeley.edu/~efros/">Alexei A. Efros</a> <a href="https://imagine.enpc.fr/~aubrym/">Mathieu Aubry</a>
<p></p><a href="https://www.tmonnier.com/UNICORN/"><img src="https://img.shields.io/badge/-Webpage-blue.svg?colorA=333&logo=html5" height=35em></a> <a href="https://arxiv.org/abs/2204.10310"><img src="https://img.shields.io/badge/-Paper-blue.svg?colorA=333&logo=arxiv" height=35em></a> <a href="https://www.tmonnier.com/UNICORN/demo"><img src="https://img.shields.io/badge/-Demo-blue.svg?colorA=333&logo=googlecolab" height=35em></a> <a href="https://www.tmonnier.com/UNICORN/ref.bib"><img src="https://img.shields.io/badge/-BibTeX-blue.svg?colorA=333&logo=latex" height=35em></a>
<p></p> </h2> </div>Official PyTorch implementation of Share With Thy Neighbors: Single-View Reconstruction by Cross-Instance Consistency (ECCV 2022). Check out our webpage for video results!
This repository contains:
- scripts to download and load both datasets and pretrained models
- demo to reconstruct cars from raw images (script or interactive notebook)
- configs to train the models from scratch
- evaluation pipelines to reproduce quantitative results
- guidelines to train a model on a new dataset
If you find this code useful, please consider citing the paper:

```bibtex
@inproceedings{monnier2022unicorn,
  title={{Share With Thy Neighbors: Single-View Reconstruction by Cross-Instance Consistency}},
  author={Monnier, Tom and Fisher, Matthew and Efros, Alexei A and Aubry, Mathieu},
  booktitle={{ECCV}},
  year={2022},
}
```
<details>
<summary><b>Major code updates :clipboard:</b></summary>

- 08/22: PyTorch 1.10 instead of 1.5, big models, `kp_eval.py`, evaluation with gradient-based ICP with anisotropic scale, Pascal3D+ car Chamfer evaluation
- 05/22: first code release

</details>
## Installation :construction_worker:
### 1. Create conda environment :wrench:
```bash
conda env create -f environment.yml
conda activate unicorn
```
<details>
<summary><b>Optional visualization :chart_with_downwards_trend:</b></summary>

Some monitoring routines are implemented; you can use them by specifying your visdom port in the config file. You will need to install visdom from source beforehand:

```bash
git clone https://github.com/facebookresearch/visdom
cd visdom && pip install -e .
```
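
Once installed, you can start the visdom server and point the config to its port. A minimal sketch (the port number is an arbitrary example; the exact config key depends on your config file):

```bash
# Launch the visdom server on port 8097, then set that port in your config file
python -m visdom.server -port 8097
```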
</details>
### 2. Download datasets :arrow_down:
```bash
bash scripts/download_data.sh
```
This command will download one of the following datasets:
- **ShapeNet NMR**: paper / NMR paper / dataset (33GB, thanks to the DVR team for hosting the data)
- **CUB-200-2011**: paper / webpage / dataset (1GB)
- **Pascal3D+ Cars**: paper / webpage (with FTP download link, 7.5GB) / UCMR annotations (bbox + train/test split, thanks to the UCMR team for hosting them) / UNICORN annotations (3D shape ground-truth)
- **CompCars**: paper / webpage / dataset (12GB, thanks to the GIRAFFE team for hosting the data)
- **LSUN**: paper / webpage / horse dataset (69GB) / moto dataset (42GB)
### 3. Download pretrained models :arrow_down:
```bash
bash scripts/download_model.sh
```
We provide a small (200MB) and a big (600MB) version of each pretrained model (see the training section for details). The command will download one of the following models:
- `car` trained on CompCars: car.pkl / car_big.pkl
- `car_p3d` trained on Pascal3D+: car_p3d.pkl / car_p3d_big.pkl
- `bird` trained on CUB: bird.pkl / bird_big.pkl
- `moto` trained on LSUN Motorbike: moto.pkl / moto_big.pkl
- `horse` trained on LSUN Horse: horse.pkl / horse_big.pkl
- `sn_*` trained on each ShapeNet category: airplane, bench, cabinet, car, chair, display, lamp, phone, rifle, sofa, speaker, table, vessel
- `sn_big_*` trained on each ShapeNet category: airplane, bench, cabinet, car, chair, display, lamp, phone, rifle, sofa, speaker, table, vessel
- :exclamation:<b>These small models correspond to an old version of the code</b>, in particular with fewer training iterations. We release them for backward compatibility and completeness; retrain from scratch for a thorough comparison.
- it may happen that `gdown` hangs; if so, you can download the models manually from the Google Drive links and move them to the `models` folder (see the sketch below).
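
For instance, a minimal sketch of the manual fallback, assuming you downloaded `car_big.pkl` from its Google Drive link into `~/Downloads` (the paths are illustrative):

```bash
# Manual fallback if gdown hangs:
# 1. download the checkpoint from its Google Drive link in a browser
# 2. move it to the models folder expected by the scripts
mkdir -p models
mv ~/Downloads/car_big.pkl models/car_big.pkl
```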
## How to use :rocket:
### 1. Demo - 3D reconstruction of car images :oncoming_automobile:
You first need to download the car model (see above), then launch:
```bash
cuda=gpu_id model=car_big.pkl input=demo ./scripts/reconstruct.sh
```
where `gpu_id` is a target CUDA device id, `car_big.pkl` corresponds to a pretrained model, and `demo` is a folder containing the target images. Reconstruction results (.obj + gif) will be saved in a folder named `demo_rec`.
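
For example, a concrete invocation on the first GPU with the folder names used above (a sketch; the exact per-image output layout may differ):

```bash
# Reconstruct every image in demo/ on GPU 0 with the big CompCars model
cuda=0 model=car_big.pkl input=demo ./scripts/reconstruct.sh
# Meshes (.obj) and turntable gifs are written under demo_rec/
```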
We also provide an [interactive demo](https://www.tmonnier.com/UNICORN/demo) to reconstruct cars from single images.
### 2. Train models from scratch :runner:
To launch a training from scratch, run:
```bash
cuda=gpu_id config=filename.yml tag=run_tag ./scripts/pipeline.sh
```
where `gpu_id` is a device id, `filename.yml` is a config in the `configs` folder, and `run_tag` is a tag for the experiment.

Results are saved at `runs/${DATASET}/${DATE}_${run_tag}`, where `DATASET` is the dataset name specified in `filename.yml` and `DATE` is the current date in `mmdd` format.
Available configs are:
sn/*.yml
,sn_big/*.yml
for each ShapeNet categorycar.yml
,car_big.yml
for CompCars datasetcub.yml
,cub_big.yml
for CUB-200 datasethorse.yml
,horse_big.yml
for LSUN Horse datasetmoto.yml
,horse_big.yml
for LSUN Motorbike datasetp3d_car.yml
,p3d_car_big.yml
for Pascal3D+ Car dataset
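
For instance, a minimal sketch of a run on the CUB-200 dataset, using GPU 0 and an arbitrary tag (the dataset name in the output path comes from `cub.yml`):

```bash
# Train the small CUB-200 model on GPU 0 under the tag "cub_baseline"
cuda=0 config=cub.yml tag=cub_baseline ./scripts/pipeline.sh
# Results are saved to runs/${DATASET}/${DATE}_cub_baseline
```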
:exclamation:NB: we recommend always checking the results after the first training stage. In particular for categories like birds or horses, learning can fall into bad minima with poor prototypical shapes. If so, relaunch the training with a different seed.
<details>
<summary><b>Small vs big model :muscle:</b></summary>

We provide two configs to train a small and a big version of the model. Both versions give great results; the main benefit of the bigger model is slightly more detailed textures. The architecture differences are:

- a shared backbone vs separate backbones
- 32/128/128 vs 64/512/256 code sizes for shape/texture/background
- 16 vs 64 minimal number of channels in the generators

For faster experiments and prototyping, <b>we recommend training the small version</b>.
</details>
<details>
<summary><b>Computational cost :moneybag:</b></summary>

On a single GPU, the approximate training times are:

- roughly 3 days for ShapeNet on a V100
- roughly 10 days for real-image datasets on a 2080Ti

</details>
### 3. Reproduce our quantitative results :bar_chart:
A model is evaluated at the end of training. To evaluate a pretrained model (e.g. `sn_big_airplane.pkl`):

- move the model to a fake folder and rename it `model.pkl` (e.g. in `runs/shapenet_nmr/airplane_big`)
- point to the fake tag to resume from in the config (e.g. `resume: airplane_big` in `airplane.yml`)
- launch the training (and thus evaluation) with:
```bash
cuda=gpu_id config=sn_big/airplane.yml tag=airplane_big_eval ./scripts/pipeline.sh
```
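
For example, putting the three steps above together for `sn_big_airplane.pkl` (a sketch; the folder names mirror the example above and the config edit is done by hand):

```bash
# 1. move and rename the pretrained checkpoint into a fake run folder
mkdir -p runs/shapenet_nmr/airplane_big
mv models/sn_big_airplane.pkl runs/shapenet_nmr/airplane_big/model.pkl

# 2. set `resume: airplane_big` in configs/sn_big/airplane.yml (manual edit)

# 3. relaunch the pipeline, which ends with the evaluation
cuda=0 config=sn_big/airplane.yml tag=airplane_big_eval ./scripts/pipeline.sh
```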
<details>
<summary><b>Chamfer-L1 scores on ShapeNet :triangular_ruler:</b></summary>

| airplane | bench | cabinet | car | chair | display | lamp | phone | rifle | sofa | speaker | table | vessel | mean |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.110 | 0.159 | 0.137 | 0.168 | 0.253 | 0.220 | 0.523 | 0.127 | 0.097 | 0.192 | 0.224 | 0.243 | 0.155 | 0.201 |

</details>
For CUB, the built-in evaluation included in the training pipeline is Mask-IoU. To evaluate PCK, run:
```bash
cuda=gpu_id tag=run_tag ./scripts/kp_eval.sh
```
### 4. Train on a custom dataset :crystal_ball:
If you want to learn a model for a custom object category, here are the key things you need to do:
- put your images in a `custom_name` folder inside the `datasets` folder
- edit the config file `custom.yml` (or `custom_big.yml`) in the `configs` folder: this includes changing the dataset name to `custom_name` and setting all training milestones
- launch training with:
```bash
cuda=gpu_id config=custom.yml tag=custom_run_tag ./scripts/pipeline.sh
```
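
For example, a minimal sketch for a hypothetical category called `mug` (the folder name, image paths, and tag are placeholders; the milestone values still have to be set by hand in the config):

```bash
# 1. gather the raw images under datasets/mug/
mkdir -p datasets/mug
cp /path/to/my_mug_images/*.jpg datasets/mug/

# 2. edit configs/custom.yml: set the dataset name to "mug" and the training milestones

# 3. launch the training pipeline on GPU 0
cuda=0 config=custom.yml tag=mug_run ./scripts/pipeline.sh
```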
## Further information :books:
If you like this project, check out related works from our group:
- Loiseau et al. - Representing Shape Collections with Alignment-Aware Linear Models (3DV 2021)
- Monnier et al. - Unsupervised Layered Image Decomposition into Object Prototypes (ICCV 2021)
- Monnier et al. - Deep Transformation Invariant Clustering (NeurIPS 2020)
- Deprelle et al. - Learning elementary structures for 3D shape generation and matching (NeurIPS 2019)
- Groueix et al. - AtlasNet: A Papier-Mache Approach to Learning 3D Surface Generation (CVPR 2018)