# PoseContrast
[3DV 2021 Oral] PyTorch implementation of the paper *Class-Agnostic Object Viewpoint Estimation in the Wild with Pose-Aware Contrastive Learning*.
Check out our Paper and Webpage for more details.
<p align="center"> <img src="./asset/TeaserIdea.png" width="45%" />       <img src="./asset/PoseContrast.png" width="45%" /> </p>@INPROCEEDINGS{Xiao2020PoseContrast,
author = {Yang Xiao and Yuming Du and Renaud Marlet},
title = {PoseContrast: Class-Agnostic Object Viewpoint Estimation in the Wild with Pose-Aware Contrastive Learning},
booktitle = {International Conference on 3D Vision (3DV)},
year = {2021}
}
## Installation
### 1. Create conda environment
```bash
conda env create -f environment.yml
conda activate PoseContrast
```
**PyTorch version:** this repo has been tested under PyTorch 1.0.0 and PyTorch 1.6.0 with similar performance.
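To quickly verify that the environment is functional, you can run a short sanity check (a minimal snippet, not part of this repo):

```python
# Minimal environment sanity check (not part of this repo).
import torch
import torchvision

print("PyTorch version:", torch.__version__)        # tested: 1.0.0 and 1.6.0
print("torchvision version:", torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```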
### 2. Download datasets
```bash
cd ./data
bash download_data.sh
```
This command will download the following datasets:
- **Pascal3D+** (link to the original dataset page): ImageNet and PascalVOC images picturing 12 rigid object classes in the wild.
- **Pix3D** (link to the original dataset page): mixed-source images picturing 9 rigid object classes, including "tools" and "misc".
- **ObjectNet3D** (link to the original dataset page): ImageNet images picturing 100 rigid object classes.
**Note:** Pix3D is used only for evaluation, while Pascal3D+ and ObjectNet3D both contain train/val splits for training and testing.
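If you want to double-check that the data landed where you expect, a small script along these lines can help; the folder names below are assumptions about what `download_data.sh` produces, so adjust them to your actual `./data` layout:

```python
# Hypothetical dataset layout check; the folder names are assumptions,
# not guaranteed by download_data.sh.
from pathlib import Path

data_root = Path("./data")
for name in ("Pascal3D", "Pix3D", "ObjectNet3D"):  # assumed folder names
    path = data_root / name
    print(f"{name}: {'found' if path.is_dir() else 'MISSING'} at {path}")
```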
### 3. Download pretrained models
```bash
cd ./pretrain_models
bash download_moco.sh
```
This command will download the MOCOv2 model released by FAIR and correct the module names.
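For reference, the module-name correction boils down to renaming checkpoint keys so the MoCo v2 query encoder can be loaded as a plain ResNet-50 backbone. The snippet below is only an illustrative sketch of that conversion, assuming the standard FAIR MoCo v2 checkpoint layout (weights under `module.encoder_q.*` inside a top-level `state_dict` entry); the actual logic lives in `download_moco.sh`.

```python
# Illustrative sketch of the module-name correction; the real logic is in
# download_moco.sh. Assumes the standard FAIR MoCo v2 checkpoint layout.
import torch

ckpt = torch.load("moco_v2_800ep_pretrain.pth.tar", map_location="cpu")
state_dict = ckpt["state_dict"]  # assumed top-level key

prefix = "module.encoder_q."
backbone = {
    k[len(prefix):]: v
    for k, v in state_dict.items()
    if k.startswith(prefix) and not k.startswith(prefix + "fc")  # drop MLP head
}
torch.save(backbone, "moco_v2_backbone.pth")  # output filename is arbitrary
```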
```bash
mkdir -p exps/PoseContrast_Pascal3D_MOCOv2 && cd exps/PoseContrast_Pascal3D_MOCOv2
wget "https://www.dropbox.com/s/mlmubnz9xgbflm4/ckpt.pth?dl=0" -O ckpt.pth
```
This command will download our class-agnostic object viewpoint estimation network trained on the Pascal3D+ dataset.

The models trained on the ObjectNet3D dataset can be downloaded from GoogleDrive:
- **PoseContrast_ObjectNet3D_ZeroShot** contains the trained model for the first base-training stage, where the model is trained only on the 80 base classes.
- **PoseContrast_ObjectNet3D_FewShot** contains the trained model for the second fine-tuning stage, where the previous model is fine-tuned on both the 80 base classes and the 20 novel classes.
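Once downloaded, a checkpoint can be inspected with plain PyTorch before plugging it into the test scripts (a minimal sketch; the inner structure of `ckpt.pth` is an assumption, so print the keys first and adapt accordingly):

```python
# Minimal checkpoint inspection; the inner layout of ckpt.pth is an assumption.
import torch

ckpt = torch.load("exps/PoseContrast_Pascal3D_MOCOv2/ckpt.pth", map_location="cpu")
if isinstance(ckpt, dict):
    print("Top-level keys:", list(ckpt.keys()))  # e.g. model weights, epoch, ...
else:
    print("Loaded object of type:", type(ckpt))
```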
## How to use
### 1. Launch a training
```bash
./scripts/train.sh
```
Training on the Pascal3D+ dataset. Models are saved at `exps/PoseContrast_Pascal3D_MOCOv2`, and a training log file `trainer.log` will also be generated.
```bash
./scripts/train_object3d.sh
```
Training on the ObjectNet3D dataset. Models are saved at `exps/PoseContrast_ObjectNet3D_ZeroShot` and `exps/PoseContrast_ObjectNet3D_FewShot` for the base-training and fine-tuning stages, respectively.
### 2. Evaluate on different datasets
```bash
./scripts/test.sh
```
Evaluate the model on Pascal3D+ and Pix3D:

- a `prediction` folder will be created to save the predicted viewpoints,
- a `correlation` folder will be created to save the angle classification scores and viewpoint estimation errors,
- a testing log file `tester.log` will be generated, saving the quantitative evaluation results (see the metric sketch below).
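For context, viewpoint estimation results in this setting are usually reported as Acc30 (fraction of test samples with rotation error under 30°) and MedErr (median rotation error in degrees). The sketch below shows the standard way these metrics are computed from predicted and ground-truth rotation matrices; it is a generic illustration, not the repo's exact evaluation code.

```python
# Generic sketch of the standard viewpoint metrics (Acc30, MedErr);
# not the repo's exact evaluation code.
import numpy as np

def rotation_error_deg(R_gt: np.ndarray, R_pred: np.ndarray) -> float:
    """Geodesic distance between two 3x3 rotation matrices, in degrees."""
    cos = (np.trace(R_gt.T @ R_pred) - 1.0) / 2.0
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def summarize(errors_deg):
    errors = np.asarray(errors_deg, dtype=float)
    acc30 = float((errors < 30.0).mean())  # Acc30: fraction of errors under 30 degrees
    mederr = float(np.median(errors))      # MedErr: median rotation error
    return acc30, mederr
```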
```bash
./scripts/test_object3d.sh
```
Evaluate the model on the ObjectNet3D dataset and report the results for base and novel classes.
## Visual Results
<p align="center"> <img src="./asset/Pix3D_view_detect.png" width="1000" /> </p>Further information
If you like this project, please check out related works on object pose estimation from our group: