PoseContrast

[3DV 2021 Oral] PyTorch implementation of the paper "Class-Agnostic Object Viewpoint Estimation in the Wild with Pose-Aware Contrastive Learning".

Check out our Paper and Webpage for more details.

<p align="center"> <img src="./asset/TeaserIdea.png" width="45%" /> &emsp; &emsp; &emsp; <img src="./asset/PoseContrast.png" width="45%" /> </p>
@INPROCEEDINGS{Xiao2020PoseContrast,
    author    = {Yang Xiao and Yuming Du and Renaud Marlet},
    title     = {PoseContrast: Class-Agnostic Object Viewpoint Estimation in the Wild with Pose-Aware Contrastive Learning},
    booktitle = {International Conference on 3D Vision (3DV)},
    year      = {2021}
}

Installation

1. Create conda environment

conda env create -f environment.yml
conda activate PoseContrast

PyTorch version: This repo has been tested under PyTorch 1.0.0 and PyTorch 1.6.0 with similar performance.

2. Download datasets

cd ./data
bash download_data.sh

This command will download the following datasets:

Note: Pix3D is only used for evaluation, while Pascal3D+ and ObjectNet3D both contain train/val splits for training and testing.

3. Download pretrained models

cd ./pretrain_models
bash download_moco.sh

This command will download the MoCo v2 model released by FAIR and correct the module names so the backbone weights can be loaded directly.
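The renaming step can be illustrated with a small sketch. This is an assumption about what the script does, not a copy of it: MoCo v2 checkpoints wrap the backbone in a `module.encoder_q.` prefix (plus key-encoder and queue buffers), which must be stripped before loading into a plain ResNet. The function name `rename_moco_keys` is ours; in practice the dict would come from `torch.load(...)["state_dict"]`.

```python
def rename_moco_keys(state_dict):
    """Keep only the query-encoder weights, dropping MoCo's wrapper prefix."""
    prefix = "module.encoder_q."
    return {k[len(prefix):]: v for k, v in state_dict.items() if k.startswith(prefix)}

# Toy stand-in for a MoCo v2 state dict (real values are tensors).
moco_ckpt = {
    "module.encoder_q.conv1.weight": "w1",
    "module.encoder_q.fc.0.weight": "w2",
    "module.encoder_k.conv1.weight": "w3",  # key encoder: discarded
    "queue": "q",                           # MoCo queue buffer: discarded
}
backbone = rename_moco_keys(moco_ckpt)
# backbone == {"conv1.weight": "w1", "fc.0.weight": "w2"}
```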

mkdir -p exps/PoseContrast_Pascal3D_MOCOv2 && cd exps/PoseContrast_Pascal3D_MOCOv2
wget "https://www.dropbox.com/s/mlmubnz9xgbflm4/ckpt.pth?dl=1" -O ckpt.pth

This command will download our class-agnostic object viewpoint estimation network trained on the Pascal3D+ dataset.

The model trained on the ObjectNet3D dataset can be downloaded from Google Drive.

How to use

1. Launch a training

./scripts/train.sh 

Trains on the Pascal3D+ dataset. Models are saved to exps/PoseContrast_Pascal3D_MOCOv2, and a training log file trainer.log is also generated there.


./scripts/train_object3d.sh

Trains on the ObjectNet3D dataset. Models for the zero-shot and few-shot settings are saved to exps/PoseContrast_ObjectNet3D_ZeroShot and exps/PoseContrast_ObjectNet3D_FewShot, respectively.

2. Evaluate on different datasets

./scripts/test.sh 

Evaluates the model on Pascal3D+ and Pix3D.


./scripts/test_object3d.sh 

Evaluates the model on the ObjectNet3D dataset and reports results for both base and novel classes.
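Viewpoint estimation on these benchmarks is conventionally scored with Acc@30 (fraction of predictions within 30° of the ground truth) and the median geodesic rotation error. A minimal sketch of those two metrics, assuming rotations are given as 3x3 matrices (function names are ours, not from this repo):

```python
import numpy as np

def geodesic_error_deg(R_pred, R_gt):
    """Angle (degrees) of the relative rotation between two 3x3 rotation matrices."""
    cos = (np.trace(R_pred.T @ R_gt) - 1.0) / 2.0
    # Clip guards against tiny numerical excursions outside [-1, 1].
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def summarize(errors, threshold=30.0):
    """Return (Acc@threshold, median error) over per-sample errors in degrees."""
    errors = np.asarray(errors, dtype=float)
    return float(np.mean(errors <= threshold)), float(np.median(errors))

# Example: a 45-degree rotation about z against identity gives a 45-degree error.
c, s = np.cos(np.radians(45.0)), np.sin(np.radians(45.0))
Rz45 = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
err = geodesic_error_deg(np.eye(3), Rz45)  # ~45.0
```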

Visual Results

<p align="center"> <img src="./asset/Pix3D_view_detect.png" width="1000" /> </p>

Further information

If you like this project, please check out related works on object pose estimation from our group: