MVT: Multi-View Transformer for 3D Visual Grounding [CVPR 2022]

Installation and Data Preparation

Please refer to referit3d for installation and data preparation instructions.

We adopt bert-base-uncased from Hugging Face; the transformers library can be installed with pip as follows:

pip install transformers

You can download the pretrained weights from this page and put them into a folder, referred to below as PATH_OF_BERT.
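
If you prefer to fetch the weights programmatically rather than by hand, the minimal sketch below uses the transformers API to download bert-base-uncased and save it to a local folder; the folder path is only an example of PATH_OF_BERT, adjust it to your setup:

    # Sketch: download bert-base-uncased via the transformers library and
    # save it to a local folder that can then be passed as PATH_OF_BERT.
    from transformers import AutoModel, AutoTokenizer

    PATH_OF_BERT = "./pretrained/bert-base-uncased"  # example path, choose your own

    # Downloads (or reuses the Hugging Face cache) and writes the weights,
    # config, and vocabulary files so the training script can load them from disk.
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")
    tokenizer.save_pretrained(PATH_OF_BERT)
    model.save_pretrained(PATH_OF_BERT)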

Training

    python referit3d/scripts/train_referit3d.py \
    -scannet-file $PATH_OF_SCANNET_FILE$ \
    -referit3D-file $PATH_OF_REFERIT3D_FILE$ \
    --bert-pretrain-path $PATH_OF_BERT$ \
    --log-dir logs/MVT_nr3d \
    --n-workers 8 \
    --model 'referIt3DNet_transformer' \
    --unit-sphere-norm True \
    --batch-size 24 \
    --encoder-layer-num 3 \
    --decoder-layer-num 4 \
    --decoder-nhead-num 8 \
    --gpu "0" \
    --view_number 4 \
    --rotate_number 4 \
    --label-lang-sup True \
    --augment-with-sr3d sr3d_dataset_file.csv

Validation / Test

    python referit3d/scripts/train_referit3d.py \
    --mode evaluate \
    -scannet-file $PATH_OF_SCANNET_FILE$ \
    -referit3D-file $PATH_OF_REFERIT3D_FILE$ \
    --bert-pretrain-path $PATH_OF_BERT$ \
    --log-dir logs/MVT_nr3d \
    --resume-path $the_path_to_the_model.pth$ \
    --n-workers 8 \
    --model 'referIt3DNet_transformer' \
    --unit-sphere-norm True \
    --batch-size 24 \
    --encoder-layer-num 3 \
    --decoder-layer-num 4 \
    --decoder-nhead-num 8 \
    --gpu "0" \
    --view_number 4 \
    --rotate_number 4 \
    --label-lang-sup True \
    --augment-with-sr3d sr3d_dataset_file.csv
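
Before launching evaluation, a quick sanity check (not part of the repo's scripts; the checkpoint path below is only a placeholder) is to confirm that the .pth file you pass to --resume-path can be loaded by PyTorch:

    # Sketch: verify that a saved checkpoint is readable before evaluation.
    # The path is a placeholder; the key names depend on how training saved it.
    import torch

    ckpt = torch.load("logs/MVT_nr3d/checkpoints/best_model.pth", map_location="cpu")
    print(type(ckpt))
    if isinstance(ckpt, dict):
        print(list(ckpt.keys()))  # e.g. model / optimizer state, epoch counters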

For the ScanRefer dataset, please refer to MVT_ScanRefer.

Citation

@inproceedings{huang2022multi,
  title={Multi-View Transformer for 3D Visual Grounding},
  author={Huang, Shijia and Chen, Yilun and Jia, Jiaya and Wang, Liwei},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={15524--15533},
  year={2022}
}

Credits

This project is built on top of the following repository: