EDA: Explicit Text-Decoupling and Dense Alignment for 3D Visual Grounding (CVPR 2023)

By Yanmin Wu, Xinhua Cheng, Renrui Zhang, Zesen Cheng, Jian Zhang*
This repo is the official implementation of "EDA: Explicit Text-Decoupling and Dense Alignment for 3D Visual Grounding". CVPR 2023 | arXiv | Code

<figure> <p align="center" > <img src='./data/fig1.png' width=700 alt="Figure 1"/> </p> </figure>

0. Installation
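
No step-by-step installation is given in this snapshot; the sketch below shows one plausible environment setup. The Python version, the repository URL, and the presence of a `requirements.txt` are assumptions to be checked against the released code.

```bash
# Sketch only: the environment name, Python version, repository URL, and
# requirements.txt are assumptions, not verified against this repository.
conda create -n eda python=3.8 -y
conda activate eda

# Install a CUDA-enabled PyTorch build matching your driver.
pip install torch torchvision

# Fetch the code and install the remaining dependencies.
git clone https://github.com/yanmin-wu/EDA.git
cd EDA
pip install -r requirements.txt
```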

1. [TODO] Quick visualization demo

2. Data preparation

The final required files are as follows:

```
[DATA_ROOT]
├── [1] train_v3scans.pkl        # Packaged ScanNet training set
├── [2] val_v3scans.pkl          # Packaged ScanNet validation set
├── [3] ScanRefer/               # ScanRefer utterance data
│   ├── ScanRefer_filtered_train.json
│   ├── ScanRefer_filtered_val.json
│   └── ...
├── [4] ReferIt3D/               # NR3D/SR3D utterance data
│   ├── nr3d.csv
│   ├── sr3d.csv
│   └── ...
├── [5] group_free_pred_bboxes/  # Detected boxes (optional)
├── [6] gf_detector_l6o256.pth   # PointNet++ checkpoint (optional)
├── [7] roberta-base/            # RoBERTa pretrained language model
└── [8] checkpoints/             # EDA pretrained models
```
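
As one example of filling in this layout, the RoBERTa weights under [7] can be pulled from the Hugging Face Hub. The sketch below assumes `git-lfs` is installed and that `[DATA_ROOT]` is your chosen data directory; it is illustrative, not the repository's official download path.

```bash
# Sketch: place the roberta-base weights into the layout above.
# Assumes git-lfs is installed; [DATA_ROOT] is a placeholder.
cd [DATA_ROOT]
git lfs install
git clone https://huggingface.co/roberta-base
```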

3. Models

| Dataset | mAP@0.25 | mAP@0.5 | Model | Log (train) | Log (test) |
| --- | --- | --- | --- | --- | --- |
| ScanRefer | 54.59 | 42.26 | OneDrive\* | 54_59.txt<sup>1</sup> / 54_44.txt<sup>2</sup> | log.txt |
| ScanRefer (Single-Stage) | 53.83 | 41.70 | OneDrive | 53_83.txt<sup>1</sup> / 53_47.txt<sup>2</sup> | log.txt |
| SR3D | 68.1 | - | OneDrive | 68_1.txt<sup>1</sup> / 67_6.txt<sup>2</sup> | log.txt |
| NR3D | 52.1 | - | OneDrive | 52_1.txt<sup>1</sup> / 54_7.txt<sup>2</sup> | log.txt |

\*: This model is also used to evaluate the new task of grounding without object names, achieving 26.5% acc@0.25 and 21.6% acc@0.5.
<sup>1</sup>: Log of the performance reported in the paper.
<sup>2</sup>: Log of the performance obtained by retraining the model with this open-sourced repository.
Note: For the overall performance, please refer to issue #3.

4. Training
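
The training recipe is not spelled out in this snapshot; the command below only sketches a typical distributed launch. The script name `train_dist_mod.py` and all flags are assumptions to be checked against the scripts shipped with the repository.

```bash
# Sketch only: the script name and every flag below are assumptions,
# not verified entry points of this repository.
python -m torch.distributed.launch --nproc_per_node 4 \
    train_dist_mod.py \
    --dataset scanrefer \
    --data_root [DATA_ROOT] \
    --checkpoint_dir [DATA_ROOT]/checkpoints
```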

5. Evaluation
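
Evaluation details are likewise omitted here. A common pattern is to reuse the training entry point with a trained checkpoint and an evaluation flag, so the sketch below is illustrative only; the script name, flags, and checkpoint path are assumptions.

```bash
# Sketch only: flags and the checkpoint path are illustrative assumptions.
python -m torch.distributed.launch --nproc_per_node 1 \
    train_dist_mod.py \
    --dataset scanrefer \
    --data_root [DATA_ROOT] \
    --checkpoint_path [DATA_ROOT]/checkpoints/[MODEL].pth \
    --eval
```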

6. Acknowledgements

We are grateful to BUTD-DETR, GroupFree, ScanRefer, and SceneGraphParser.

7. Citation

If you find our work useful in your research, please consider citing:

```bibtex
@inproceedings{wu2022eda,
  title={EDA: Explicit Text-Decoupling and Dense Alignment for 3D Visual Grounding},
  author={Wu, Yanmin and Cheng, Xinhua and Zhang, Renrui and Cheng, Zesen and Zhang, Jian},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2023}
}
```

8. Contact

If you have any questions about this project, please feel free to contact Yanmin Wu: wuyanminmax[AT]gmail.com