
Situational Awareness Matters in 3D Vision Language Reasoning

<a href="https://yunzeman.github.io/" style="color:blue;">Yunze Man</a> · <a href="https://cs.illinois.edu/about/people/department-faculty/lgui" style="color:blue;">Liang-Yan Gui</a> · <a href="https://yxw.web.illinois.edu/" style="color:blue;">Yu-Xiong Wang</a>

[CVPR 2024] [Project Page] [arXiv] [pdf] [BibTeX]


This repository contains the official PyTorch implementation of the paper "Situational Awareness Matters in 3D Vision Language Reasoning" (CVPR 2024). The paper is available on arXiv, and the project page is online here.

About

<img src="assets/SIG3D.png" width="100%"/> Previous methods perform direct 3D vision language reasoning without modeling the situation of an embodied agent in the 3D environment. Our method, SIG3D, grounds the situational description in the 3D space, and then re-encodes the visual tokens from the agent's intended perspective before vision-language fusion, resulting in a more comprehensive and generalized 3D vision language (3DVL) representation and reasoning pipeline.
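To make the re-encoding idea concrete, below is a minimal, hypothetical PyTorch sketch of the pipeline described above: a situation-text embedding predicts the agent's position and orientation, and visual tokens are then augmented with coordinates expressed in the agent's frame before vision-language fusion. All module, function, and tensor names here are illustrative assumptions for exposition and do not mirror the actual SIG3D code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def quat_rotate(q, v):
    """Rotate points v (B, N, 3) by unit quaternions q (B, 4), w-first convention."""
    w, q_vec = q[:, :1], q[:, 1:]            # (B, 1), (B, 3)
    q_vec = q_vec.unsqueeze(1).expand_as(v)  # (B, N, 3)
    t = 2.0 * torch.cross(q_vec, v, dim=-1)
    return v + w.unsqueeze(-1) * t + torch.cross(q_vec, t, dim=-1)


class SituationalReencoder(nn.Module):
    """Hypothetical sketch: ground the situation text to an agent pose,
    then re-encode visual tokens with agent-relative coordinates."""

    def __init__(self, dim=256):
        super().__init__()
        self.pos_head = nn.Linear(dim, 3)        # agent position (x, y, z)
        self.rot_head = nn.Linear(dim, 4)        # agent orientation (quaternion)
        self.reencode = nn.Linear(dim + 3, dim)  # inject agent-frame coordinates

    def forward(self, text_emb, vis_tokens, vis_xyz):
        # text_emb: (B, dim) pooled situation embedding
        # vis_tokens: (B, N, dim); vis_xyz: (B, N, 3) token centers in world frame
        pos = self.pos_head(text_emb)                            # (B, 3)
        quat = F.normalize(self.rot_head(text_emb), dim=-1)      # unit quaternion
        rel_xyz = quat_rotate(quat, vis_xyz - pos.unsqueeze(1))  # agent frame
        tokens = self.reencode(torch.cat([vis_tokens, rel_xyz], dim=-1))
        return tokens, pos, quat
```

The re-encoded tokens (along with the predicted pose, which can be supervised when ground-truth situation annotations are available) would then feed into whatever vision-language fusion module follows.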

Environment Setup and Dataset Preparation

Please install the required packages and dependencies listed in environment.yml.
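For example, if you use conda, the environment can typically be created with `conda env create -f environment.yml` and then activated with `conda activate <env-name>`, where the environment name is the one defined at the top of environment.yml.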

In addition,

Finally, please download the ScanNet dataset from the official website and follow the instructions here to preprocess it, producing RGB video frames and point clouds for each ScanNet scene.

BibTeX

If you use our work in your research, please cite our publication:

@inproceedings{man2024situation3d,
  title={Situational Awareness Matters in 3D Vision Language Reasoning},
  author={Man, Yunze and Gui, Liang-Yan and Wang, Yu-Xiong},
  booktitle={CVPR},
  year={2024}
}

Acknowledgements

This repo is built on top of the fantastic work SQA3D, ScanQA, and 3D-LLM. We thank the authors for their great work and for open-sourcing their codebases.