SNARE Dataset

SNARE dataset and code for the MATCH and LaGOR models.

Paper and Citation

Language Grounding with 3D Objects

@article{snare,
  title={Language Grounding with {3D} Objects},
  author={Jesse Thomason and Mohit Shridhar and Yonatan Bisk and Chris Paxton and Luke Zettlemoyer},
  journal={arXiv},
  year={2021},
  url={https://arxiv.org/abs/2107.12514}
}

Installation

Clone

$ git clone https://github.com/snaredataset/snare.git

$ virtualenv -p $(which python3) --system-site-packages snare_env # or your preferred virtual environment tool
$ source snare_env/bin/activate

$ pip install --upgrade pip
$ pip install -r requirements.txt

Edit root_dir in cfgs/train.yaml to reflect your working directory.
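
For example, assuming root_dir is a top-level key in cfgs/train.yaml (a sketch; check the config for its exact layout), you can point it at the current checkout with:

$ sed -i 's|^root_dir:.*|root_dir: '"$PWD"'|' cfgs/train.yaml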

Download Data and Checkpoints

Download pre-extracted image features, language features, and pre-trained checkpoints from here and put them in the data/ folder.
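
After downloading, data/ should contain the two feature archives named in the Preprocessing section below, plus the pre-trained checkpoints (exact checkpoint filenames depend on the download):

$ ls -1 data/
langfeat-512-clipViT32.json.gz
shapenet-clipViT32-frames.json.gz
...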

Usage

Zero-shot CLIP Classifier

$ python train.py train.model=zero_shot_cls train.aggregator.type=maxpool 

MATCH

$ python train.py train.model=single_cls train.aggregator.type=maxpool 

LaGOR

$ python train.py train.model=rotator train.aggregator.type=two_random_index train.lr=5e-5 train.rotator.pretrained_cls=<path_to_pretrained_single_cls_ckpt>
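
For example, with a hypothetical checkpoint path from an earlier MATCH run (substitute the real path produced by your own training):

$ python train.py train.model=rotator train.aggregator.type=two_random_index train.lr=5e-5 train.rotator.pretrained_cls=checkpoints/match/best.ckpt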

Scripts

Run scripts/train_classifiers.sh and scripts/train_rotators.sh to reproduce the results from the paper.
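
For example, from the repository root:

$ bash scripts/train_classifiers.sh
$ bash scripts/train_rotators.sh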

To train the rotators, edit scripts/train_rotators.sh and set PRETRAINED_CLS to the path of the checkpoint you wish to use to initialize the rotator:

PRETRAINED_CLS="<root_path>/clip-single_cls-random_index/checkpoints/<ckpt_name>.ckpt"

Preprocessing

If you want to extract CLIP vision and language features from raw images:

  1. Download models-screenshot.zip from ShapeNetSem and extract it inside ./data/.
  2. Edit and run python scripts/extract_clip_features.py to save shapenet-clipViT32-frames.json.gz and langfeat-512-clipViT32.json.gz (see the sketch after this list).
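
A minimal sketch of those two steps (assuming the zip unpacks under ./data/ and the script's default paths point there; otherwise adjust the paths inside the script):

$ unzip data/models-screenshot.zip -d data/
$ python scripts/extract_clip_features.py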

Leaderboard

Please send your ...test.json prediction results to Mohit Shridhar. We will get back to you as soon as possible.

Rankings:

| Rank | Model | Date | All | Visual | Blind |
|------|-------|------|-----|--------|-------|
| 1 | DA4LG (Anonymous) | 5 Feb 2024 | 81.9 | 88.5 | 75.0 |
| 2 | MAGiC (Mitra et al.) | 8 Jun 2023 | 81.7 | 87.7 | 75.4 |
| 3 | DA4LG (Anonymous) | 27 Jan 2024 | 80.9 | 87.7 | 73.7 |
| 4 | VLG (Corona et al.) | 15 Mar 2022 | 79.0 | 86.0 | 71.7 |
| 5 | LOCKET (Anonymous) | 14 Oct 2022 | 79.0 | 86.1 | 71.5 |
| 6 | VLG (Corona et al.) | 13 Nov 2021 | 78.7 | 85.8 | 71.3 |
| 7 | LOCKET (Anonymous) | 23 Oct 2022 | 77.7 | 85.5 | 69.5 |
| 8 | LaGOR (Thomason et al.) | 15 Sep 2021 | 77.0 | 84.3 | 69.4 |
| 9 | MATCH (Thomason et al.) | 15 Sep 2021 | 76.4 | 83.7 | 68.7 |