πŸ” FIND: Interfacing Foundation Models' Embeddings

:grapes: [Read our arXiv Paper] Β  :apple: [Try our Demo] Β  :orange: [Walk through Project Page]

We introduce FIND, a generalized interface for INterfacing Foundation models' embeDDings in an interleaved shared embedding space. Below is a brief introduction to the generic and interleaved tasks it supports!

by Xueyan Zou, Linjie Li, Jianfeng Wang, Jianwei Yang, Mingyu Ding, Junyi Wei, Zhengyuan Yang, Feng Li, Hao Zhang, Shilong Liu, Arul Aravinthan, Yong Jae Lee*, Lijuan Wang*

\* Equal Advising

FIND design
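
For intuition, the sketch below is a hypothetical toy example (not the actual FIND implementation; all dimensions and module names are made up) of the core idea: embeddings produced by different frozen foundation models are projected into one shared space, where interleaved vision and language embeddings can be compared directly for tasks such as grounding and retrieval.

<pre>
# Illustrative sketch only; dimensions and module names are hypothetical,
# not the actual FIND architecture.
import torch
import torch.nn as nn

vision_dim, text_dim, shared_dim = 1024, 4096, 512

# Lightweight interface layers that map frozen foundation-model embeddings
# into one shared, interleaved embedding space.
vision_proj = nn.Linear(vision_dim, shared_dim)
text_proj = nn.Linear(text_dim, shared_dim)

image_emb = torch.randn(1, 196, vision_dim)  # e.g., patch embeddings from a vision encoder
text_emb = torch.randn(1, 12, text_dim)      # e.g., token embeddings from an LLM

shared_image = nn.functional.normalize(vision_proj(image_emb), dim=-1)
shared_text = nn.functional.normalize(text_proj(text_emb), dim=-1)

# In the shared space, grounding and retrieval reduce to similarity
# between interleaved visual and language embeddings.
similarity = shared_image @ shared_text.transpose(1, 2)  # shape (1, 196, 12)
print(similarity.shape)
</pre>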

:rocket: Updates

:bookmark_tabs: Catalog

:hammer: Getting Started

<details open>
<summary>Install Conda</summary>
<pre>
# (optional) install oh-my-zsh
sh -c "$(curl -fsSL https://raw.github.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"
# install Miniconda and initialize it for bash and zsh
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda.sh
bash ~/miniconda.sh -b -p $HOME/miniconda
eval "$($HOME/miniconda/bin/conda shell.bash hook)"
conda init
conda init zsh
</pre>
</details>

Build Environment

<pre>
conda create --name find python=3.10
conda activate find
conda install -c conda-forge mpi4py
conda install -c conda-forge cudatoolkit=11.7
conda install -c nvidia/label/cuda-11.7.0 cuda-toolkit
pip install torch==2.0.1+cu117 torchvision==0.15.2+cu117 torchaudio==2.0.2+cu117 -f https://download.pytorch.org/whl/torch_stable.html
pip install -r assets/requirements/requirements.txt
pip install -r assets/requirements/requirements_custom.txt
# build the custom vision-encoder ops
cd modeling/vision/encoder/ops
sh make.sh
cd ../../../..  # return to the repository root
</pre>
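
After the build finishes, a quick sanity check (a minimal, generic sketch, not part of the FIND codebase) confirms that the pinned PyTorch build sees CUDA:

<pre>
# Generic environment check; expects the versions pinned above.
import torch, torchvision

print("torch:", torch.__version__)              # expected: 2.0.1+cu117
print("torchvision:", torchvision.__version__)  # expected: 0.15.2+cu117
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
</pre>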

Build Dataset

Explore the dataset on πŸ€— Hugging Face: FIND-Bench.

Download the raw annotation files:

| entity_train2017.json | entity_val2017.json | entity_val2017_long.json |
| :---: | :---: | :---: |
| download | download | download |
<details open>
<summary>Data Structure</summary>
<pre>
data/
└── coco/
    β”œβ”€β”€ annotations/
    β”‚   β”œβ”€β”€ entity_train2017.json
    β”‚   β”œβ”€β”€ *entity_val2017.json*
    β”‚   └── *entity_val2017_long.json*
    β”œβ”€β”€ panoptic_semseg_train2017/
    β”œβ”€β”€ panoptic_semseg_val2017/
    β”œβ”€β”€ panoptic_train2017/
    β”œβ”€β”€ panoptic_val2017/
    β”œβ”€β”€ train2017/
    └── *val2017/*
</pre>
</details>
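
To verify the annotations landed in the expected locations, here is a small, schema-agnostic check (a sketch only; it does not assume any particular fields inside the JSON files):

<pre>
# Minimal sanity check for the FIND-Bench annotation files; the paths follow
# the data structure above. Field names inside the JSON are not assumed.
import json
from pathlib import Path

ann_dir = Path("data/coco/annotations")
for name in ["entity_train2017.json", "entity_val2017.json", "entity_val2017_long.json"]:
    path = ann_dir / name
    if not path.exists():
        print(f"missing: {path}")
        continue
    with open(path) as f:
        data = json.load(f)
    # Print only the top-level shape so the check stays schema-agnostic.
    if isinstance(data, dict):
        print(name, "-> dict with keys:", list(data.keys())[:5])
    else:
        print(name, "-> list with", len(data), "entries")
</pre>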

Run Demo

<details open>
<summary>Command</summary>
<pre>
python3 -m demo.find.demo_interleave_llama evaluate \
    --conf_files configs/find/focall_llama_lang.yaml \
    --overrides \
    MODEL.DECODER.HIDDEN_DIM 512 \
    MODEL.ENCODER.CONVS_DIM 512 \
    MODEL.ENCODER.MASK_DIM 512 \
    VLP.INPUT.SHORTEST_EDGE True \
    VLP.INPUT.MIN_SIZE_TEST 480 \
    VLP.INPUT.MAX_SIZE_TEST 640 \
    VLP.TEST.BATCH_SIZE_TOTAL 8 \
    RESUME_FROM /pth/to/grin_focall_llama_x640.pt \
    FP16 True \
    FAKE_UPDATE True
</pre>
</details>

Run Evaluation

<details open>
<summary>Single-GPU</summary>
<pre>
python entry.py evaluate \
    --conf_files configs/find/focall_llama_lang.yaml \
    --overrides \
    FP16 True \
    MODEL.DECODER.MASK.ENABLED True \
    MODEL.DECODER.CAPTION.ENABLED True \
    MODEL.DECODER.SPATIAL.ENABLED True \
    MODEL.DECODER.RETRIEVAL.ENABLED True \
    MODEL.DECODER.GROUNDING.ENABLED True \
    MODEL.DECODER.INTERLEAVE.ENABLED True \
    MODEL.DECODER.INTERLEAVE.VISUAL_PROB 0.5 \
    COCO.TRAIN.BATCH_SIZE_TOTAL 1 \
    COCO.TRAIN.BATCH_SIZE_PER_GPU 1 \
    COCO.TEST.BATCH_SIZE_TOTAL 1 \
    REF.TEST.BATCH_SIZE_TOTAL 1 \
    VLP.TEST.BATCH_SIZE_TOTAL 1 \
    VLP.INPUT.SHORTEST_EDGE True \
    VLP.INPUT.MIN_SIZE_TEST 512 \
    VLP.INPUT.MAX_SIZE_TEST 720 \
    COCO.INPUT.MIN_SIZE_TEST 640 \
    COCO.INPUT.MAX_SIZE_TEST 1024 \
    WEIGHT True \
    RESUME_FROM /pth/to/grin_focall_llama_x640.pt
</pre>
</details>

<details>
<summary>Multi-GPU</summary>
<pre>
CUDA_VISIBLE_DEVICES=4,5,6,7 mpirun -n 4 python entry.py evaluate \
    --conf_files configs/find/focall_llama_lang.yaml \
    --overrides \
    FP16 True \
    MODEL.DECODER.MASK.ENABLED True \
    MODEL.DECODER.CAPTION.ENABLED True \
    MODEL.DECODER.SPATIAL.ENABLED True \
    MODEL.DECODER.RETRIEVAL.ENABLED True \
    MODEL.DECODER.GROUNDING.ENABLED True \
    MODEL.DECODER.INTERLEAVE.ENABLED True \
    MODEL.DECODER.INTERLEAVE.VISUAL_PROB 0.5 \
    COCO.TRAIN.BATCH_SIZE_TOTAL 1 \
    COCO.TRAIN.BATCH_SIZE_PER_GPU 1 \
    COCO.TEST.BATCH_SIZE_TOTAL 4 \
    REF.TEST.BATCH_SIZE_TOTAL 4 \
    VLP.TEST.BATCH_SIZE_TOTAL 4 \
    VLP.INPUT.SHORTEST_EDGE True \
    VLP.INPUT.MIN_SIZE_TEST 512 \
    VLP.INPUT.MAX_SIZE_TEST 720 \
    COCO.INPUT.MIN_SIZE_TEST 640 \
    COCO.INPUT.MAX_SIZE_TEST 1024 \
    WEIGHT True \
    RESUME_FROM /pth/to/grin_focall_llama_x640.pt
</pre>
</details>

Run Training

β›³ Interleave Checkpoint

<table>
  <tr>
    <th rowspan="2">Model</th><th rowspan="2">Checkpoint</th>
    <th colspan="4">COCO-Entity</th><th colspan="4">COCO-Entity-Long</th>
  </tr>
  <tr>
    <th>cIoU</th><th>AP50</th><th>IR@5</th><th>IR@10</th>
    <th>cIoU</th><th>AP50</th><th>IR@5</th><th>IR@10</th>
  </tr>
  <tr><td>ImageBIND (H)</td><td>-</td><td>-</td><td>-</td><td>51.4</td><td>61.3</td><td>-</td><td>-</td><td>58.7</td><td>68.9</td></tr>
  <tr><td>Grounding-SAM (H)</td><td>-</td><td>58.9</td><td>63.2</td><td>-</td><td>-</td><td>56.1</td><td>62.5</td><td>-</td><td>-</td></tr>
  <tr><td>Focal-T</td><td>ckpt</td><td>74.9</td><td>79.5</td><td>43.5</td><td>57.1</td><td>73.2</td><td>77.7</td><td>49.4</td><td>63.9</td></tr>
  <tr><td>Focal-L</td><td>ckpt</td><td>76.2</td><td>81.3</td><td>81.1</td><td>88.7</td><td>74.8</td><td>79.3</td><td>89.3</td><td>94.6</td></tr>
</table>
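
Before running evaluation, you can optionally confirm that a downloaded checkpoint deserializes; the sketch below assumes the .pt files are ordinary PyTorch checkpoints (a hedged example, not official tooling):

<pre>
# Optional: verify a downloaded checkpoint loads. This assumes the .pt file
# is a standard PyTorch checkpoint; adjust the placeholder path to your copy.
import torch

ckpt_path = "/pth/to/grin_focall_llama_x640.pt"  # same placeholder as in the commands above
state = torch.load(ckpt_path, map_location="cpu")
print(type(state))
if isinstance(state, dict):
    print("top-level entries:", len(state))
</pre>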

:framed_picture: FIND-Bench Visualization

<img width="400" alt="Screenshot 2024-08-05 at 3 50 54β€―PM" src="https://github.com/user-attachments/assets/541d5761-88f9-4797-ba07-66effcdd3e45"> <img width="400" alt="Screenshot 2024-08-05 at 3 50 46β€―PM" src="https://github.com/user-attachments/assets/dfece581-578a-4b41-9c18-d957f5868dcb">

πŸ”— Citation

If you find this repo useful for your research and applications, please cite using this BibTeX:

@misc{zou2022xdecoder,
      title={Generalized decoding for pixel, image, and language}, 
      author={Zou*, Xueyan and Dou*, Zi-Yi and Yang*, Jianwei and Gan, Zhe and Li, Linjie and Li, Chunyuan and Dai, Xiyang and Behl, Harkirat and Wang, Jianfeng and Yuan, Lu and Peng, Nanyun and Wang, Lijuan and Lee†, Yong Jae and Gao†, Jianfeng},
      publisher={CVPR},
      year={2023},
}

@misc{zou2023seem,
      title={Segment everything everywhere all at once}, 
      author={Zou*, Xueyan and Yang*, Jianwei and Zhang*, Hao and Li*, Feng and Li, Linjie and Wang, Jianfeng and Wang, Lijuan and Gao†, Jianfeng and Lee†, Yong Jae},
      publisher={NeurIPS},
      year={2023},
}

@misc{zou2024find,
      title={Interfacing Foundation Models' Embeddings}, 
      author={Zou, Xueyan and Li, Linjie and Wang, Jianfeng and Yang, Jianwei and Ding, Mingyu and Yang, Zhengyuan and Li, Feng and Zhang, Hao and Liu, Shilong and Aravinthan, Arul and Lee†, Yong Jae and Wang†, Lijuan},
      publisher={arXiv preprint arXiv:2312.07532},
      year={2024},
}

πŸ“š Acknowledgement

This research project has benefitted from the Microsoft Accelerate Foundation Models Research (AFMR) grant program.