FIND: Interfacing Foundation Models' Embeddings
:grapes: [Read our arXiv Paper] :apple: [Try our Demo] :orange: [Walk through Project Page]
We introduce FIND, which can INterface Foundation models' embeDDings in an interleaved shared embedding space. Below is a brief introduction to the generic and interleaved tasks it supports!
by Xueyan Zou, Linjie Li, Jianfeng Wang, Jianwei Yang, Mingyu Ding, Junyi Wei, Zhengyuan Yang, Feng Li, Hao Zhang, Shilong Liu, Arul Aravinthan, Yong Jae Lee*, Lijuan Wang*
\* Equal Advising
:rocket: Updates
- [2024.8.20] We have released an updated arXiv version together with a comprehensive user guide on GitHub!
- [2023.12.3] We have a poster session at NeurIPS23 for SEEM; feel free to visit us during 5:00-7:00 pm (CT)!
- [2023.12.2] We have released all the training, evaluation, and demo code!
:bookmark_tabs: Catalog
- Demo Code
- Model Checkpoint
- Comprehensive User Guide
- Dataset
- Training Code
- Evaluation Code
:hammer: Getting Started
<details open>
<summary>Install Conda</summary>
<pre>
sh -c "$(curl -fsSL https://raw.github.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda.sh
bash ~/miniconda.sh -b -p $HOME/miniconda
eval "$($HOME/miniconda/bin/conda shell.bash hook)"
conda init
conda init zsh
</pre>
</details>

Build Environment
<pre>
conda create --name find python=3.10
conda activate find
conda install -c conda-forge mpi4py
conda install -c conda-forge cudatoolkit=11.7
conda install -c nvidia/label/cuda-11.7.0 cuda-toolkit
pip install torch==2.0.1+cu117 torchvision==0.15.2+cu117 torchaudio==2.0.2+cu117 -f https://download.pytorch.org/whl/torch_stable.html
pip install -r assets/requirements/requirements.txt
pip install -r assets/requirements/requirements_custom.txt
cd modeling/vision/encoder/ops
sh make.sh
cd ../../../..  # back to the repository root
</pre>
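After the install finishes, a minimal sanity check (a sketch, not part of the official setup) is to confirm that the pinned CUDA-enabled PyTorch build is the one Python actually imports:

```python
import torch

# Confirm the pinned build and that CUDA 11.7 is visible to PyTorch.
print("torch version:", torch.__version__)        # expected: 2.0.1+cu117
print("cuda available:", torch.cuda.is_available())
print("cuda runtime:", torch.version.cuda)        # expected: 11.7
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
```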
Build Dataset
Explore the benchmark on 🤗 Hugging Face: FIND-Bench.
Download the raw files:
| entity_train2017.json | entity_val2017.json | entity_val2017_long.json |
| --- | --- | --- |
| download | download | download |
- To run the demo, the files/folders marked between asterisks (\* \*) are required; please download the COCO dataset and the FIND-Bench annotations entity_val2017.json and entity_val2017_long.json (a quick sanity-check sketch follows this list).
- To run evaluation, please additionally prepare panoptic_val2017 following Mask2Former.
- To run training, please additionally download and prepare all other files.
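As a quick sanity check after downloading (a minimal sketch, not from the repo; the path and field layout are assumptions), you can load one of the FIND-Bench annotation files and print its top-level structure:

```python
import json
from pathlib import Path

# Placeholder path: adjust to wherever entity_val2017.json was downloaded.
ann_path = Path("datasets/coco/annotations/entity_val2017.json")

with ann_path.open() as f:
    data = json.load(f)

# The file is assumed to be either a COCO-style dict of lists or a flat list of records.
if isinstance(data, dict):
    for key, value in data.items():
        size = len(value) if isinstance(value, (list, dict)) else value
        print(f"{key}: {size}")
else:
    print(f"{len(data)} records; example keys: {list(data[0].keys())[:5]}")
```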
Run Demo
<details open>
<summary>Command</summary>
<pre>
python3 -m demo.find.demo_interleave_llama evaluate \
    --conf_files configs/find/focall_llama_lang.yaml \
    --overrides \
    MODEL.DECODER.HIDDEN_DIM 512 \
    MODEL.ENCODER.CONVS_DIM 512 \
    MODEL.ENCODER.MASK_DIM 512 \
    VLP.INPUT.SHORTEST_EDGE True \
    VLP.INPUT.MIN_SIZE_TEST 480 \
    VLP.INPUT.MAX_SIZE_TEST 640 \
    VLP.TEST.BATCH_SIZE_TOTAL 8 \
    RESUME_FROM /pth/to/grin_focall_llama_x640.pt \
    FP16 True \
    FAKE_UPDATE True
</pre>
</details>

Run Evaluation
<details open>
<summary>Single-GPU</summary>
<pre>
python entry.py evaluate \
    --conf_files configs/find/focall_llama_lang.yaml \
    --overrides \
    FP16 True \
    MODEL.DECODER.MASK.ENABLED True \
    MODEL.DECODER.CAPTION.ENABLED True \
    MODEL.DECODER.SPATIAL.ENABLED True \
    MODEL.DECODER.RETRIEVAL.ENABLED True \
    MODEL.DECODER.GROUNDING.ENABLED True \
    MODEL.DECODER.INTERLEAVE.ENABLED True \
    MODEL.DECODER.INTERLEAVE.VISUAL_PROB 0.5 \
    COCO.TRAIN.BATCH_SIZE_TOTAL 1 \
    COCO.TRAIN.BATCH_SIZE_PER_GPU 1 \
    COCO.TEST.BATCH_SIZE_TOTAL 1 \
    REF.TEST.BATCH_SIZE_TOTAL 1 \
    VLP.TEST.BATCH_SIZE_TOTAL 1 \
    VLP.INPUT.SHORTEST_EDGE True \
    VLP.INPUT.MIN_SIZE_TEST 512 \
    VLP.INPUT.MAX_SIZE_TEST 720 \
    COCO.INPUT.MIN_SIZE_TEST 640 \
    COCO.INPUT.MAX_SIZE_TEST 1024 \
    WEIGHT True \
    RESUME_FROM /pth/to/grin_focall_llama_x640.pt
</pre>
</details>

<details close>
<summary>Multi-GPU</summary>
<pre>
CUDA_VISIBLE_DEVICES=4,5,6,7 mpirun -n 4 python entry.py evaluate \
    --conf_files configs/find/focall_llama_lang.yaml \
    --overrides \
    FP16 True \
    MODEL.DECODER.MASK.ENABLED True \
    MODEL.DECODER.CAPTION.ENABLED True \
    MODEL.DECODER.SPATIAL.ENABLED True \
    MODEL.DECODER.RETRIEVAL.ENABLED True \
    MODEL.DECODER.GROUNDING.ENABLED True \
    MODEL.DECODER.INTERLEAVE.ENABLED True \
    MODEL.DECODER.INTERLEAVE.VISUAL_PROB 0.5 \
    COCO.TRAIN.BATCH_SIZE_TOTAL 1 \
    COCO.TRAIN.BATCH_SIZE_PER_GPU 1 \
    COCO.TEST.BATCH_SIZE_TOTAL 4 \
    REF.TEST.BATCH_SIZE_TOTAL 4 \
    VLP.TEST.BATCH_SIZE_TOTAL 4 \
    VLP.INPUT.SHORTEST_EDGE True \
    VLP.INPUT.MIN_SIZE_TEST 512 \
    VLP.INPUT.MAX_SIZE_TEST 720 \
    COCO.INPUT.MIN_SIZE_TEST 640 \
    COCO.INPUT.MAX_SIZE_TEST 1024 \
    WEIGHT True \
    RESUME_FROM /pth/to/grin_focall_llama_x640.pt
</pre>
</details>

Run Training
Interleave Checkpoint
| Model | Checkpoint | cIoU (COCO-Entity) | AP50 (COCO-Entity) | IR@5 (COCO-Entity) | IR@10 (COCO-Entity) | cIoU (COCO-Entity-Long) | AP50 (COCO-Entity-Long) | IR@5 (COCO-Entity-Long) | IR@10 (COCO-Entity-Long) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ImageBIND (H) | - | - | - | 51.4 | 61.3 | - | - | 58.7 | 68.9 |
| Grounding-SAM (H) | - | 58.9 | 63.2 | - | - | 56.1 | 62.5 | - | - |
| Focal-T | ckpt | 74.9 | 79.5 | 43.5 | 57.1 | 73.2 | 77.7 | 49.4 | 63.9 |
| Focal-L | ckpt | 76.2 | 81.3 | 81.1 | 88.7 | 74.8 | 79.3 | 89.3 | 94.6 |
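To verify a downloaded checkpoint before pointing RESUME_FROM at it, a minimal inspection sketch (the path is a placeholder, and the "model" wrapping key is an assumption) looks like this:

```python
import torch

# Placeholder path: point this at the downloaded FIND checkpoint.
ckpt_path = "/pth/to/grin_focall_llama_x640.pt"

# Load on CPU so no GPU is needed just to inspect the file.
state = torch.load(ckpt_path, map_location="cpu")

# The checkpoint is assumed to be either a plain state_dict or a dict wrapping one under "model".
state_dict = state.get("model", state) if isinstance(state, dict) else state
print(f"{len(state_dict)} entries")
for name in list(state_dict)[:10]:
    tensor = state_dict[name]
    shape = tuple(tensor.shape) if hasattr(tensor, "shape") else type(tensor).__name__
    print(name, shape)
```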
:framed_picture: FIND-Bench Visualization
<img width="400" alt="Screenshot 2024-08-05 at 3 50 54β―PM" src="https://github.com/user-attachments/assets/541d5761-88f9-4797-ba07-66effcdd3e45"> <img width="400" alt="Screenshot 2024-08-05 at 3 50 46β―PM" src="https://github.com/user-attachments/assets/dfece581-578a-4b41-9c18-d957f5868dcb">π Citation
If you find this repo useful for your research and applications, please cite using this BibTeX:
@misc{zou2022xdecoder,
title={Generalized decoding for pixel, image, and language},
author={Zou*, Xueyan and Dou*, Zi-Yi and Yang*, Jianwei and Gan, Zhe and Li, Linjie and Li, Chunyuan and Dai, Xiyang and Behl, Harkirat and Wang, Jianfeng and Yuan, Lu and Peng, Nanyun and Wang, Lijuan and Lee†, Yong Jae and Gao†, Jianfeng},
publisher={CVPR},
year={2023},
}
@misc{zou2023seem,
title={Segment everything everywhere all at once},
author={Zou*, Xueyan and Yang*, Jianwei and Zhang*, Hao and Li*, Feng and Li, Linjie and Wang, Jianfeng and Wang, Lijuan and Gao†, Jianfeng and Lee†, Yong Jae},
publisher={NeurIPS},
year={2023},
}
@misc{zou2024find,
title={Interfacing Foundation Models' Embeddings},
author={Zou, Xueyan and Li, Linjie and Wang, Jianfeng and Yang, Jianwei and Ding, Mingyu and Yang, Zhengyuan and Li, Feng and Zhang, Hao and Liu, Shilong and Aravinthan, Arul and Lee†, Yong Jae and Wang†, Lijuan},
publisher={arXiv preprint arXiv:2312.07532},
year={2024},
}
Acknowledgement
This research project has benefitted from the Microsoft Accelerate Foundation Models Research (AFMR) grant program.