<p align="center">
  <h4 align="center"><font color="#966661">InstructDET</font>: Diversifying Referring Object Detection with Generalized Instructions</h4>
  <p align="center"><img src="./assets/teaser.png" alt="teaser" width="600px" /></p>
</p>

<font color="#966661">InstructDET</font> is a data-centric method for referring object detection (ROD) that localizes target objects based on user instructions.
Compared to visual grounding, our ROD aims to execute diversified user detection instructions. For images with object bounding boxes, we use foundation models to produce human-like object detection instructions. By training a conventional ROD model on these abundant instructions, we push ROD towards practical usage from a data-centric perspective.
## Release
- [2024/01/30] The InDET dataset and instruction generation code are released.
- [2024/01/16] Our InstructDET paper has been accepted by ICLR 2024.
## Demo Video
## Examples
Visual comparison of our diversified referring object detection (DROD) model with UNINEXT and Grounding-DINO.

<img src="./assets/DROD_comparison_1.png" alt="DROD_1" style="zoom: 25%;" />
<img src="./assets/DROD_comparison_2.png" alt="DROD_2" style="zoom: 25%;" />
## Download InDET
The annotations of our InDET dataset are in refcoco format and can be downloaded from Google Drive or Baidu Pan. The images in InDET come from RefCOCO/g/+ (whose images come from MSCOCO), Flickr30K Entities, and Objects365 v2 (from which we sample 6000 images). Please follow their instructions to download the images and put them under a base data directory with the following structure.
```
├─ indet_images
│  ├─ coco
│  │  └─ train2014
│  ├─ flickr30k_entities
│  └─ objects365v2
│     ├─ train
│     ├─ val
│     └─ test
```
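After downloading, a minimal sanity-check sketch like the following can confirm the layout above (the directory names are taken from the tree; the base path is an assumption to adjust to your own setup):

```python
import os

# Path to the indet_images directory under your base data directory (adjust as needed).
BASE = "indet_images"

# Sub-directories expected by the layout above.
EXPECTED = [
    "coco/train2014",
    "flickr30k_entities",
    "objects365v2/train",
    "objects365v2/val",
    "objects365v2/test",
]

for rel in EXPECTED:
    path = os.path.join(BASE, rel)
    print(("ok     " if os.path.isdir(path) else "MISSING"), path)
```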
## Instruction Diversify Inference
Using the code in this repository, one can reproduce instructions similar to those in our InDET dataset. Our method does not modify images or bounding boxes; it only expands the text instructions. The initial input should be images and the bounding boxes of the objects of interest. Multi-GPU inference is not supported for now; for generation efficiency, you can split the dataset into batches by specifying `startidx` and `stride`.
### Install
Environment requirements: `python==3.8`, `CUDA==11.6`, `torch==1.13.1`.
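A quick sanity check of the environment (a minimal sketch, not part of the repository):

```python
import sys
import torch

# Expected per the requirements above: python==3.8, CUDA==11.6, torch==1.13.1
print("python:", sys.version.split()[0])
print("torch :", torch.__version__)
print("cuda  :", torch.version.cuda)
print("gpu   :", torch.cuda.is_available())
```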
```bash
# git clone our repository
git clone https://github.com/jyFengGoGo/InstructDet.git
cd InstructDet

# build and install module dependencies
cd modules/fastchat
pip install -e .
cd ../llava
pip install -e .
cd ../instruct_filter/CLIP
pip install -e .
pip install -r requirements.txt
```
### Foundation Model Weights
We use foundation models to generate human-like object detection instructions; the links to download the model weights are listed here. NOTE: if your machine has network access, only the LLaVA, Vicuna, and MiniGPT4 linear-layer weights need to be downloaded manually, while the others will be downloaded automatically.
| process | model | download |
|---|---|---|
| global prompt | LLaVA & Vicuna | llava-v1.5-13b, vicuna-13b-v1.3 |
| local prompt | MiniGPT4 | vicuna-13b-v0-merged, eva_clip_g, T5, linear layer, BERT |
| instruction filtering | CLIP | ViT-B/32 |
| instruction grouping | Vicuna | - |
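For the weights hosted on the Hugging Face Hub, a hedged download sketch is shown below; the repo IDs are assumptions inferred from the public model names (not taken from this repository), and the MiniGPT4-specific weights (e.g. the linear layer) still need to be fetched from the links in the table:

```python
from huggingface_hub import snapshot_download  # pip install huggingface_hub

# Assumed Hugging Face repo IDs for the models named in the table above.
WEIGHTS = {
    "llava-v1.5-13b": "liuhaotian/llava-v1.5-13b",
    "vicuna-13b-v1.3": "lmsys/vicuna-13b-v1.3",
    "clip-vit-b-32": "openai/clip-vit-base-patch32",
}

for name, repo_id in WEIGHTS.items():
    local_dir = snapshot_download(repo_id=repo_id)  # cached under ~/.cache/huggingface by default
    print(f"{name}: {local_dir}")
```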
When the model weights are ready, replace the corresponding model paths in the following config files (a sketch for double-checking the filled-in paths follows):

```
configs/instructdet.yaml
modules/minigpt4/minigpt4/configs/models/minigpt4_local.yaml  # if your machine can access the network, use minigpt4.yaml
modules/minigpt4/eval_configs/minigpt4_local_eval.yaml        # if your machine can access the network, use minigpt4_eval.yaml
```
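The exact keys differ between these configs, so the following is only a generic sketch for verifying that the path-like values you filled in actually exist on disk (it guesses which strings are paths by prefix/extension):

```python
import os
import yaml  # pip install pyyaml

def check_paths(node, prefix=""):
    """Recursively report path-like config values that do not exist on disk."""
    if isinstance(node, dict):
        for key, value in node.items():
            check_paths(value, f"{prefix}{key}.")
    elif isinstance(node, list):
        for i, value in enumerate(node):
            check_paths(value, f"{prefix}{i}.")
    elif isinstance(node, str) and (
        node.startswith(("/", "./", "~")) or node.endswith((".pth", ".bin", ".ckpt"))
    ):
        if not os.path.exists(os.path.expanduser(node)):
            print(f"missing: {prefix.rstrip('.')} -> {node}")

with open("configs/instructdet.yaml") as f:
    check_paths(yaml.safe_load(f))
```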
### Prepare Dataset
Download the images and annotations of RefCOCO/g/+ (whose images come from MSCOCO), Flickr30K Entities, and Objects365 v2 (we sample 6000 images).
### Run
- Data pre-processing (format conversion): convert refcoco / coco / flickr30k_entities annotations into JSON Lines format; please refer to refcoco2llavainput.py, o3652llavainput.py, and flickr30k2jsonlines.py.
- Instruction generation; please refer to the instructdet output format for output details.

  ```bash
  # bash scripts/run.sh {startidx} {stride}
  bash scripts/run.sh 0 100
  ```

- Post-processing (format conversion): convert the JSON Lines output back into refcoco format; please refer to tools/format_tools/jsonline2refcoco.py. A quick sketch for inspecting the JSON Lines output follows this list.
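A minimal sketch for inspecting the generated JSON Lines file before converting it back to refcoco format (the file path here is a placeholder; point it at whatever scripts/run.sh wrote out):

```python
import json

# Placeholder path: replace with the jsonline file produced by scripts/run.sh.
JSONL_PATH = "outputs/instructions.jsonl"

with open(JSONL_PATH) as f:
    for i, line in enumerate(f):
        record = json.loads(line)
        print(f"record {i}: keys = {sorted(record.keys())}")
        if i >= 4:  # only peek at the first few records
            break
```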
## Cite
```bibtex
@article{dang2023instructdet,
  title={InstructDET: Diversifying Referring Object Detection with Generalized Instructions},
  author={Dang, Ronghao and Feng, Jiangyan and Zhang, Haodong and Ge, Chongjian and Song, Lin and Gong, Lijun and Liu, Chengju and Chen, Qijun and Zhu, Feng and Zhao, Rui and Song, Yibin},
  journal={arXiv preprint arXiv:2310.05136},
  year={2023}
}
```
## Acknowledgement & License
This project makes use of LLaVA, MiniGPT-4, FastChat and CLIP. See the related subfolders for copyright and licensing details: LLaVA, MiniGPT-4, FastChat, CLIP. Thanks for their wonderful work.
For images from COCO, Flickr30K and Objects365, please see and follow their terms of use: MSCOCO, Flickr30K Entities, Objects365 v2.