# :sauropod: Grounding DINO
Official PyTorch implementation of "Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection", the state-of-the-art open-set object detector.
## :sun_with_face: Helpful Tutorial
- :grapes: [Read our arXiv Paper]
- :apple: [Watch our simple introduction video on YouTube]
- :rose: [Try the Colab Demo]
- :sunflower: [Try our Official Huggingface Demo]
- :maple_leaf: [Watch the Step by Step Tutorial about GroundingDINO by Roboflow AI]
- :mushroom: [GroundingDINO: Automated Dataset Annotation and Evaluation by Roboflow AI]
- :hibiscus: [Accelerate Image Annotation with SAM and GroundingDINO by Roboflow AI]
## :sparkles: Highlight Projects
- DetGPT: Detect What You Need via Reasoning
- Grounded-SAM: Marrying Grounding DINO with Segment Anything
- Grounding DINO with Stable Diffusion
- Grounding DINO with GLIGEN for Controllable Image Editing
- OpenSeeD: A Simple and Strong Openset Segmentation Model
- SEEM: Segment Everything Everywhere All at Once
- X-GPT: Conversational Visual Agent supported by X-Decoder
- GLIGEN: Open-Set Grounded Text-to-Image Generation
- LLaVA: Large Language and Vision Assistant
## :bulb: Highlight
- **Open-Set Detection.** Detect everything with language!
- **High Performance.** COCO zero-shot 52.5 AP (trained without COCO data!). COCO fine-tune 63.0 AP.
- **Flexible.** Collaboration with Stable Diffusion for image editing.
## :fire: News
- **2023/04/15**: Refer to CV in the Wild Readings if you are interested in open-set recognition!
- **2023/04/08**: We released demos combining Grounding DINO with GLIGEN for more controllable image editing.
- **2023/04/08**: We released demos combining Grounding DINO with Stable Diffusion for image editing.
- **2023/04/06**: We built a new demo, Grounded-Segment-Anything, by marrying GroundingDINO with Segment-Anything to support segmentation in GroundingDINO.
- **2023/03/28**: A YouTube video about Grounding DINO and basic object detection prompt engineering. [SkalskiP]
- **2023/03/28**: Added a demo on Hugging Face Space!
- **2023/03/27**: Support for CPU-only mode. The model can now run on machines without GPUs.
- **2023/03/25**: A demo for Grounding DINO is available at Colab. [SkalskiP]
- **2023/03/22**: Code is available now!
## :star: Explanations/Tips for Grounding DINO Inputs and Outputs
- Grounding DINO accepts an `(image, text)` pair as inputs.
- It outputs `900` (by default) object boxes. Each box has similarity scores across all input words (as shown in the figures below).
- By default, we select the boxes whose highest similarity scores exceed a `box_threshold`.
- We extract the words whose similarities are higher than the `text_threshold` as predicted labels.
- If you want to obtain objects matching specific phrases, like the `dogs` in the sentence `two dogs with a stick.`, you can select the boxes with the highest text similarities to `dogs` as the final outputs.
- Note that each word can be split into more than one token by different tokenizers, so the number of words in a sentence may not equal the number of text tokens.
- We suggest separating different category names with `.` for Grounding DINO.
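The thresholding described above can be sketched in plain Python. This is a minimal illustration with made-up similarity scores, not the library's actual implementation; the real model produces a `(num_boxes, num_tokens)` logit matrix.

```python
# Minimal sketch of the box/text thresholding described above.
# Similarity scores here are invented for illustration only.

def filter_predictions(similarities, tokens, box_threshold=0.35, text_threshold=0.25):
    """Keep a box if its best token similarity exceeds box_threshold,
    and label it with every token whose similarity exceeds text_threshold."""
    results = []
    for box_id, scores in enumerate(similarities):
        best = max(scores)
        if best <= box_threshold:
            continue  # discard low-confidence boxes
        label = " ".join(tok for tok, s in zip(tokens, scores) if s > text_threshold)
        results.append((box_id, best, label))
    return results

tokens = ["two", "dogs", "with", "a", "stick"]
similarities = [
    [0.10, 0.72, 0.05, 0.02, 0.20],  # box 0: matches "dogs"
    [0.05, 0.08, 0.04, 0.03, 0.61],  # box 1: matches "stick"
    [0.02, 0.20, 0.01, 0.01, 0.15],  # box 2: below box_threshold, dropped
]
print(filter_predictions(similarities, tokens))
# → [(0, 0.72, 'dogs'), (1, 0.61, 'stick')]
```

To pick out only the `dogs`, you would additionally keep just the boxes whose highest-scoring token is `dogs`.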
## :label: TODO
- [x] Release inference code and demo.
- [x] Release checkpoints.
- [x] Grounding DINO with Stable Diffusion and GLIGEN demos.
- [ ] Release training code.
## :hammer_and_wrench: Install
**Note:** If you have a CUDA environment, make sure the environment variable `CUDA_HOME` is set. The package will be compiled in CPU-only mode if CUDA is not available.
**Installation:**

Clone the GroundingDINO repository from GitHub:

```bash
git clone https://github.com/IDEA-Research/GroundingDINO.git
```

Change the current directory to the GroundingDINO folder:

```bash
cd GroundingDINO/
```

Install the required dependencies in the current directory:

```bash
pip3 install -q -e .
```

Create a new directory called `weights` to store the model weights:

```bash
mkdir weights
```

Change the current directory to the `weights` folder:

```bash
cd weights
```

Download the model weights file:

```bash
wget -q https://github.com/IDEA-Research/GroundingDINO/releases/download/v0.1.0-alpha/groundingdino_swint_ogc.pth
```
## :arrow_forward: Demo
Check your GPU ID (only if you are using a GPU):

```bash
nvidia-smi
```

Replace `{GPU ID}`, `image_you_want_to_detect.jpg`, and `"dir you want to save the output"` with appropriate values in the following command:

```bash
CUDA_VISIBLE_DEVICES={GPU ID} python demo/inference_on_a_image.py \
  -c /GroundingDINO/groundingdino/config/GroundingDINO_SwinT_OGC.py \
  -p /GroundingDINO/weights/groundingdino_swint_ogc.pth \
  -i image_you_want_to_detect.jpg \
  -o "dir you want to save the output" \
  -t "chair" \
  [--cpu-only] # add this flag for CPU-only mode
```

See `demo/inference_on_a_image.py` for more details.
**Running with Python:**

```python
from groundingdino.util.inference import load_model, load_image, predict, annotate
import cv2

model = load_model("groundingdino/config/GroundingDINO_SwinT_OGC.py", "weights/groundingdino_swint_ogc.pth")
IMAGE_PATH = "weights/dog-3.jpeg"
TEXT_PROMPT = "chair . person . dog ."
BOX_THRESHOLD = 0.35
TEXT_THRESHOLD = 0.25

image_source, image = load_image(IMAGE_PATH)

boxes, logits, phrases = predict(
    model=model,
    image=image,
    caption=TEXT_PROMPT,
    box_threshold=BOX_THRESHOLD,
    text_threshold=TEXT_THRESHOLD
)

annotated_frame = annotate(image_source=image_source, boxes=boxes, logits=logits, phrases=phrases)
cv2.imwrite("annotated_image.jpg", annotated_frame)
```
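`predict` returns boxes in normalized `(cx, cy, w, h)` format (center coordinates plus width and height, each in `[0, 1]`); verify this against your installed version. If you need pixel-space corner coordinates, a conversion helper can be sketched without torch as follows:

```python
# Hypothetical helper: convert one normalized (cx, cy, w, h) box
# to pixel-space (x1, y1, x2, y2) for a given image size.

def cxcywh_to_xyxy(box, img_w, img_h):
    cx, cy, w, h = box
    x1 = (cx - w / 2) * img_w  # left edge
    y1 = (cy - h / 2) * img_h  # top edge
    x2 = (cx + w / 2) * img_w  # right edge
    y2 = (cy + h / 2) * img_h  # bottom edge
    return [round(v, 1) for v in (x1, y1, x2, y2)]

# A box centered in a 640x480 image, covering half of each dimension:
print(cxcywh_to_xyxy([0.5, 0.5, 0.5, 0.5], 640, 480))
# → [160.0, 120.0, 480.0, 360.0]
```

In practice you can also use `torchvision.ops.box_convert` on the returned tensor for the same conversion.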
**Web UI**

We also provide demo code to integrate Grounding DINO with a Gradio web UI. See `demo/gradio_app.py` for more details.
**Notebooks**

- Demo combining Grounding DINO with GLIGEN for more controllable image editing.
- Demo combining Grounding DINO with Stable Diffusion for image editing.
## :luggage: Checkpoints
<table>
  <thead>
    <tr style="text-align: right;">
      <th></th>
      <th>name</th>
      <th>backbone</th>
      <th>Data</th>
      <th>box AP on COCO</th>
      <th>Checkpoint</th>
      <th>Config</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <th>1</th>
      <td>GroundingDINO-T</td>
      <td>Swin-T</td>
      <td>O365,GoldG,Cap4M</td>
      <td>48.4 (zero-shot) / 57.2 (fine-tune)</td>
      <td><a href="https://github.com/IDEA-Research/GroundingDINO/releases/download/v0.1.0-alpha/groundingdino_swint_ogc.pth">GitHub link</a> | <a href="https://huggingface.co/ShilongLiu/GroundingDINO/resolve/main/groundingdino_swint_ogc.pth">HF link</a></td>
      <td><a href="https://github.com/IDEA-Research/GroundingDINO/blob/main/groundingdino/config/GroundingDINO_SwinT_OGC.py">link</a></td>
    </tr>
    <tr>
      <th>2</th>
      <td>GroundingDINO-B</td>
      <td>Swin-B</td>
      <td>COCO,O365,GoldG,Cap4M,OpenImage,ODinW-35,RefCOCO</td>
      <td>56.7</td>
      <td><a href="https://github.com/IDEA-Research/GroundingDINO/releases/download/v0.1.0-alpha2/groundingdino_swinb_cogcoor.pth">GitHub link</a> | <a href="https://huggingface.co/ShilongLiu/GroundingDINO/resolve/main/groundingdino_swinb_cogcoor.pth">HF link</a></td>
      <td><a href="https://github.com/IDEA-Research/GroundingDINO/blob/main/groundingdino/config/GroundingDINO_SwinB.cfg.py">link</a></td>
    </tr>
  </tbody>
</table>

## :medal_military: Results
<details open>
<summary><font size="4">COCO Object Detection Results</font></summary>
<img src=".asset/COCO.png" alt="COCO" width="100%">
</details>

<details open>
<summary><font size="4">ODinW Object Detection Results</font></summary>
<img src=".asset/ODinW.png" alt="ODinW" width="100%">
</details>

<details open>
<summary><font size="4">Marrying Grounding DINO with <a href="https://github.com/Stability-AI/StableDiffusion">Stable Diffusion</a> for Image Editing</font></summary>
See our example <a href="https://github.com/IDEA-Research/GroundingDINO/blob/main/demo/image_editing_with_groundingdino_stablediffusion.ipynb">notebook</a> for more details.
<img src=".asset/GD_SD.png" alt="GD_SD" width="100%">
</details>

<details open>
<summary><font size="4">Marrying Grounding DINO with <a href="https://github.com/gligen/GLIGEN">GLIGEN</a> for more Detailed Image Editing</font></summary>
See our example <a href="https://github.com/IDEA-Research/GroundingDINO/blob/main/demo/image_editing_with_groundingdino_gligen.ipynb">notebook</a> for more details.
<img src=".asset/GD_GLIGEN.png" alt="GD_GLIGEN" width="100%">
</details>

## :sauropod: Model: Grounding DINO
Grounding DINO consists of a text backbone, an image backbone, a feature enhancer, a language-guided query selection module, and a cross-modality decoder.
## :hearts: Acknowledgement
Our model is related to DINO and GLIP. Thanks for their great work!
We also thank great previous work including DETR, Deformable DETR, SMCA, Conditional DETR, Anchor DETR, Dynamic DETR, DAB-DETR, DN-DETR, etc. More related work is available at Awesome Detection Transformer. A new toolbox, detrex, is available as well.
Thanks Stable Diffusion and GLIGEN for their awesome models.
## :black_nib: Citation
If you find our work helpful for your research, please consider citing the following BibTeX entry.
```bibtex
@article{liu2023grounding,
  title={Grounding dino: Marrying dino with grounded pre-training for open-set object detection},
  author={Liu, Shilong and Zeng, Zhaoyang and Ren, Tianhe and Li, Feng and Zhang, Hao and Yang, Jie and Li, Chunyuan and Yang, Jianwei and Su, Hang and Zhu, Jun and others},
  journal={arXiv preprint arXiv:2303.05499},
  year={2023}
}
```