<img src="media/HYDRA_icon_minimal.png" width="20"> HYDRA: A Hyper Agent for Dynamic Compositional Visual Reasoning

<div align="center"> <img src="media/Frame.png"> <p></p> </div> <div align="center"> <a href="https://github.com/ControlNet/HYDRA/issues"> <img src="https://img.shields.io/github/issues/ControlNet/HYDRA?style=flat-square"> </a> <a href="https://github.com/ControlNet/HYDRA/network/members"> <img src="https://img.shields.io/github/forks/ControlNet/HYDRA?style=flat-square"> </a> <a href="https://github.com/ControlNet/HYDRA/stargazers"> <img src="https://img.shields.io/github/stars/ControlNet/HYDRA?style=flat-square"> </a> <a href="https://github.com/ControlNet/HYDRA/blob/master/LICENSE"> <img src="https://img.shields.io/github/license/ControlNet/HYDRA?style=flat-square"> </a> <a href="https://arxiv.org/abs/2403.12884"> <img src="https://img.shields.io/badge/arXiv-2403.12884-b31b1b.svg?style=flat-square"> </a> </div> <div align="center"> <a href="https://pypi.org/project/hydra-vl4ai/"> <img src="https://img.shields.io/pypi/v/hydra-vl4ai?style=flat-square"> </a> <a href="https://pypi.org/project/hydra-vl4ai/"> <img src="https://img.shields.io/pypi/dm/hydra-vl4ai?style=flat-square"> </a> <a href="https://www.python.org/"><img src="https://img.shields.io/pypi/pyversions/hydra-vl4ai?style=flat-square"></a> </div>

This is the code for the paper "HYDRA: A Hyper Agent for Dynamic Compositional Visual Reasoning", accepted at ECCV 2024 [Project Page].

Release

TODOs

gpt-3.5-turbo-0613 has been deprecated, and gpt-3.5 is being replaced by gpt-4o-mini. We will release another version of HYDRA accordingly.

As of July 2024, gpt-4o-mini should be used in place of gpt-3.5-turbo, as it is cheaper, more capable, multimodal, and just as fast (see the OpenAI API page).

We have also noticed that OpenAI has updated its embedding model, as shown in this link. Given the uncertainty around these embedding model updates, we suggest training a new version of the RL controller yourself and updating the RL models.
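As a sanity check before retraining, the snippet below is a minimal sketch of querying OpenAI's embedding endpoint with the v1 Python SDK; the model name text-embedding-3-small is an example choice, not necessarily the one your trained controller expects.

from openai import OpenAI

# Minimal sketch (assumes OPENAI_API_KEY is set in the environment).
# "text-embedding-3-small" is an example model name; confirm which
# embedding model your RL controller was trained against before use.
client = OpenAI()
response = client.embeddings.create(
    model="text-embedding-3-small",
    input="a sample query for the RL controller",
)
print(len(response.data[0].embedding))  # embedding dimensionality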

Installation

Requirements

Please follow the instructions below to install the required packages and set up the environment.

1. Clone this repository.

git clone https://github.com/ControlNet/HYDRA
cd HYDRA

2. Setup conda environment and install dependencies.

Option 1: Using pixi (recommended):

pixi run install
pixi shell

Option 2: Building from source:

bash -i build_env.sh

If you encounter errors, consider going through the build_env.sh file and installing the packages manually.

3. Configure the environment

Edit the .env file, or set the variables in your shell, to configure the environment.

OPENAI_API_KEY=your-api-key
OLLAMA_HOST=http://ollama.server:11434
# do not change this TORCH_HOME variable
TORCH_HOME=./pretrained_models
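Alternatively, the same variables can be exported in your shell before launching HYDRA (the values below are placeholders):

export OPENAI_API_KEY=your-api-key
export OLLAMA_HOST=http://ollama.server:11434
export TORCH_HOME=./pretrained_models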

4. Download the pretrained models

Run the script below to download the pretrained models to the ./pretrained_models directory.

python -m hydra_vl4ai.download_model --base_config <BASE-CONFIG-PATH> --model_config <MODEL-CONFIG-PATH>

For example,

python -m hydra_vl4ai.download_model --base_config ./config/okvqa.yaml --model_config ./config/model_config_1gpu.yaml

Inference

A running executor (worker) is required for inference. Start it in a separate terminal:

python -m hydra_vl4ai.executor --base_config <BASE-CONFIG-PATH> --model_config <MODEL-CONFIG-PATH>

Inference with a single image and prompt

python demo_cli.py \
  --image <IMAGE-PATH> \
  --prompt <PROMPT> \
  --base_config <BASE-CONFIG-PATH> \
  --model_config <MODEL-CONFIG-PATH>
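For example, using the OK-VQA configuration from the download step above (the image path and prompt here are placeholders):

python demo_cli.py \
  --image ./example.jpg \
  --prompt "How many people are in the image?" \
  --base_config ./config/okvqa.yaml \
  --model_config ./config/model_config_1gpu.yaml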

Inference with Gradio GUI

python demo_gradio.py \
  --base_config <BASE-CONFIG-PATH> \
  --model_config <MODEL-CONFIG-PATH>

Inference on a dataset

python main.py \
  --data_root <DATA-ROOT> \
  --base_config <BASE-CONFIG-PATH> \
  --model_config <MODEL-CONFIG-PATH>
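For example, to run the OK-VQA configuration over a local copy of the dataset (the data root below is a placeholder):

python main.py \
  --data_root ./data/okvqa \
  --base_config ./config/okvqa.yaml \
  --model_config ./config/model_config_1gpu.yaml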

The inference results are then saved in the ./result directory for evaluation.

Evaluation

python evaluate.py <RESULT_JSON_PATH> <DATASET_NAME>

For example,

python evaluate.py result/result_okvqa.jsonl okvqa
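If you want to inspect the results before evaluation, the sketch below reads the first record of the JSONL output; the exact fields depend on the dataset, so it only prints the keys rather than assuming a schema.

import json

# Minimal sketch: peek at the result file produced by main.py.
# Field names vary by dataset, so we only print the keys of the
# first record instead of assuming a particular schema.
with open("result/result_okvqa.jsonl") as f:
    first = json.loads(f.readline())
print(sorted(first.keys()))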

Citation

@inproceedings{ke2024hydra,
  title={HYDRA: A Hyper Agent for Dynamic Compositional Visual Reasoning},
  author={Ke, Fucai and Cai, Zhixi and Jahangard, Simindokht and Wang, Weiqing and Haghighi, Pari Delir and Rezatofighi, Hamid},
  booktitle={European Conference on Computer Vision},
  year={2024},
  organization={Springer},
  doi={10.1007/978-3-031-72661-3_8},
  isbn={978-3-031-72661-3},
  pages={132--149},
}

Acknowledgements

Some code and prompts are based on cvlab-columbia/viper.