Vibe-Eval

A benchmark for evaluating multimodal chat models, including especially challenging examples.

[Link to paper] [Blogpost] [🤗 Dataset]

Example from the dataset

Dataset

The dataset, including all images, can be downloaded from the Releases page of this repo.

The dataset is stored as a JSONL file: data/vibe-eval.v1.jsonl. Each line is a JSON object describing one example; the "example_id" field is used to match generations during evaluation (see below).
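
For example, a minimal sketch of loading the examples with the Python standard library. Only "example_id" is guaranteed by the evaluation format described below; treat any other field names as something to check against the file itself:

import json

# Load all Vibe-Eval examples from the released JSONL file.
with open("data/vibe-eval.v1.jsonl") as f:
    examples = [json.loads(line) for line in f]

# "example_id" matches generations to examples during evaluation;
# inspect one example to see what other fields are present.
print(examples[0]["example_id"])
print(sorted(examples[0].keys()))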

Leaderboard 🏆

Vibe-Eval Score (%)

| Model | all | hard | normal |
|---|---:|---:|---:|
| Gemini Flash 2.0 | 67.1 | 52.3 | 75.9 |
| Claude 3.5 Sonnet | 66.0 | 54.0 | 73.1 |
| GPT-4o | 64.7 | 52.3 | 72.0 |
| Gemini-1.5 Pro | 63.8 | 52.3 | 70.6 |
| GPT-4o-mini | 56.7 | 44.7 | 63.8 |
| Reka Flash | 56.0 | 39.3† | 65.8 |
| Pixtral Large | 55.1 | 43.0 | 62.3 |
| Grok Vision Beta | 54.2 | 37.1 | 64.2 |
| Gemini 1.5 Flash 8b | 54.1 | 44.8 | 59.6 |
| Claude Opus | 52.8 | 41.8 | 59.2 |
| Pixtral 12b | 52.5 | 39.3 | 60.4 |
| Claude Haiku | 48.5 | 31.6 | 58.2 |

† Note that we expect Reka models to score lower on the hard set: by definition, it consists of prompts that Reka Core cannot solve.

Running the evaluation

To run the evaluation, use evaluate.py as follows:

python evaluate.py generations.jsonl -o out.jsonl

(You will need to install a few requirements, including the Reka API package, with pip install -r requirements.txt.)

The generations.jsonl file should contain the model generations: a JSONL file where each line is a JSON object with keys "generation" and "example_id" (matching an example in the dataset).
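
As an illustration, here is a minimal sketch that produces a valid generations.jsonl; generate_answer is a hypothetical stand-in for whatever model call you use:

import json

def generate_answer(example):
    # Hypothetical: call your model here with the example's prompt
    # and image, and return the generated text.
    raise NotImplementedError

with open("data/vibe-eval.v1.jsonl") as f_in, \
        open("generations.jsonl", "w") as f_out:
    for line in f_in:
        example = json.loads(line)
        record = {
            "example_id": example["example_id"],  # must match the dataset
            "generation": generate_answer(example),
        }
        f_out.write(json.dumps(record) + "\n")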

This will output detailed results to out.jsonl and will also print a table of final results to stdout.

Running the generations

We provide generation scripts covering the following models: Claude, Gemini, OpenAI, Reka, xAI, and Pixtral. Just run, e.g.:

python models/generate.py --model MODEL_NAME

Make sure the necessary requirements for that model are installed and its API keys are set; both are listed at the top of each model definition script. Generations are saved to a .jsonl file in the data/generations folder.

Set API keys via the CLI flag --api_key API_KEY, shell environment variables, or manually in a .env file:

REKA_API_KEY=your_api_key
OPENAI_API_KEY=your_api_key
GEMINI_API_KEY=your_api_key
ANTHROPIC_API_KEY=your_api_key
XAI_API_KEY=your_api_key

Note: some images exceed Anthropic's API limit of 5 MB. For these, we upload the image to the chat interface manually and add the resulting generations to the generations .jsonl.
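
If you want to identify the affected images up front, a quick sketch (the images/ directory is an assumed location for the unpacked release images):

import os

LIMIT = 5 * 1024 * 1024  # Anthropic's 5 MB per-image limit

# List every release image that exceeds the limit.
for name in sorted(os.listdir("images")):
    size = os.path.getsize(os.path.join("images", name))
    if size > LIMIT:
        print(f"{name}: {size / (1024 * 1024):.1f} MB")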

Visualizing the benchmark and generations

To visualize the benchmark and generations, open visualizer/index.html locally in your browser, then upload the benchmark file and the results file produced by evaluate.py.

Citation

@article{padlewski2024vibeeval,
  title={Vibe-Eval: A hard evaluation suite for measuring progress of multimodal language models},
  author={Piotr Padlewski and Max Bain and Matthew Henderson and Zhongkai Zhu and Nishant Relan and Hai Pham and Donovan Ong and Kaloyan Aleksiev and Aitor Ormazabal and Samuel Phua and Ethan Yeo and Eugenie Lamprecht and Qi Liu and Yuqi Wang and Eric Chen and Deyu Fu and Lei Li and Che Zheng and Cyprien de Masson d'Autume and Dani Yogatama and Mikel Artetxe and Yi Tay},
  journal={arXiv preprint arXiv:2405.02287},
  year={2024}
}