# foundation-vision-benchmark
A qualitative set of tests for use in evaluating the capabilities of foundation vision models.
## Abstract
In 2023, multimodal vision models became substantially more powerful; the year has been called the "year of multimodality." Roboflow actively explores new vision models and their capabilities, conducting qualitative tests to learn more about how different models perform.
Our tests help us answer questions like "will this model help with labeling data?" and "could this model be used in a two-stage detection system (e.g., OCR)?"
This repository contains several tests we have run on foundation vision models to understand their performance. If you are curious about exploring new vision models, this repository may give you direction, ideas, and new insights into the models with which you are working.
## Tests
| Task Type | Prompt | Image |
|---|---|---|
| Classification | What is in the image? Return the class of the object in the image. Here are the classes: Tesla Model 3, Toyota Camry. You can only return one class from that list. | |
| Classification | What is in the image? Return the class of the object in the image. Here are the classes: pizza, deep dish pizza. You can only return one class from that list. | |
| Optical Character Recognition (Documents) | Read the text in the image. | |
| Object Detection | Detect a dog in the image. Provide the x_min, y_min, x_max, and y_max coordinates. | |
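For the constrained classification prompts above, models often reply in free text rather than with a bare class name, so it helps to normalize the reply against the allowed class list before scoring. A minimal sketch (the `normalize_class` helper is illustrative, not part of any Roboflow API; preferring the longest match handles overlapping classes like "pizza" vs. "deep dish pizza"):

```python
def normalize_class(reply, classes):
    """Map a model's free-text reply onto one of the allowed classes.

    Returns the matched class, or None if the reply names no class.
    When several classes match (e.g. "pizza" and "deep dish pizza"),
    the longest, most specific class wins.
    """
    reply_lower = reply.lower()
    matches = [c for c in classes if c.lower() in reply_lower]
    if not matches:
        return None
    return max(matches, key=len)
```

For example, `normalize_class("It looks like a Tesla Model 3.", ["Tesla Model 3", "Toyota Camry"])` resolves to `"Tesla Model 3"`, while a reply naming neither class yields `None` and can be scored as a failed test.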
## Test Surface Area
We test:
- Object detection
- Classification
- Document OCR
- Handwriting OCR
- Math OCR
See the sections below for reference images and prompts.
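For the object detection tests, replies usually arrive as free text and need post-processing into numeric boxes before they can be compared or drawn. A hedged sketch, assuming the reply contains the four coordinates as numbers in x_min, y_min, x_max, y_max order (real models vary in format, so a production parser would need to be more defensive):

```python
import re

def parse_box(reply):
    """Extract (x_min, y_min, x_max, y_max) from a free-text reply.

    Assumes the first four numbers in the reply are the coordinates,
    e.g. "x_min=12, y_min=30, x_max=200, y_max=240". Returns None if
    fewer than four numbers are present or the box is degenerate.
    """
    nums = re.findall(r"-?\d+(?:\.\d+)?", reply)
    if len(nums) < 4:
        return None
    x_min, y_min, x_max, y_max = map(float, nums[:4])
    if x_min >= x_max or y_min >= y_max:
        return None  # reject inverted or zero-area boxes
    return (x_min, y_min, x_max, y_max)
```

Rejecting degenerate boxes is worth the extra check: a common failure mode in our experience is a model emitting plausible-looking but inverted or out-of-order coordinates.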
## General Recommendations
We also recommend testing visual question answering (asking general questions about an image), exploring areas such as:
- Spatial awareness: Can the model answer questions about how objects relate?
- Veracity: When asked a question about an image, is the model correct?
- Comprehensiveness: Does a model provide a comprehensive answer to your question? Are details missed?
- Consistency: Does a model consistently answer questions it can answer? Does a model say it cannot help with a question with which it has previously helped?
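Consistency in particular is easy to quantify: ask the same question several times and measure agreement with the most common answer. A minimal sketch (the `ask` callable is a stand-in for a real model call, which is hypothetical here):

```python
from collections import Counter

def consistency_rate(ask, question, trials=5):
    """Ask the same question `trials` times and return the fraction of
    answers that agree with the most common answer (1.0 = perfectly
    consistent). `ask` is any callable mapping a question to an answer.
    """
    answers = [ask(question) for _ in range(trials)]
    top_count = Counter(answers).most_common(1)[0][1]
    return top_count / trials
```

A deterministic model scores 1.0; a model that flips its answer on one of five trials scores 0.8. Note this measures agreement only, not correctness, so it complements rather than replaces the veracity checks above.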
## Limitations
Given the vast array of possibilities with vision models, no set of tests, this one included, can comprehensively evaluate what a model can do; this repository is a starting point for exploration.
## Contribute
We would love your help in making this repository even better! Whether you want to add a new experiment or have any suggestions for improvement, feel free to open an issue or pull request.