<br /> <p align="center"> <h3 align="center"><strong>Equivariant Similarity for Vision-Language Foundation Models</strong></h3> <p align="center"> <a href="https://scholar.google.com/citations?hl=en&user=wFduC9EAAAAJ" target='_blank'>Tan Wang</a>, <a href="https://scholar.google.com/citations?hl=en&user=LKSy1kwAAAAJ" target='_blank'>Kevin Lin</a>, <a href="https://scholar.google.com/citations?hl=en&user=WR875gYAAAAJ" target='_blank'>Linjie Li</a>, <a href="https://scholar.google.com/citations?hl=en&user=legkbM0AAAAJ" target='_blank'>Chung-Ching Lin</a>, <a href="https://scholar.google.com/citations?hl=en&user=rP02ve8AAAAJ" target='_blank'>Zhengyuan Yang</a>, <a href="https://scholar.google.com/citations?hl=en&user=YG0DFyYAAAAJ" target='_blank'>Hanwang Zhang</a>, <a href="https://scholar.google.com/citations?hl=en&user=bkALdvsAAAAJ" target='_blank'>Zicheng Liu</a>, <a href="https://scholar.google.com/citations?hl=en&user=cDcWXuIAAAAJ" target='_blank'>Lijuan Wang</a> <br> Nanyang Technological University, Microsoft Corporation </p> </p> <p align="center"> <a href="https://arxiv.org/abs/2303.14465" target='_blank'> <img src="https://img.shields.io/badge/Paper-%F0%9F%93%83-blue"> </a> <a href="https://codalab.lisn.upsaclay.fr/competitions/10266" target='_blank'> <img src="https://img.shields.io/badge/CodaLab-%F0%9F%A7%AA-yellow"> </a> <a href="https://entuedu-my.sharepoint.com/:b:/g/personal/tan317_e_ntu_edu_sg/EUeL5VNXA2ZIng1zd-zH_ccBLBvcVpc5dqysriFtseR33A?e=BhhQJk" target='_blank'> <img src="https://img.shields.io/badge/Slides-red"> </a> </p> <p align="center"> <img src="figs/eqben.png" align="center" width="100%"> Our proposed EqBen is the first benchmark to focus on "visual-minimal change" to diagnose Vision-Language foundation models. </p> <br>News
- Release a light version of the EqBen data (~100G -> ~23G) by converting the PNG images to JPG.
- Randomly checked one sample of EqBen with the latest LLaVA-1.5, which still struggles!
<p align="center"> <img src="figs/eqben_llava15.jpg" align="center" width="100%"> The latest LLaVA-1.5 response to a random sample of EqBen, which is still wrong. </p>
- Due to the instability of the CodaLab platform, we have decided to publish the full annotation of EqBen. Please check here for more details.
- Our paper has been accepted by ICCV 2023 (Oral)!
- We performed some toy experiments with recent popular Multimodal Large Language Models (MLLMs) on EqBen and observed much inferior results from MiniGPT4 (see below). We also release a small EqBen subset to quantitatively measure MLLM performance. See the qualitative and quantitative results below, or check the new version of our paper on arXiv.
<p align="center"> <img src="figs/minigpt4.png" align="center" width="100%"> MiniGPT4's response to a random sample of EqBen, which is totally wrong. </p> <br> <p align="center"> <img src="figs/llava_on_eqben.png" align="center" width="60%"> <br> We can clearly see that current open-source MLLMs give near-random answers on EqBen data. </p>
About
This study explores the concept of equivariance in vision-language foundation models (VLMs), focusing specifically on the multimodal similarity function, which is not only the major training objective but also the core deliverable supporting downstream tasks. Unlike the existing image-text similarity objective, which only categorizes matched pairs as similar and unmatched pairs as dissimilar, equivariance also requires similarity to vary faithfully with semantic changes. Our key contributions are three-fold:
- A novel benchmark named EqBen (Equivariant Benchmark) to benchmark VLMs with visual-minimal change samples.
- A plug-and-play regularization loss EqSim (Equivariant Similarity Learning) to improve the equivariance of current VLMs.
- Our toolkit (this repo) provides one-stop evaluation: not only for our EqBen, but also for previous related benchmarks (Winoground, VALSE, etc.).<br>
<br><br />
ToDo List
- Open-source the light version of EqBen
- Open-source the full annotation of EqBen
- Update the EqSim implementation for FIBER
- Update the EqBen subset for MLLM evaluation
- Add a subset (10% of the full EqBen, ~25K image-text pairs) for ease of visualization/validation (please check it here)
<br>
What can you get from this Repo?
- Q: I just want to check the samples in EqBen.
  A: No problem! Please check the examples in the figure below, or see our paper for more construction details.
- Q: I want to try EqBen. How can I quickly use it?
  A: Great! This repo helps you add EqBen evaluation to your codebase with a few lines of code. Please follow the steps here.
- Q: I also want to evaluate my VL model on the previous Winoground or VALSE datasets.
  A: We also provide convenient evaluation scripts for Winoground and VALSE. Please follow the steps here.
- Q: I want to try your proposed algorithm EqSim.
  A: Please check the implementation of EqSim here.
<br><br />
EqBen
Welcome to EqBen, which benchmarks your Vision-Language Pretrained (VLP) model effectively and efficiently with a fine-grained image-text matching task. Compared to recent works (Winoground and VALSE) that focus on minimal semantic changes in captions, EqBen pivots on diverse visual-minimal changes, automatically curated from time-varying visual content in natural videos and from synthetic engines with more precise control.
<br> <p align="center"> <img src="figs/eqben_overview.png" align="center" width="60%"> <br /> Core Design of our EqBen: "Visual-Minimal Change" </p>
This repo contains a one-stop, ready-to-use PyPI toolkit that supports multiple evaluation needs.
<br>Installation & Usage
pip install eqben
That's all the setup you need! The toolkit can then be inserted into your VL model framework with little additional code. We provide a code template and examples (#1 and #2) for two popular VL models (CLIP and FIBER).
For the actual evaluation, you also need to download the data; please check the following sections for details.
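To make the integration concrete, here is a minimal sketch of the kind of image-text scoring function such a template typically asks you to supply, written with Hugging Face's CLIP. The exact classes and hooks of the eqben toolkit may differ from what is assumed here, so please treat this as an illustration and follow the official template and examples for the real interface.

```python
# Minimal sketch of an image-text similarity scorer built on Hugging Face CLIP.
# The exact interface expected by the eqben toolkit may differ; see the official
# template and examples for the real integration points.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(device).eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def clip_score(image_path: str, caption: str) -> float:
    """Return the cosine similarity between one image and one caption."""
    image = Image.open(image_path).convert("RGB")
    inputs = processor(text=[caption], images=image, return_tensors="pt", padding=True).to(device)
    out = model(**inputs)
    img_emb = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt_emb = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    return (img_emb @ txt_emb.T).item()
```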
<br>EqBen
<p align="center"> <img src="figs/eqben_show.png" align="center" width="100%"> The overview of our proposed benchmark EqBen, which consists of 5 sub-datasets and can be categorized into natural and synthetic. </p>
1. Data Download
- Full-Test Set: download the EqBen raw image data (tar.gz file, ~100G) and annotation (randomized, ~200M) via Google Drive. [UPDATE 2023-09] The originally released annotation was randomized (the ground-truth order was kept non-public for fairness), and users had to upload their results (json/npy file) to CodaLab to get the final scores. Due to the instability of CodaLab, we have decided to publish the full original annotation. This annotation file is formatted similarly to Winoground and can be downloaded here.
- Light Full-Test Set: to improve usability, we also provide a light version of EqBen created by converting all PNG images to JPG with `convert`. Feel free to download it here. Please note that you need to make a small revision to the paths in the annotation (change `.png` to `.jpg`; a minimal sketch for doing this is given after this list).
- Sub-Test Set: we also provide a 10% subset (~25K image-text pairs) for ease of visualization and validation. The labels of the EqBen subset are open-source and the format follows the Winoground style. Please note that the samples in the subset are randomly ordered and not grouped by category. Please download the raw image data (tar.gz file, ~10G) and annotation via Google Drive.
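As referenced above, here is a minimal sketch for patching the annotation paths when using the light (JPG) version. It assumes the annotation is a text-based file (e.g. JSON) whose image paths contain plain `.png` strings; the file name below is a placeholder for the file you actually downloaded.

```python
# Minimal sketch: rewrite ".png" image paths to ".jpg" in a text-based annotation file.
# "eqben_annotation.json" is a placeholder name; use the annotation file you downloaded.
from pathlib import Path

src = Path("eqben_annotation.json")       # downloaded annotation (placeholder path)
dst = Path("eqben_annotation_jpg.json")   # patched copy for the light (JPG) images

dst.write_text(src.read_text(encoding="utf-8").replace(".png", ".jpg"), encoding="utf-8")
print(f"Wrote {dst} with PNG paths rewritten to JPG.")
```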
2. Modify Data Path
Please refer to the template (example) to modify the data path and annotation path. Then follow the example to insert EqBen evaluation code into your VL model framework.
3. Submit to Server for Score
Run the evaluation script to get the `score.npy` file, then zip it and submit it to our CodaLab server to obtain the final score. For more details about the server evaluation, please check the CodaLab website.
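For illustration, the sketch below shows the mechanics of saving a score array as `score.npy` and zipping it for upload. The required array shape and ordering are defined by the CodaLab competition page, so the dummy scores here are placeholders only.

```python
# Minimal sketch: save predicted scores as score.npy and zip them for CodaLab upload.
# The required array shape/ordering is defined by the competition page.
import zipfile
import numpy as np

def package_submission(scores, out_zip="submission.zip"):
    """Save the score array as score.npy and wrap it into a zip archive."""
    np.save("score.npy", np.asarray(scores, dtype=np.float32))
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.write("score.npy")

# Dummy example; in practice, pass the scores produced by the evaluation script.
package_submission(np.random.rand(16))
```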
[UPDATE 2023-09]: Since the original annotation file is now fully public, you have two new options for getting the results:
- After running the evaluation script to get the `score.npy` file, directly run `python evaluate_eqben_locally.py` to obtain the results. This is what we actually do on the CodaLab server.
- Refer to Winoground and use the raw original annotation file to evaluate the results yourself (see the scoring sketch below).
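Since the released annotation follows the Winoground format, the standard Winoground text/image/group metrics apply. The sketch below computes them for a single sample from its four image-caption similarity scores; how you obtain those scores from your model and the annotation file is up to your own evaluation code.

```python
# Minimal sketch: Winoground-style metrics for one sample, given the four
# similarity scores s(image_i, caption_j) of an (I0, I1, C0, C1) group.
def winoground_metrics(s_i0_c0, s_i0_c1, s_i1_c0, s_i1_c1):
    text_correct = s_i0_c0 > s_i0_c1 and s_i1_c1 > s_i1_c0   # each image prefers its own caption
    image_correct = s_i0_c0 > s_i1_c0 and s_i1_c1 > s_i0_c1  # each caption prefers its own image
    return {
        "text": text_correct,
        "image": image_correct,
        "group": text_correct and image_correct,  # both directions must be correct
    }

# Example: a sample the model gets fully right.
print(winoground_metrics(0.31, 0.27, 0.22, 0.35))
# {'text': True, 'image': True, 'group': True}
```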
Winoground & VALSE
<p align="center"> <img src="figs/valse_show.png" align="center" width="100%"> The overview of the VALSE evaluation set, which focuses on textual minimal change. </p> Our toolkit also supports the previous Winoground and VALSE benchmarks. You can easily use them with the following steps.
1. Data Download
You can download the raw data by following the official websites of Winoground and VALSE.
2. Modify Data Path
Please refer to the template (example) to modify the data path and annotation path. Then follow the example to insert the evaluation toolkit into your VL model framework.
3. Run the Script and Check the Score
The users can just check the offline score output.
<br><br />
EqSim
Our EqSim stems from the intuitive example below, which depicts the similarity scores produced by the current SOTA VLM FIBER (pretrained on open-source data).
<p align="center"> <img src="figs/Eqsim_motiv.png" align="center" width="80%"> </p>We can see that FIBER mistakenly assigns a higher similarity score to ${I_1,T_2}$ than to ${I_1,T_1}$ ($3.83$ vs. $3.79$). Furthermore, the changes in similarity scores induced by the semantic change (2$\leftrightarrow$3) are highly inconsistent ($+0.04$ vs. $-1.81$). Therefore, the key idea of our EqSim is to regularize the consistency between these two similarity changes.
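For intuition only, the toy sketch below encodes one simple reading of this idea: the same minimal semantic edit should shift the similarities of $I_1$ and $I_2$ by a consistent amount. It is not the actual EqSim loss, and the $I_2$ scores in the example are made up to match the $-1.81$ change quoted above.

```python
# Toy sketch only (NOT the actual EqSim loss): penalize inconsistency between the
# two similarity changes induced by the same minimal semantic edit.
import torch

def consistency_toy_loss(s_i1_t1, s_i1_t2, s_i2_t1, s_i2_t2):
    """Inputs are (batch,) similarity scores for a minimally-changed pair
    {I1, T1} <-> {I2, T2}. For each image, measure how the mismatched caption
    compares to the matched one, and penalize the gap between the two changes."""
    change_i1 = s_i1_t2 - s_i1_t1  # should be negative: T2 does not match I1
    change_i2 = s_i2_t1 - s_i2_t2  # should be negative: T1 does not match I2
    return (change_i1 - change_i2).abs().mean()

# Example: +0.04 vs. -1.81 as in the text above (the I2 scores are made up).
s_i1_t1, s_i1_t2 = torch.tensor([3.79]), torch.tensor([3.83])
s_i2_t1, s_i2_t2 = torch.tensor([1.95]), torch.tensor([3.76])
print(consistency_toy_loss(s_i1_t1, s_i1_t2, s_i2_t1, s_i2_t2))  # ~1.85
```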
Please check the sub-folder for the full implementation.
<br>Acknowledgement
We thank Ziyi Dou for the valuable discussions. We also thank the open-source projects Winoground, VALSE, METER, FIBER, and CLIP.