Unified Contrastive Learning in Image-Text-Label Space
This is the official PyTorch implementation of UniCL:
"Unified Contrastive Learning in Image-Text-Label Space. CVPR 2022" by Jianwei Yang*, Chunyuan Li*, Pengchuan Zhang*, Bin Xiao*, Ce Liu, Lu Yuan and Jianfeng Gao.
Introduction
<p align="center"> <img src="figures/unified_cv.png" width=98%/> </p>In this paper, we introduce a new perspective on commonly used image-label and image-text data by residing them in an image-text-label space. In this space, a new learning paradigm, called Unified Contrastive Learning (UniCL) with a single learning objective is proposed to seamlessly prompt the synergy of two data types. We demonstrate that UniCL is an effective way of learning semantically rich yet discriminative representations, universally for image recognition in zero-shot, linear-probe, fully finetuning and transfer learning scenarios. When scaled up to billions of data, UniCL can exclusively learn a powerful visual-semantic representation supporting dozens of downstream tasks shown in Florence.
We compare UniCL with conventional learning methods below:
<p align="center"> <img src="figures/unicl_comparison.png" width=98%/> </p>:collision: All previous links are broken. Please find all checkpoints here: https://github.com/microsoft/UniCL/releases/tag/v1.0
Updates
- [11/24/2022] KLITE, the knowledge-augmented version of UniCL, is publicly released on GitHub.
- :collision: [10/05/2022] How do we use the pretrained UniCL checkpoints? Beyond the zero-shot classification shown in our paper, we can use them for object detection. RegionCLIP now supports using pretrained UniCL transformer models, such as Swin and ViT, for open-vocabulary object detection without any finetuning. Check it out!
- [08/19/2022] We are organizing the ECCV Workshop on Computer Vision in the Wild (CVinW), where two challenges are hosted to evaluate the zero-shot, few-shot and full-shot performance of pre-trained vision models on downstream tasks:
- "Image Classification in the Wild (ICinW)" Challenge evaluates on 20 image classification tasks.
- "Object Detection in the Wild (ODinW)" Challenge evaluates on 35 object detection tasks.
$\qquad$ <img src="https://computer-vision-in-the-wild.github.io/eccv-2022/static/eccv2022/img/ECCV-logo3.png" width=10%/> [Workshop] $\qquad$ <img src="https://evalai.s3.amazonaws.com/media/logos/4e939412-a9c0-46bd-9797-5ba0bd0a9095.jpg" width=10%/> [IC Challenge] $\qquad$ <img src="https://evalai.s3.amazonaws.com/media/logos/3a31ae6e-a990-48fb-b2c3-1e7da9d17a20.jpg" width=10%/> [OD Challenge]
- [06/19/2022] Released the evaluation benchmark used in UniCL, ELEVATER, which contains 20 downstream image classification tasks. More info: [Benchmark] [Toolkit] [Paper]
- [06/04/2022] Check out our Hugging Face Gradio demo.
- [05/21/2022] Released pretrained model and zero-shot evaluation on ImageNet-1k.
Benchmarking
Image-label training augmented by image-text pairs
Model | Training Set | Top-1 on IN-1K | ZS on 14 datasets | Download |
---|---|---|---|---|
Swin-T | IN-1K | 79.9 | 30.2 | ckpt/config |
Swin-T | IN-1K + GCC-3M | 80.2 | 39.0 | ckpt/config |
Swin-T | IN-1K + GYFCC-14M | 81.1 | 40.0 | ckpt/config |
Swin-T | IN-1K + GCC-15M | 81.8 | 45.1 | ckpt/config |
Note that all the above models are trained without strong data augmentations like mixup and cutmix.
Image-text learning augmented by image-label data
Model | Training Set | ZS on IN-1K | ZS on 14 datasets | ZS on 20 datasets | Download |
---|---|---|---|---|---|
Swin-T | YFCC-14M | 30.1 | 36.3 | - | ckpt/config |
Swin-T | IN-21K | 28.5 | 37.8 | - | ckpt/config |
Swin-T | IN-22K | 66.8 | 38.9 | - | ckpt/config |
Swin-T | IN-21K (half) + YFCC-14M (half) | 36.4 | 45.5 | - | ckpt/config |
Swin-T | IN-21K + YFCC-14M | 40.5 | 49.1 | - | ckpt/config |
Swin-B | IN-21K | 29.9 | 42.4 | - | ckpt/config |
Swin-B | IN-21K (half) + YFCC-14M (half) | 41.1 | 48.5 | - | ckpt/config |
Swin-B | IN-21K + YFCC-14M | 44.3 | 52.2 | - | ckpt/config |
Swin-B | IN-21K + GCC-15M + YFCC-14M | 52.2 | - | 43.2 | ckpt/config |
Focal-B | IN-21K + GCC-15M + YFCC-14M | 54.2 | - | 44.0 | ckpt/config |
NOTE: The "ZS on 20 datasets" setting is the one used in the ICinW benchmark.
Getting Started
Installation
Please follow INSTALL.md for installation.
Data preparation
Please follow DATA.md for data preparation.
Evaluation
To evaluate a pre-trained UniCL on ImageNet val, run:
python -m torch.distributed.launch --nproc_per_node <num-of-gpus-to-use> --master_port 12345 main.py --eval \
--cfg <config-file> --resume <checkpoint> --data-path <imagenet-path>
For example, to evaluate the UniCL Swin-Tiny model trained on YFCC-14M with a single GPU, run:
python -m torch.distributed.launch --nproc_per_node 1 --master_port 12345 main.py --eval \
--cfg configs/unicl_swin_tiny.yaml --resume yfcc14m.pth --data-path <imagenet-path>
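Conceptually, the zero-shot evaluation matches each image embedding against text embeddings of class-name prompts and predicts the closest class. The sketch below illustrates that idea only; the `encode_image`/`encode_text` method names, the `tokenize` helper, and the prompt template are hypothetical placeholders rather than this repository's API, so use `main.py --eval` as shown above for actual evaluation.

```python
# Illustrative sketch of UniCL-style zero-shot classification.
# Method names and the tokenizer are placeholders, not the repo's actual API.
import torch

@torch.no_grad()
def zero_shot_predict(model, tokenize, images, class_names, device="cuda"):
    # Encode one prompt per class, e.g. "a photo of a {class name}."
    prompts = [f"a photo of a {name}." for name in class_names]
    text_feats = model.encode_text(tokenize(prompts).to(device))
    text_feats = text_feats / text_feats.norm(dim=-1, keepdim=True)

    # Encode images and L2-normalize.
    image_feats = model.encode_image(images.to(device))
    image_feats = image_feats / image_feats.norm(dim=-1, keepdim=True)

    # Cosine similarity against every class prompt; the highest score wins.
    logits = image_feats @ text_feats.t()
    return logits.argmax(dim=-1)
```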
The Image Classification in the Wild Benchmark
Interested in evaluating UniCL on downstream image classification tasks and comparing performance on the same task suite? We release the ELEVATER benchmark, which contains 20 downstream image classification tasks. The accompanying software toolkit is also released to ease the process of onboarding new models. It will be hosted as a challenge at the CV in the Wild Workshop @ ECCV 2022. We hope our benchmark and toolkit encourage the community to tackle the challenge of image classification in the wild!
Please see more instructions: [Benchmark] [Toolkit] [Paper]
Citation
If you find this repo useful for your project, please consider citing it with the following BibTeX entry:
@misc{yang2022unified,
title={Unified Contrastive Learning in Image-Text-Label Space},
author={Jianwei Yang and Chunyuan Li and Pengchuan Zhang and Bin Xiao and Ce Liu and Lu Yuan and Jianfeng Gao},
year={2022},
eprint={2204.03610},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
Acknowledgement
Our codebase is built on Swin Transformer, Focal Transformer and FocalNet.
Contributing
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.