SelTDA

This repository holds the official code of SelTDA, the self-training framework introduced in our CVPR 2023 paper "Q: How to Specialize Large Vision-Language Models to Data-Scarce VQA Tasks? A: Self-Train on Unlabeled Images!".

Figure: SelTDA teaser.

Environment

conda env create -f environment.yaml

Data

Downloads and Preprocessing

In general, the code expects each VQA dataset to be a single JSON file whose top-level value is a list of dictionaries. In schemas.py, we provide Pydantic models that you can use to define your own datasets or to verify that your data is in the expected format.
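As a minimal sketch of loading and validating such a file with Pydantic: the field names (image, question, answer) and the helper load_vqa_dataset below are illustrative assumptions for this example, not the repository's actual schema; refer to the models in schemas.py for the expected format.

# Minimal sketch of loading and validating a VQA dataset file.
# NOTE: field names here are illustrative assumptions; use the Pydantic
# models defined in schemas.py for the actual expected schema.
import json
from typing import List

from pydantic import BaseModel


class VQAExample(BaseModel):
    image: str     # path to the image file (hypothetical field name)
    question: str  # question about the image (hypothetical field name)
    answer: str    # ground-truth answer string (hypothetical field name)


def load_vqa_dataset(path: str) -> List[VQAExample]:
    # The whole dataset is one JSON file whose top-level value is a list of dicts.
    with open(path) as f:
        records = json.load(f)
    # Constructing a model per record raises a ValidationError on malformed entries.
    return [VQAExample(**record) for record in records]


if __name__ == "__main__":
    examples = load_vqa_dataset("path/to/your_vqa_dataset.json")
    print(f"Loaded and validated {len(examples)} VQA examples")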

Experiments

See the examples/ directory for usage examples.

Citation

@InProceedings{Khan_2023_CVPR,
    author    = {Khan, Zaid and BG, Vijay Kumar and Schulter, Samuel and Yu, Xiang and Fu, Yun and Chandraker, Manmohan},
    title     = {Q: How To Specialize Large Vision-Language Models to Data-Scarce VQA Tasks? A: Self-Train on Unlabeled Images!},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2023},
    pages     = {15005-15015}
}

Acknowledgements

This code is heavily based on salesforce/BLIP.