# Vision Transformers are Parameter-Efficient Audio-Visual Learners
<img src="https://raw.githubusercontent.com/facebookresearch/unbiased-teacher/main/teaser/pytorch-logo-dark.png" width="10%">
<p align="center"> <img src="https://genjib.github.io/project_page/LAVISH/assets/teaser.png" width="50%"> </p>

This is the PyTorch implementation of our paper: <br>
Vision Transformers are Parameter-Efficient Audio-Visual Learners <br>
Yan-Bo Lin, Yi-Lin Sung, Jie Lei, Mohit Bansal, and Gedas Bertasius<br>
<font color=#008000>In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023 </font>
## Our Method
<p align="center"> <img src="https://genjib.github.io/project_page/LAVISH/assets/method.png" width="70%"> </p>
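The core idea is to keep the pretrained ViT backbone frozen and train only small audio-visual adapter modules injected into its layers. Below is a minimal, illustrative PyTorch sketch of that general pattern; the module name `AVAdapter`, the bottleneck width, and the cross-attention layout are our assumptions for illustration, not the paper's exact LAVISH adapter.

```python
import torch
import torch.nn as nn

class AVAdapter(nn.Module):
    """Illustrative audio-visual adapter (hypothetical simplification).

    Cross-modal attention followed by a small bottleneck MLP, added
    residually to a frozen transformer block's token stream, so that
    only a small number of parameters need to be trained.
    """

    def __init__(self, dim: int = 768, bottleneck: int = 64, num_heads: int = 4):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.down = nn.Linear(dim, bottleneck)  # project to a small latent width
        self.up = nn.Linear(bottleneck, dim)    # project back to the ViT width
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor, other: torch.Tensor) -> torch.Tensor:
        # x:     tokens of one modality, shape (B, N, dim)
        # other: tokens of the other modality, shape (B, M, dim)
        fused, _ = self.cross_attn(x, other, other)     # attend across modalities
        return x + self.up(self.act(self.down(fused)))  # residual bottleneck update

# Only the adapter is trained; the ViT backbone's parameters stay frozen.
adapter = AVAdapter()
visual = torch.randn(2, 196, 768)  # e.g. 14x14 patch tokens
audio = torch.randn(2, 128, 768)   # e.g. audio spectrogram tokens
out = adapter(visual, audio)
print(out.shape)  # torch.Size([2, 196, 768])
```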
## Preparation

See each task folder for detailed settings:
- Audio-Visual Event Localization: `./AVE`
- Audio-Visual Segmentation: `./AVS`
- Audio-Visual Question Answering: `./AVQA`
## Cite
If you use this code in your research, please cite:
    @InProceedings{LAVISH_CVPR2023,
      author    = {Lin, Yan-Bo and Sung, Yi-Lin and Lei, Jie and Bansal, Mohit and Bertasius, Gedas},
      title     = {Vision Transformers are Parameter-Efficient Audio-Visual Learners},
      booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
      year      = {2023}
    }
## Acknowledgments
Our code is based on AVSBench and MUSIC-AVQA.
## Future work: model checkpoints
| Task | Checkpoint |
| --- | --- |
| AVE | model |
| AVS | model |
| AVQA | model |