Vision Transformers are Parameter-Efficient Audio-Visual Learners

📗 Paper | [🏠 Project Page](https://genjib.github.io/project_page/LAVISH/)

License: MIT <img src="https://raw.githubusercontent.com/facebookresearch/unbiased-teacher/main/teaser/pytorch-logo-dark.png" width="10%">

<p align="center"> <img src="https://genjib.github.io/project_page/LAVISH/assets/teaser.png" width="50%"> </p>

This is the PyTorch implementation of our paper: <br>

Vision Transformers are Parameter-Efficient Audio-Visual Learners <br>

Yan-Bo Lin, Yi-Lin Sung, Jie Lei, Mohit Bansal, and Gedas Bertasius<br>

<font color=#008000>In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023 </font>

Our Method

<p align="center"> <img src="https://genjib.github.io/project_page/LAVISH/assets/method.png" width="70%"> </p>
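For readers new to parameter-efficient tuning, the sketch below illustrates the general recipe the title refers to: freeze the pretrained transformer backbone and train only small residual bottleneck adapters inserted between its blocks. This is a generic illustration with assumed module names and sizes, not the exact LAVISH audio-visual adapter; see the paper and this repository for the real implementation.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Small trainable residual bottleneck: down-project, nonlinearity, up-project."""
    def __init__(self, dim: int, reduction: int = 8):
        super().__init__()
        self.down = nn.Linear(dim, dim // reduction)
        self.act = nn.GELU()
        self.up = nn.Linear(dim // reduction, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))  # residual adapter update

class AdaptedBlock(nn.Module):
    """Wraps a frozen transformer block with a trainable adapter on its output."""
    def __init__(self, block: nn.Module, dim: int):
        super().__init__()
        self.block = block
        self.adapter = BottleneckAdapter(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.adapter(self.block(x))

# Toy frozen "backbone"; in the paper this role is played by a pretrained ViT.
dim = 192
backbone = nn.ModuleList(
    nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
    for _ in range(4)
)
for p in backbone.parameters():
    p.requires_grad = False  # the backbone stays frozen

adapted = nn.ModuleList(AdaptedBlock(b, dim) for b in backbone)
x = torch.randn(2, 16, dim)  # (batch, tokens, dim)
for blk in adapted:
    x = blk(x)
```

Because only the adapter weights receive gradients, the number of updated parameters is a small fraction of the backbone size, which is the sense in which the approach is parameter-efficient.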

📝 Preparation

🎓 Cite

If you use this code in your research, please cite:

@InProceedings{LAVISH_CVPR2023,
author = {Lin, Yan-Bo and Sung, Yi-Lin and Lei, Jie and Bansal, Mohit and Bertasius, Gedas},
title = {Vision Transformers are Parameter-Efficient Audio-Visual Learners},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year = {2023}
}

👍 Acknowledgments

Our code is based on AVSBench and MUSIC-AVQA.

✍ Future work: model checkpoints

| Tasks | Checkpoints |
| :---: | :---------: |
| AVE   | model       |
| AVS   | model       |
| AVQA  | model       |
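Once the checkpoints are released, loading one should follow the usual PyTorch pattern. Below is a minimal sketch; the file name `ave_model.pt`, the placeholder model, and the state-dict layout are all assumptions to be replaced by the actual released files and the model class from this repository.

```python
import torch
import torch.nn as nn

# Placeholder model; substitute the actual LAVISH model class from this repo.
model = nn.Linear(10, 10)

# "ave_model.pt" is a hypothetical file name for the AVE checkpoint.
ckpt = torch.load('ave_model.pt', map_location='cpu')
# Some checkpoints nest the weights under a 'state_dict' key; handle both cases.
state = ckpt.get('state_dict', ckpt) if isinstance(ckpt, dict) else ckpt
# strict=False tolerates mismatched heads between pretraining and the target task.
missing, unexpected = model.load_state_dict(state, strict=False)
print(f'missing keys: {len(missing)}, unexpected keys: {len(unexpected)}')
model.eval()  # switch to inference mode
```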