Awesome Video Domain Adaptation

MIT License

This repo is a comprehensive collection of awesome research (papers, code, etc.) and other resources on video domain adaptation.

Our comprehensive survey on Video Unsupervised Domain Adaptation with Deep Learning is now available. Please check our paper on arXiv.

Domain adaptation has been a focus of research in transfer learning, as it improves model robustness, which is crucial for applying models to real-world applications. Despite the long history of domain adaptation research, there has been limited discussion of video domain adaptation. This repo aims to present a collection of research on video domain adaptation, including papers, code, etc.

Feel free to star, fork, or raise an issue to include your research or to add more categories! Discussion is most welcome!

Contents

- [Explanatory Notes](#explanatory-notes)
- [Papers](#papers)
  - [Closed-set VDA](#closed-set-vda)
  - [Partial-set VDA](#partial-set-vda)
  - [Open-set VDA](#open-set-vda)
  - [Multi-Source VDA](#multi-source-vda)
  - [Source-Free or Test-time VDA](#source-free-or-test-time-vda)
  - [Target-Free VDA](#target-free-vda)
  - [Few-shot VDA](#few-shot-vda)
  - [Continual VDA](#continual-vda)
  - [Zero-shot VDA (Video Domain Generalization)](#zero-shot-vda-video-domain-generalization)
  - [Multi-Modal VDA](#multi-modal-vda)
  - [Other Topics in Video Transfer Learning](#other-topics-in-video-transfer-learning)
- [Datasets and Benchmarks](#datasets-and-benchmarks)
- [Useful Tools and Other Resources](#useful-tools-and-other-resources)
- [Challenges for Video Domain Adaptation](#challenges-for-video-domain-adaptation)

<!-- - [Survey](#survey) -->
<!-- - [Multi-Target VDA](#multi-target-vda) -->
<!-- - [Universal VDA](#universal-vda) -->
<!-- - [Zero-shot or Few-shot VDA](#zero-shot-or-few-shot-vda) -->
<!-- - [Black-box VDA](#black-box-vda) -->
<!-- - [Active VDA](#active-vda) -->

Explanatory Notes

This repository categorizes video domain adaptation papers according to the domain adaptation scenario (i.e., closed-set, partial-set, source-free, etc.), sorted by date of publication or first public appearance. These include semi-supervised, weakly-supervised, and unsupervised DA. By default, VDA research focuses on action recognition; papers addressing other tasks are annotated accordingly.

Note: This repository is inspired by the ADA repository, which collects awesome domain adaptation papers. For more research on domain adaptation (with images, point clouds, etc.), you may check out that repository.

Papers

Closed-set VDA

Conference

Journal

ArXiv and Workshops

Partial-set VDA

Conference

<!-- **ArXiv and Workshops** -->

Open-set VDA

Conference

Journal

<!-- ## Universal VDA -->
<!-- **Conference** -->
<!-- - [Unsupervised and Semi-Supervised Domain Adaptation for Action Recognition from Drones](https://openaccess.thecvf.com/content_WACV_2020/papers/Choi_Unsupervised_and_Semi-Supervised_Domain_Adaptation_for_Action_Recognition_from_Drones_WACV_2020_paper.pdf) IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) (2020) -->

Multi-Source VDA

ArXiv and Workshops

Source-Free or Test-time VDA

Conference

<!-- **ArXiv and Workshops** -->

Target-Free VDA

Conference

Few-shot VDA

Conference

<!-- **ArXiv and Workshops** -->

Continual VDA

Conference

ArXiv and Workshops

Zero-shot VDA (Video Domain Generalization)

Conference

Journal

Multi-Modal VDA

The modalities used are listed for each entry.

Conference

<!-- **ArXiv and Workshops** -->

Other Topics in Video Transfer Learning

Conference

Journal

ArXiv

Datasets and Benchmarks

We collect relevant datasets designed for video domain adaptation. By default, datasets are designed for closed-set video domain adaptation addressing action recognition. Note that downloading some datasets may require permission. You are advised to download the common action recognition datasets (e.g., HMDB51, UCF101, Kinetics) first, since they serve as the sources for many of these cross-domain video datasets.
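As a minimal, hedged sketch of getting started with two of the common source datasets, the snippet below loads UCF101 and HMDB51 clips with `torchvision` (the paths are placeholders; it assumes you have already downloaded the videos and the official train/test split files, and have the PyAV video backend installed):

```python
# Minimal sketch: loading two common action-recognition datasets that serve
# as source/target domains in many cross-domain video benchmarks.
# Paths are placeholders; download the videos and split files beforehand.
from torchvision import datasets

FRAMES_PER_CLIP = 16

# UCF101 (often the source domain in UCF-HMDB setups)
ucf101 = datasets.UCF101(
    root="data/ucf101/videos",             # placeholder path to video folders
    annotation_path="data/ucf101/splits",  # official train/test split files
    frames_per_clip=FRAMES_PER_CLIP,
    train=True,
)

# HMDB51 (often the target domain)
hmdb51 = datasets.HMDB51(
    root="data/hmdb51/videos",             # placeholder path to video folders
    annotation_path="data/hmdb51/splits",  # official train/test split files
    frames_per_clip=FRAMES_PER_CLIP,
    train=True,
)

# Each sample is (video, audio, label); video is a uint8 tensor of shape [T, H, W, C].
video, _, label = ucf101[0]
print(video.shape, label)
```

Cross-domain datasets built on these sources (e.g., subsets pairing overlapping action classes) typically require remapping labels to the shared class set; consult each dataset's own instructions for the exact splits.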

2024

2023

2021-2022

2018-2020

Before 2015

Useful Tools and Other Resources

Challenges for Video Domain Adaptation

Note: these are the latest editions of the respective challenges; please check previous editions through the respective websites.