
Official PyTorch Implementation of EVAD

EVAD Framework

Efficient Video Action Detection with Token Dropout and Context Refinement<br>
Lei Chen, Zhan Tong, Yibing Song, Gangshan Wu, Limin Wang

News

- [2023.07.14] Our EVAD is accepted by ICCV 2023!
- [2023.06.09] Code and model weights have been released!

Installation

Please find installation instructions in INSTALL.md.

Data Preparation

Please follow the instructions in DATASET.md to prepare the AVA dataset.

Model Zoo

| method | keep rate | enhanced weight | config | backbone | pre-train | #frame x sample rate | GFLOPs | mAP | model |
| ------ | --------- | --------------- | ----------------- | ------------------ | --------- | -------------------- | ------ | ---- | ---- |
| EVAD   | 1.0       | ✗               | ViT_B_16x4        | ViT-B (VideoMAE)   | K400      | 16x4                 | 425    | 32.1 | link |
| EVAD   | 0.7       | ✗               | ViT_B_16x4_KTP    | ViT-B (VideoMAE)   | K400      | 16x4                 | 243    | 32.3 | link |
| EVAD   | 0.6       | ✓               | ViT_B_16x4_KTP_EW | ViT-B (VideoMAE)   | K400      | 16x4                 | 209    | 31.8 | link |
| EVAD   | 0.7       | ✗               | ViT_B_16x4_KTP    | ViT-B (VideoMAEv2) | K710+K400 | 16x4                 | 243    | 37.7 | link |
| EVAD   | 0.7       | ✗               | ViT_L_16x4_KTP    | ViT-L (VideoMAE)   | K700      | 16x4                 | 737    | 39.7 | link |

Training

```shell
python -m torch.distributed.launch --nproc_per_node=8 projects/evad/run_net.py \
  --cfg "projects/evad/configs/config_file.yaml" \
  DATA.PATH_TO_DATA_DIR "path/to/ava" \
  TRAIN.CHECKPOINT_FILE_PATH "path/to/pretrain.pth" \
  OUTPUT_DIR "path/to/output"
```

Validation

You can load a specific checkpoint file with `TEST.CHECKPOINT_FILE_PATH`, or let the script automatically load the last checkpoint from the output folder.

```shell
python -m torch.distributed.launch --nproc_per_node=1 projects/evad/run_net.py \
  --cfg "projects/evad/configs/config_file.yaml" \
  DATA.PATH_TO_DATA_DIR "path/to/ava" \
  TRAIN.ENABLE False TEST.ENABLE True NUM_GPUS 1 \
  OUTPUT_DIR "path/to/output"
```
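For example, evaluating a checkpoint downloaded from the Model Zoo might look like the following sketch; the config name and checkpoint path are placeholders, not actual release file names:

```shell
# Sketch: evaluate one specific checkpoint instead of autoloading the latest
# one from OUTPUT_DIR. "path/to/checkpoint.pth" is a placeholder for a
# Model Zoo weight file you have downloaded.
python -m torch.distributed.launch --nproc_per_node=1 projects/evad/run_net.py \
  --cfg "projects/evad/configs/config_file.yaml" \
  DATA.PATH_TO_DATA_DIR "path/to/ava" \
  TRAIN.ENABLE False TEST.ENABLE True NUM_GPUS 1 \
  TEST.CHECKPOINT_FILE_PATH "path/to/checkpoint.pth" \
  OUTPUT_DIR "path/to/output"
```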

Acknowledgements

This project is built upon SparseR-CNN and PySlowFast. We also reference and use some code from WOO and VideoMAE. Thanks to the contributors of these great codebases.

License

The majority of this project is released under the CC-BY-NC 4.0 license as found in the LICENSE file. Portions of the project are available under separate license terms: SlowFast and pytorch-image-models are licensed under the Apache 2.0 license. SparseR-CNN is licensed under the MIT license.

Citation

If you find this project useful, please feel free to leave a star and cite our paper:

```
@inproceedings{chen2023efficient,
  author    = {Chen, Lei and Tong, Zhan and Song, Yibing and Wu, Gangshan and Wang, Limin},
  title     = {Efficient Video Action Detection with Token Dropout and Context Refinement},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year      = {2023}
}

@article{chen2023efficient,
  author  = {Chen, Lei and Tong, Zhan and Song, Yibing and Wu, Gangshan and Wang, Limin},
  title   = {Efficient Video Action Detection with Token Dropout and Context Refinement},
  journal = {arXiv preprint arXiv:2304.08451},
  year    = {2023}
}
```