
Unofficial PyTorch implementation of Video Cloze Procedure for Self-Supervised Spatio-Temporal Learning [AAAI'20]

The code is mainly based on VCOP [CVPR'19].

Requirements

This is my experimental environment:

PyTorch 1.3.0
Python 3.7.4

Supported features

Scripts

Dataset preparation

You can follow VCOP [CVPR'19] to prepare the dataset.

If you have already decoded frames from the videos, you can edit framefolder = os.path.join('/path/to/your/frame/folders', videoname[:-4]) in ucf101.py and directly use our provided list, as illustrated below.
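
For illustration only (the base directory is a placeholder from the line above, and the UCF101-style file name is an assumption), the slice videoname[:-4] simply strips a 4-character extension such as .avi:

import os

videoname = 'v_ApplyEyeMakeup_g01_c01.avi'  # hypothetical UCF101 video file name
framefolder = os.path.join('/path/to/your/frame/folders', videoname[:-4])
print(framefolder)  # /path/to/your/frame/folders/v_ApplyEyeMakeup_g01_c01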

Train self-supervised part

python train_vcp.py

Retrieve video clips

python retrieve_clips.py --ckpt=/path/to/self-supervised_model
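
As a rough sketch of the standard VCOP/VCP-style retrieval protocol (not the actual code of retrieve_clips.py; the function name and the precomputed feature/label arrays are assumptions), clips from the test split query the training split, and a query counts as a top-k hit when any of its k nearest training clips shares its action class:

import numpy as np

def topk_retrieval_acc(train_feats, train_labels, test_feats, test_labels,
                       ks=(1, 5, 10, 20, 50)):
    # Features: (n, d) float arrays; labels: (n,) int arrays of class ids.
    # L2-normalize so that a dot product equals cosine similarity.
    train_feats = train_feats / np.linalg.norm(train_feats, axis=1, keepdims=True)
    test_feats = test_feats / np.linalg.norm(test_feats, axis=1, keepdims=True)
    sims = test_feats @ train_feats.T      # (n_test, n_train) similarity matrix
    ranks = np.argsort(-sims, axis=1)      # nearest training clips first
    accs = {}
    for k in ks:
        hits = [test_labels[i] in train_labels[ranks[i, :k]]
                for i in range(len(test_labels))]
        accs[f'top{k}'] = 100.0 * float(np.mean(hits))
    return accs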

Fine-tune models for video recognition

python ft_classify.py --ckpt=/path/to/self-supervised_model

If you want to train models from scratch, use

python train_classify.py --mode=train

Test models for video recognition

python train_classify.py --ckpt=/path/to/fine-tuned_model

Results

Retrieval results

All numbers are top-k retrieval accuracies (%).

| Tag | Modality | top1 | top5 | top10 | top20 | top50 |
| --- | --- | --- | --- | --- | --- | --- |
| R3D (VCP, paper) | RGB | 18.6 | 33.6 | 42.5 | 53.5 | 68.1 |
| R3D (VCP, reimplemented) | RGB | 24.2 | 41.2 | 50.3 | 60.2 | 74.8 |
| R3D (VCP, reimplemented) | Res | 26.3 | 44.8 | 55.0 | 65.4 | 78.7 |

Recognition results

The R3D here uses 3D convolutions and ResNet blocks; however, the architecture is not ResNet-18-3D.

| Dataset | Tag | Modality | Acc (%) |
| --- | --- | --- | --- |
| UCF101 | R3D (scratch) | RGB | 57.2 |
| UCF101 | R3D (scratch) | Res | 63.0 |
| UCF101 | R3D (VCP, paper) | RGB | 68.1 |
| UCF101 | R3D (VCP, reimplemented) | RGB | 67.4 |
| UCF101 | R3D (VCP, reimplemented) | Res | 71.3 |

Residual clips + 3D CNN

Residual clips with 3D CNNs are effective. More information about this part can be found in Rethinking Motion Representation: Residual Frames with 3D ConvNets for Better Action Recognition (previous but more detailed version) and Motion Representation Using Residual Frames with 3D CNN (short version with better results).

The key code for this part is

shift_x = torch.roll(x, 1, 2)    # roll by one step along dim 2 (the temporal axis)
x = ((shift_x - x) + 1) / 2      # frame difference, rescaled into [0, 1]

which is slightly different from the formulation in the papers.
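
As a self-contained sketch of applying this transform to a batch of clips (the (N, C, T, H, W) layout, function name, and example shapes are assumptions; dim 2 must be the temporal axis for the roll to shift frames):

import torch

def to_residual_clip(x: torch.Tensor) -> torch.Tensor:
    # x: float clip tensor with values in [0, 1], shape (N, C, T, H, W)
    shift_x = torch.roll(x, 1, 2)     # previous frame at each time step (wraps at t=0)
    return ((shift_x - x) + 1) / 2    # frame difference, rescaled back into [0, 1]

clips = torch.rand(2, 3, 16, 112, 112)  # e.g. two 16-frame RGB clips
res_clips = to_residual_clip(clips)     # same shape as the input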

Citation

VCP

@article{luo2020video,
  title={Video cloze procedure for self-supervised spatio-temporal learning},
  author={Luo, Dezhao and Liu, Chang and Zhou, Yu and Yang, Dongbao and Ma, Can and Ye, Qixiang and Wang, Weiping},
  journal={arXiv preprint arXiv:2001.00294},
  year={2020}
}

Residual clips + 3D CNN

@article{tao2020rethinking,
  title={Rethinking Motion Representation: Residual Frames with 3D ConvNets for Better Action Recognition},
  author={Tao, Li and Wang, Xueting and Yamasaki, Toshihiko},
  journal={arXiv preprint arXiv:2001.05661},
  year={2020}
}

@article{tao2020motion,
  title={Motion Representation Using Residual Frames with 3D CNN},
  author={Tao, Li and Wang, Xueting and Yamasaki, Toshihiko},
  journal={arXiv preprint arXiv:2006.13017},
  year={2020}
}