# Concatenated Masked Autoencoders as Spatial-Temporal Learner: A PyTorch Implementation
<p align="center"> <img src="https://github.com/minhoooo1/CatMAE/blob/master/figures/arch.png" width="800"> </p>

This is a PyTorch re-implementation of the paper *Concatenated Masked Autoencoders as Spatial-Temporal Learner*.
## Requirements

- pytorch (2.0.1)
- timm==0.4.12
- decord
## Data Preparation

We use two datasets in total, Kinetics-400 and DAVIS-2017, for pre-training and downstream tasks.

- The Kinetics-400 used in our experiments comes from here.
- The DAVIS-2017 used in our experiments comes from here.
## Pre-training

Arguments set in the `config_file` take precedence over the defaults. To pre-train CatMAE-ViT-Small, run the following command:

```shell
python main_pretrain.py --config_file configs/pretrain_catmae_vit-s-16.json
```
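For orientation, a config file along these lines would cover the arguments discussed below. The key names and structure are assumptions inferred from this README's argument list, not the repo's actual schema — consult configs/pretrain_catmae_vit-s-16.json for the authoritative format:

```json
{
  "data_path": "/path/to/Kinetics-400/videos_train/",
  "model": "catmae_vit_small",
  "batch_size": 256,
  "accum_iter": 2,
  "epochs": 150,
  "repeated_sampling": 2,
  "norm_pix_loss": true
}
```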
Some important arguments:

- The `data_path` is /path/to/Kinetics-400/videos_train/.
- The effective batch size is `batch_size` (256) * number of GPUs (4) * `accum_iter` (2) = 2048.
- The effective number of epochs is `epochs` (150) * `repeated_sampling` (2) = 300.
- The default `model` is catmae_vit_small (with the default patch_size and decoder_dim_dep_head); to train ViT-B, you can also change it to catmae_vit_base.
- Here we use `--norm_pix_loss` as the target for better representation learning.
- `blr` is the base learning rate. The actual `lr` is computed by the linear scaling rule: `lr` = `blr` * effective_batch_size / 256.
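The arithmetic behind the effective batch size, effective epochs, and the linear lr scaling rule above can be sketched as follows (a minimal illustration using this README's example values; the variable names are illustrative, and the `blr` value is hypothetical, not a recommended setting):

```python
# Example values from the README's argument list.
batch_size = 256          # per-GPU batch size
num_gpus = 4              # number of GPUs
accum_iter = 2            # gradient accumulation iterations
epochs = 150              # epochs as passed on the command line
repeated_sampling = 2     # repeated sampling factor
blr = 1.5e-4              # hypothetical base learning rate, for illustration only

# Effective batch size = batch_size * num_gpus * accum_iter
effective_batch_size = batch_size * num_gpus * accum_iter   # 2048

# Effective epochs = epochs * repeated_sampling
effective_epochs = epochs * repeated_sampling               # 300

# Linear scaling rule: lr = blr * effective_batch_size / 256
lr = blr * effective_batch_size / 256

print(effective_batch_size, effective_epochs, lr)
```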
## Pre-trained checkpoints

The following table provides the pre-trained checkpoints used in the paper.
<table><tbody> <!-- START TABLE --> <!-- TABLE HEADER --> <th valign="bottom"></th> <th valign="bottom">ViT/16-Small</th> <th valign="bottom">ViT/8-Small</th> <!-- TABLE BODY --> <tr><td align="left">pre-trained checkpoint</td> <td align="center"><a href="https://drive.google.com/file/d/1xWrpSxZy6d3r_XnsZmXvqM1XUReJ7v97/view?usp=drive_link">download</a></td> <td align="center"><a href="https://drive.google.com/file/d/1ksYZJPa2pZ-NYWjYKLh05-bt_A40Rhm7/view?usp=drive_link">download</a></td> </tr> <tr><td align="left">DAVIS 2017 J&F<sub>m</sub></td> <td align="center">62.5</td> <td align="center">70.4</td> </tr> </tbody></table>

## Video segmentation in DAVIS-2017
The video segmentation instructions are in DAVIS.md.
## Action recognition in Kinetics-400
The action recognition instructions are in KINETICS400.md.