Home

Awesome

Benchmarking the robustness of Spatial-Temporal Models

This repository contains the code for the NeurIPS 2021 Benchmark and Dataset Track paper - Benchmarking the Robustness of Spatial-Temporal Models Against Corruptions.

Python 3.7+, PyTorch 1.7+, and FFmpeg are required.

Requirements

```shell
pip3 install -r requirements.txt
```

Mini Kinetics-C


Download original Kinetics400 from link.

The Mini Kinetics-C contains half of the classes in Kinetics400. All the classes can be found in mini-kinetics-200-classes.txt.

Mini Kinetics-C Leaderboard

Corruption robustness of spatial-temporal models trained on clean Mini Kinetics and evaluated on Mini Kinetics-C.

| Approach | Reference | Backbone | Input Length | Sampling Method | Clean Accuracy | mPC | rPC |
| --- | --- | --- | --- | --- | --- | --- | --- |
| TimeSformer | Gedas et al. | Transformer | 32 | Uniform | 82.2 | 71.4 | 86.9 |
| 3D ResNet | K. Hara et al. | ResNet-50 | 32 | Uniform | 73.0 | 59.2 | 81.1 |
| I3D | J. Carreira et al. | InceptionV1 | 32 | Uniform | 70.5 | 57.7 | 81.8 |
| SlowFast 8x4 | C. Feichtenhofer et al. | ResNet-50 | 32 | Uniform | 69.2 | 54.3 | 78.5 |
| 3D ResNet | K. Hara et al. | ResNet-18 | 32 | Uniform | 66.2 | 53.3 | 80.5 |
| TAM | Q. Fan et al. | ResNet-50 | 32 | Uniform | 66.9 | 50.8 | 75.9 |
| X3D-M | C. Feichtenhofer | ResNet-50 | 32 | Uniform | 62.6 | 48.6 | 77.6 |
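The mPC and rPC columns can be computed from per-corruption accuracies: mPC averages accuracy over every corruption type and severity level, and rPC normalizes mPC by the clean accuracy. A minimal sketch (the dict-of-lists input layout is an assumption made here for illustration):

```python
def mpc(corruption_accs):
    """Mean performance under corruption: average accuracy over
    all corruption types and all severity levels."""
    accs = [a for severities in corruption_accs.values() for a in severities]
    return sum(accs) / len(accs)

def rpc(mpc_value, clean_acc):
    """Relative performance under corruption: mPC normalized by the
    clean accuracy, expressed as a percentage."""
    return 100.0 * mpc_value / clean_acc
```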

For a fair comparison, we recommend submitting results for approaches that follow these settings: a ResNet-50 backbone, an input length of 32, and uniform sampling at the clip level. Any result on our benchmark can be submitted via pull request.

Mini SSV2-C


Download the original Something-Something-V2 dataset from link.

The Mini SSV2-C contains half of the classes in Something-Something-V2. All the classes can be found in mini-ssv2-87-classes.txt.

Mini SSV2-C Leaderboard

Corruption robustness of spatial-temporal models trained on clean Mini SSV2 and evaluated on Mini SSV2-C.

| Approach | Reference | Backbone | Input Length | Sampling Method | Clean Accuracy | mPC | rPC |
| --- | --- | --- | --- | --- | --- | --- | --- |
| TimeSformer | Gedas et al. | Transformer | 16 | Uniform | 60.5 | 49.7 | 82.1 |
| I3D | J. Carreira et al. | InceptionV1 | 32 | Uniform | 58.5 | 47.8 | 81.7 |
| 3D ResNet | K. Hara et al. | ResNet-50 | 32 | Uniform | 57.4 | 46.6 | 81.2 |
| TAM | Q. Fan et al. | ResNet-50 | 32 | Uniform | 61.8 | 45.7 | 73.9 |
| 3D ResNet | K. Hara et al. | ResNet-18 | 32 | Uniform | 53.0 | 42.6 | 80.3 |
| X3D-M | C. Feichtenhofer | ResNet-50 | 32 | Uniform | 49.9 | 40.7 | 81.6 |
| SlowFast 8x4 | C. Feichtenhofer et al. | ResNet-50 | 32 | Uniform | 48.7 | 38.4 | 78.8 |

For a fair comparison, we recommend submitting results for approaches that follow these settings: a ResNet-50 backbone, an input length of 32, and uniform sampling at the clip level. Any result on our benchmark can be submitted via pull request.

Training and Evaluation

To help researchers reproduce the benchmark results provided in our leaderboard, we include a simple framework for training and evaluating the spatial-temporal models in the folder: benchmark_framework.

Running the code

Assume the structure of data directories is the following:

```
~/
  datadir/
    mini_kinetics/
      train/
        .../ (directories of class names)
          ...(hdf5 file containing video frames)
    mini_kinetics-c/
      .../ (directories of corruption names)
        .../ (directories of severity level)
          .../ (directories of class names)
            ...(hdf5 file containing video frames)
```

Train I3D on the Mini Kinetics dataset with 4 GPUs and 16 CPU threads (for data loading). The input length is 32, the batch size is 32, and the learning rate is 0.01.

```shell
python3 train.py --threed_data --dataset mini_kinetics400 --frames_per_group 1 --groups 32 --logdir snapshots/ \
--lr 0.01 --backbone_net i3d -b 32 -j 16 --cuda 0,1,2,3
```

Test I3D on the Mini Kinetics-C dataset (the pretrained model is loaded):

```shell
python3 test_corruption.py --threed_data --dataset mini_kinetics400 --frames_per_group 1 --groups 32 --logdir snapshots/ \
--pretrained snapshots/mini_kinetics400-rgb-i3d_v2-ts-max-f32-cosine-bs32-e50-v1/model_best.pth.tar --backbone_net i3d -b 32 -j 16 -e --cuda 0,1,2,3
```