FEDVSSL

This is a general-purpose repository for Federated Self-Supervised Learning for video understanding, built on top of MMCV and Flower.

<p align="center"> <img src="https://github.com/yasar-rehman/FEDVSSL/blob/master/FVSSL.png"/> </p>

Authors

Note:

As of December 2023, FedVSSL is now part of the Flower baselines.

Dataset

For both centralized and federated video SSL pretraining, we use Kinetics-400. We evaluate the quality of learned representations by applying them on two downstream datasets: UCF-101 and HMDB-51.

To aid reproducibility, we provide the Kinetics-400 dataset partitions for federated learning in the Data folder, with both iid and non-iid data distributions.

One can generate the non-iid version of Kinetics-400 with 100 clients (8 classes per client) by running:

```bash
python scripts/k400_non_iid.py
```

The iid version of Kinetics-400 with 100 clients (8 classes per client) can be generated by running:

```bash
python scripts/kinetics_json_splitter.py
```

Caution:

Note that the above two Python scripts assume that you have already downloaded the official train list of Kinetics-400.
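The non-iid split described above assigns each client samples from a fixed number of classes. The following is a minimal, hypothetical sketch of that idea (the function name, data layout, and sampling scheme are illustrative assumptions, not the actual logic of `scripts/k400_non_iid.py`):

```python
import random

# Illustrative non-IID partitioning: each of `num_clients` clients
# receives the samples of `classes_per_client` randomly chosen classes.
def non_iid_partition(samples_by_class, num_clients=100,
                      classes_per_client=8, seed=0):
    rng = random.Random(seed)
    class_ids = sorted(samples_by_class)
    partitions = {}
    for client in range(num_clients):
        chosen = rng.sample(class_ids, classes_per_client)
        partitions[client] = [s for c in chosen for s in samples_by_class[c]]
    return partitions

# Toy example: 400 classes with 5 dummy clips each.
toy = {c: [f"class{c}_clip{i}" for i in range(5)] for c in range(400)}
parts = non_iid_partition(toy)
```

With this scheme, classes may be shared across clients; the real script works from the official Kinetics-400 train list instead of toy data.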

FL pretrained Models

We provide a series of federated SSL pretrained models for VCOP, Speed, and CtP. All of these models are pretrained in a federated fashion on the non-iid version of Kinetics-400 (8 classes/client); see Table 1 in the manuscript. The annotations can be found in Data/Kinetics-400_annotations/ in this repository.

| Method | FL Pretrained Model |
| --- | --- |
| VCOP | VCOP5c1e540r |
| Speed | Speed5c1e540r |
| CtP | Ctp5c1e540r |

News

Dependencies

For a complete list of the required packages, please see the requirement.txt file. One can install all the requirements by running pip install -r requirement.txt.

Instructions

We recommend installing the Microsoft CtP framework, as it contains all the self-supervised learning frameworks built on top of MMCV. Here we provide a version of that framework modified specifically for FedVSSL.

Running Experiments

The abstract definitions of the classes are provided in reproduce_papers/fedssl/videossl.py.

| Method | Python file | Description |
| --- | --- | --- |
| FedAvg | main.py | Federates the SSL method using the conventional FedAvg method |
| FedVSSL $(\alpha=0, \beta=0)$ | main_cam_st_theta_b_wo_moment.py | FedAvg, but aggregating only the backbone network |
| FedVSSL $(\alpha=1, \beta=0)$ | main_cam_st_theta_b_loss_wo_moment.py | Loss-based aggregation, but aggregating only the backbone network |
| FedVSSL $(\alpha=0, \beta=1)$ | main_cam_st_theta_b_FedAvg_+SWA_wo_moment.py | FedAvg+SWA aggregation, but aggregating only the backbone network |
| FedVSSL $(\alpha=1, \beta=1)$ | main_cam_st_theta_b_loss_+SWA_wo_moment.py | Loss-based+SWA aggregation, but aggregating only the backbone network |
| FedVSSL $(\alpha=0.9, \beta=0)$ | main_cam_st_theta_b_mixed_wo_mement.py | FedAvg+loss-based aggregation, but aggregating only the backbone network |
| FedVSSL $(\alpha=0.9, \beta=1)$ | main_cam_st_theta_b_mixed_+SWA_wo_mement.py | FedAvg+loss-based+SWA aggregation, but aggregating only the backbone network |
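The $\alpha$ and $\beta$ knobs above mix the two aggregation ingredients: $\alpha$ blends loss-based client weighting with plain FedAvg weighting, and $\beta$ toggles SWA-style averaging with the previous global model. The following NumPy sketch is only an illustration of that idea under our reading of the table; the function name and exact weighting formula are assumptions, and the scripts listed above contain the actual implementations:

```python
import numpy as np

# Illustrative server-side aggregation of client backbone updates.
# alpha=0 reduces to FedAvg (data-size weighting); alpha=1 to pure
# loss-based weighting. beta=1 additionally averages the new global
# model with the previous round's model (SWA-style).
def aggregate(client_weights, client_sizes, client_losses,
              prev_global, alpha=0.9, beta=1):
    sizes = np.asarray(client_sizes, dtype=float)
    losses = np.asarray(client_losses, dtype=float)
    fedavg_w = sizes / sizes.sum()    # FedAvg: weight by local data size
    loss_w = losses / losses.sum()    # loss-based weighting
    mix = alpha * loss_w + (1 - alpha) * fedavg_w
    new_global = sum(w * cw for w, cw in zip(mix, client_weights))
    if beta:                          # SWA: average with previous global model
        new_global = 0.5 * (prev_global + new_global)
    return new_global
```

In practice this aggregation is applied per backbone parameter tensor, leaving the SSL prediction heads local to the clients.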

Evaluation

After FL pretraining, one can use the following code to fine-tune the model on UCF-101 or HMDB-51.

```python
import subprocess

# Launch distributed fine-tuning on UCF-101 (4 GPUs) via the CtP tooling.
process_obj = subprocess.run([
    "bash", "CtP/tools/dist_train.sh",
    "CtP/configs/ctp/r3d_18_kinetics/finetune_ucf101.py", "4",
    "--work_dir", "/finetune/ucf101/",
    "--data_dir", "/DATA",
    "--pretrained", "/path to the pretrained checkpoint",
    "--validate",
])
```

Expected Results

For the detailed results regarding the below checkpoints, please see Table 4 in the manuscript.

| Method | Checkpoint file | UCF R@1 | HMDB R@1 |
| --- | --- | --- | --- |
| FedVSSL $(\alpha=0, \beta=0)$ | round-540.npz | 34.34 | 15.82 |
| FedVSSL $(\alpha=1, \beta=0)$ | round-540.npz | 34.23 | 16.73 |
| FedVSSL $(\alpha=0, \beta=1)$ | round-540.npz | 35.61 | 16.93 |
| FedVSSL $(\alpha=1, \beta=1)$ | round-540.npz | 35.66 | 16.41 |
| FedVSSL $(\alpha=0.9, \beta=0)$ | round-540.npz | 35.50 | 16.27 |
| FedVSSL $(\alpha=0.9, \beta=1)$ | round-540.npz | 35.34 | 16.93 |
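The checkpoints above are `.npz` archives. Assuming they store one NumPy array per parameter tensor (the usual shape of Flower-saved parameters; this is an assumption, not something the checkpoints document themselves), they can be inspected before fine-tuning with a small loader like this:

```python
import os
import tempfile
import numpy as np

# Hypothetical loader for an .npz checkpoint: returns a dict mapping
# the stored array names to their NumPy arrays.
def load_npz_checkpoint(path):
    with np.load(path, allow_pickle=False) as data:
        return {k: data[k] for k in data.files}

# Toy round-trip with a temporary file (illustrative stand-in for
# a real round-540.npz).
arrays = [np.zeros((3, 3)), np.ones(5)]
tmp = os.path.join(tempfile.mkdtemp(), "round-540.npz")
np.savez(tmp, *arrays)
restored = load_npz_checkpoint(tmp)
```

The recovered arrays would then need to be matched to the model's parameter names before being passed to the fine-tuning script via `--pretrained`.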

Issues:

If you encounter any issues, feel free to open an issue on GitHub.

Citations

@article{rehman2022federated,
  title={Federated Self-supervised Learning for Video Understanding},
  author={Rehman, Yasar Abbas Ur and Gao, Yan and Shen, Jiajun and de Gusmao, Pedro Porto Buarque and Lane, Nicholas},
  journal={arXiv preprint arXiv:2207.01975},
  year={2022}
}

Acknowledgement

We would like to thank Daniel J. Beutel for providing the initial blueprint of federated self-supervised learning with Flower, and Akhil Mathur for his useful suggestions.