PSL (Prototypical Similarity Learning)

Official PyTorch Implementation of "Enlarge Instance-specific and Class-specific Information for Open-set Action Recognition",

Jun Cen, Shiwei Zhang, Xiang Wang, Yixuan Pei, Zhiwu Qing, Yingya Zhang, Qifeng Chen. In CVPR 2023.

Table of Contents

  1. Introduction
  2. Installation
  3. Datasets
  4. Testing
  5. Training
  6. Model Zoo
  7. Citation

Introduction

Open-set action recognition aims to reject unknown human action samples that fall outside the distribution of the training set. Existing methods mainly focus on learning better uncertainty scores but overlook the importance of the feature representation. We find that features with richer semantic diversity can significantly improve open-set performance under the same uncertainty scores. In this paper, we first analyze the feature representation behavior in the open-set action recognition (OSAR) problem based on the information bottleneck (IB) theory, and propose to enlarge the instance-specific (IS) and class-specific (CS) information contained in the feature for better performance. To this end, a novel Prototypical Similarity Learning (PSL) framework is proposed to keep the instance variance within the same class, thereby retaining more IS information. Besides, we notice that unknown samples sharing a similar appearance with known samples are easily misclassified as known classes. To alleviate this issue, video shuffling is further introduced in PSL to learn distinct temporal information between the original and shuffled samples, which we find enlarges the CS information. Extensive experiments demonstrate that the proposed PSL can significantly boost both open-set and closed-set performance and achieves state-of-the-art results on multiple benchmarks.
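
As a toy illustration of the video shuffling idea mentioned above, the sketch below permutes the temporal order of a clip so that appearance is preserved while the original dynamics are destroyed. This is only a conceptual example, not the exact augmentation used in PSL.

import torch

def shuffle_frames(clip: torch.Tensor) -> torch.Tensor:
    # clip: (T, C, H, W) video tensor; returns a copy with shuffled frame order.
    perm = torch.randperm(clip.shape[0])
    return clip[perm]

# e.g. clip = torch.randn(8, 3, 224, 224); shuffled = shuffle_frames(clip)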

Installation

This repo is developed from the Alibaba-mmai-research codebase.

Requirements and Dependencies

Datasets

We follow the dataset setting of DEAR: UCF-101 is used for closed-set training, while HMDB-51 and MiT-v2 are used for open-set testing.

Testing

Please refer to ./evaluation_code/README.md for instructions.

Training

We provide the training code with the TSM backbone as an example.

Training from scratch

python tools/run_net.py --cfg configs/projects/openset/tsm/tsm_psl.yaml

Training from K400 pre-trained model

The K400 pre-trained model is here.

python tools/run_net.py --cfg configs/projects/openset/tsm/tsm_psl_ft.yaml

We use 8 V100 (32 GB) or 4 A100 (80 GB) GPUs with a batch size of 128 for training. Training takes around 16 hours.

After training, we store all data required for open-set testing in output/test/tsm_maha_distance.npz. Then run

cd tools/scripts
./compute_openness.sh

You should get the closed-set accuracy, open-set AUROC, AUPR, and FPR95.
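
For reference, the open-set metrics above can be computed from per-sample uncertainty scores and known/unknown labels as sketched below. The official computation is the compute_openness.sh script; the .npz key names in the usage comment are placeholders, not the actual keys written by the repo.

import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score, roc_curve

def open_set_metrics(uncertainty, is_unknown):
    # uncertainty: higher = more likely unknown; is_unknown: 1 for open-set samples, 0 for known.
    auroc = roc_auc_score(is_unknown, uncertainty)
    aupr = average_precision_score(is_unknown, uncertainty)
    fpr, tpr, _ = roc_curve(is_unknown, uncertainty)
    # FPR95: false positive rate at the first threshold where TPR reaches 95%.
    fpr95 = fpr[np.searchsorted(tpr, 0.95)]
    return auroc, aupr, fpr95

# Hypothetical usage with the stored test data (key names are placeholders):
# data = np.load("output/test/tsm_maha_distance.npz")
# print(open_set_metrics(data["uncertainty"], data["is_unknown"]))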

Model Zoo

We provide the pre-trained weights (checkpoints) of TSM.

Model               | Checkpoint | Train Config | Closed Set ACC (%) | AUROC (%) | AUPR (%) | FPR95 (%)
PSL                 | ckpt       | train        | 77.38              | 86.10     | 64.66    | 43.68
PSL + K400 pretrain | ckpt       | train        | 96.04              | 93.39     | 85.51    | 26.96
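
To quickly inspect a downloaded checkpoint, a plain PyTorch load is usually enough. The file name below is a placeholder and the key layout depends on the training codebase, so treat this only as a sketch.

import torch

ckpt = torch.load("psl_tsm.ckpt", map_location="cpu")  # placeholder file name
if isinstance(ckpt, dict):
    print(list(ckpt.keys())[:10])  # top-level entries, e.g. model weights / optimizer state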

Citation

@inproceedings{jun2023enlarge,
  title={Enlarge Instance-specific and Class-specific Information for Open-set Action Recognition},
  author={Jun Cen and Shiwei Zhang and Xiang Wang and Yixuan Pei and Zhiwu Qing and Yingya Zhang and Qifeng Chen},
  booktitle={Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2023},
}

License

See Apache-2.0 License

Acknowledgement

In addition to the Alibaba-mmai-research codebase, this repo contains modified code from:

We sincerely thank the owners of all these great repos!