CrosSCLR

The official PyTorch implementation of "3D Human Action Representation Learning via Cross-View Consistency Pursuit" (CVPR 2021). The arXiv version of our paper is coming soon.

<div align=center> <img src="resource/figures/motivation.png" width="600"> </div>

Requirements

We have only tested our code in the following environment:

Python 3.8.2
PyTorch 1.4.0

Installation

# Install python environment
$ conda create -n crossclr python=3.8.2
$ conda activate crossclr

# Install PyTorch
$ pip install torch==1.4.0

# Download our code
$ git clone https://github.com/LinguoLi/CrosSCLR.git
$ cd CrosSCLR

# Install torchlight
$ cd torchlight
$ python setup.py install
$ cd ..

# Install other python libraries
$ pip install -r requirements.txt
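
As an optional sanity check (this command is an addition to the steps above, not part of the original instructions), verify that PyTorch imports and detects the GPU:

# Check the installed PyTorch version and CUDA availability
$ python -c "import torch; print(torch.__version__, torch.cuda.is_available())"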

Data Preparation
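
The experiments are run on the NTU RGB+D 60 and NTU RGB+D 120 skeleton datasets (see Results below). As a rough sketch of the preprocessing step, assuming the raw skeleton files have already been downloaded, a generation script along the following lines would convert them into the training format (the script name, flags, and paths are assumptions, not the repository's actual interface):

# Hypothetical preprocessing call; script name and paths are assumed
$ python tools/ntu_gendata.py --data_path <path_to_raw_ntu_skeletons> --out_folder data/ntu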

Unsupervised Pre-Training
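
A minimal invocation sketch, assuming a main.py entry point and pretext configuration files under config/ (the processor name and .yaml path below are assumptions; check the configuration files shipped with the repository for the actual options):

# Hypothetical: pre-train CrosSCLR on NTU RGB+D 60 cross-view; names and paths are assumed
$ python main.py pretrain_crossclr --config config/ntu60/pretext/pretext_crossclr_xview.yaml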

Linear Evaluation
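
Along the same lines, a minimal sketch for training a linear classifier on top of the frozen pre-trained encoder (the processor name and config path are assumptions; the configuration file is expected to point to the pre-trained checkpoint):

# Hypothetical: linear evaluation on NTU RGB+D 60 cross-view; names and paths are assumed
$ python main.py linear_evaluation --config config/ntu60/linear_eval/linear_eval_crossclr_xview.yaml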

Results

The Top-1 accuracy of our methods under linear evaluation on the NTU RGB+D 60 and NTU RGB+D 120 datasets is shown below:

| Model | NTU 60 xsub (%) | NTU 60 xview (%) | NTU 120 xsub (%) | NTU 120 xset (%) |
| :-- | :--: | :--: | :--: | :--: |
| SkeletonCLR | 68.3 | 76.4 | - | - |
| 2s-CrosSCLR | 74.5 | 82.1 | - | - |
| 3s-CrosSCLR | 77.8 | 83.4 | 67.9 | 66.7 |

Visualization

The t-SNE visualization of the embeddings during SkeletonCLR and CrosSCLR pre-training is shown below:

<div align=center> <img src="resource/figures/tsne.gif" width="800"> </div>

Citation

Please cite our paper if you find this repository useful in your research:

@inproceedings{li2021crossclr,
  Title          = {3D Human Action Representation Learning via Cross-View Consistency Pursuit},
  Author         = {Li, Linguo and Wang, Minsi and Ni, Bingbing and Wang, Hang and Yang, Jiancheng and Zhang, Wenjun},
  Booktitle      = {CVPR},
  Year           = {2021}
}

Acknowledgement