
Cross-Covariate Gait Recognition: A Benchmark

Welcome to the official repository for the paper "Cross-Covariate Gait Recognition: A Benchmark," which has been accepted to AAAI 2024.

Paper Links

Dataset Download Guide

CCGR Dataset

Derived data (Silhouette, Parsing, Pose)

We are pleased to offer the derived data for use in your research projects. You can download the data directly when you agree to comply with this Licence: CC BY-NC-ND (Creative Commons Attribution-NonCommercial-NoDerivatives).

Raw data (RGB)

Please sign this agreement and send it to zoushinan@csu.edu.cn. We will process your request as soon as possible. Below are several ways to access the RGB data; please indicate your choice in your email.

  1. Baidu Netdisk Link
  2. OneDrive Link
  3. Mailing service. You can mail us a hard drive; we will copy the data onto it and return it to you. (The hard drive must be larger than 2 TB with a USB 3.0 interface. This option is only available for postal destinations within mainland China.)

Note: We are very sorry, but due to government network regulations in China (internet transmission restrictions) and the large size of the CCGR raw data, we need more time to upload the RGB data to OneDrive. The OneDrive option is therefore currently unavailable.

CCGR-Mini Dataset

CCGR-Mini is a subset of CCGR. It is much smaller and can speed up your research.

We construct CCGR-Mini by sampling from CCGR as follows: all 53 covariates of each subject are retained, but for each covariate only one of the 33 views is randomly selected for CCGR-Mini; the remaining views are discarded. Each person thus keeps all 53 covariates and enough view diversity to preserve the original challenge, while the data volume shrinks to only 53 videos per person.
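The sampling rule above can be sketched in a few lines of Python. This is an illustrative toy sketch, not the actual construction script; the nested-dict layout (`subject -> covariate -> list of view IDs`) is an assumption made for the example.

```python
import random

# Hypothetical sketch of the CCGR-Mini sampling rule: for each subject and
# each of its 53 covariates, keep one randomly chosen view out of the 33
# available and discard the rest.
def build_mini(ccgr, seed=0):
    rng = random.Random(seed)
    mini = {}
    for subject, covariates in ccgr.items():
        # one random view kept per covariate
        mini[subject] = {cov: rng.choice(views) for cov, views in covariates.items()}
    return mini

# Toy example: 2 subjects, 53 covariates, 33 views each.
ccgr = {s: {c: list(range(33)) for c in range(53)} for s in range(2)}
mini = build_mini(ccgr)
print(len(mini[0]))  # 53 retained sequences per subject
```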

CCGR-Mini has 970 subjects, 47,884 sequences, 53 different covariates, and 33 different views.

We are pleased to offer the data for use in your research projects. You can download the data directly when you agree to comply with this Licence: CC BY-NC-ND (Creative Commons Attribution-NonCommercial-NoDerivatives).

Derived data (Silhouette, Parsing, Pose)

Raw data (RGB)

Preview: Two newly collected datasets, CCGR-? and CCGR-?, are coming.

Code

We have uploaded the code, which is modified from OpenGait. The main changes are listed below:

  1. Compatible with CCGR and CCGR-Mini datasets. (The run.sh file contains commands to run all compatible algorithms.)
  2. To be updated
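As a rough illustration of how such OpenGait-based code is typically launched, a training command might look like the following. The config path, model choice, and GPU count here are placeholders; see the run.sh file in this repository for the actual commands.

```shell
# Hypothetical OpenGait-style distributed training launch for a CCGR config.
# Adjust the config path and --nproc_per_node to match your setup.
CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 \
    opengait/main.py --cfgs ./configs/gaitbase/gaitbase_ccgr.yaml --phase train
```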

Results

CCGR

Using silhouette (%)

| Methods | R1^hard | R1^easy | R5^hard | R5^easy |
| --- | --- | --- | --- | --- |
| GaitSet | 25.3 | 35.3 | 46.7 | 58.9 |
| GaitPart | 22.6 | 32.7 | 42.9 | 55.5 |
| GaitGL | 23.1 | 35.2 | 39.9 | 54.1 |
| GaitBase | 31.3 | 43.8 | 51.3 | 64.4 |
| DeepGaitV2 | 42.5 | 55.2 | 63.2 | 75.2 |

Using parsing (%)

| Methods | R1^hard | R1^easy | R5^hard | R5^easy |
| --- | --- | --- | --- | --- |
| GaitSet | 31.6 | 42.8 | 54.8 | 67.0 |
| GaitPart | 29.0 | 40.9 | 51.5 | 64.5 |
| GaitGL | 28.4 | 42.1 | 46.6 | 61.4 |
| GaitBase | 48.1 | 62.0 | 67.7 | 79.6 |
| DeepGaitV2 | 58.8 | 71.8 | 77.0 | 87.0 |

CCGR-Mini

Using silhouette (%)

| Methods | R1 | mAP | mINP |
| --- | --- | --- | --- |
| GaitSet | 13.77 | 15.39 | 5.75 |
| GaitPart | 8.02 | 10.12 | 3.52 |
| GaitGL | 17.51 | 18.12 | 6.85 |
| GaitBase | 26.99 | 24.89 | 9.72 |
| DeepGaitV2 | 39.37 | 36.01 | 16.77 |

Using parsing (%)

| Methods | R1 | mAP | mINP |
| --- | --- | --- | --- |
| GaitSet | 18.09 | 19.18 | 7.38 |
| GaitPart | 10.6 | 12.29 | 4.25 |
| GaitGL | 22.53 | 22.58 | 9.06 |
| GaitBase | 38.96 | 35.48 | 16.08 |
| DeepGaitV2 | 50.43 | 46.53 | 24.43 |
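For readers unfamiliar with the CCGR-Mini metrics, below is a generic sketch of how Rank-1 (R1), mean average precision (mAP), and mean inverse negative penalty (mINP) are computed from a query-by-gallery distance matrix. This is not the official evaluation code, and the hard/easy protocol splits are defined in the paper; the function and toy data here are illustrative only.

```python
import numpy as np

# Illustrative retrieval-metric sketch: `dist` is a (num_query, num_gallery)
# distance matrix; q_labels / g_labels are subject IDs.
def evaluate(dist, q_labels, g_labels):
    r1_hits, aps, inps = [], [], []
    g_labels = np.asarray(g_labels)
    for i, q in enumerate(q_labels):
        order = np.argsort(dist[i])        # gallery sorted by distance
        matches = g_labels[order] == q     # boolean relevance vector
        good = matches.sum()
        if good == 0:
            continue                       # skip queries with no true match
        r1_hits.append(float(matches[0]))  # top-1 correct?
        ranks = np.where(matches)[0] + 1   # 1-based ranks of the true matches
        aps.append(np.mean(np.arange(1, good + 1) / ranks))  # average precision
        inps.append(good / ranks[-1])      # |matches| / rank of hardest match
    return np.mean(r1_hits), np.mean(aps), np.mean(inps)

# Toy example: 2 queries against 3 gallery items.
dist = np.array([[0.1, 0.5, 0.9],
                 [0.2, 0.8, 0.4]])
q_labels = [0, 1]
g_labels = [0, 1, 1]
r1, mAP, mINP = evaluate(dist, q_labels, g_labels)
print(round(r1, 2), round(mAP, 2), round(mINP, 2))  # → 0.5 0.79 0.83
```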

Cite Us

If you find our dataset or paper useful for your research, please consider citing:

@article{Zou_2024,
  title={Cross-Covariate Gait Recognition: A Benchmark},
  journal={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={38},
  number={7},
  pages={7855-7863},
  author={Zou, Shinan and Fan, Chao and Xiong, Jianbo and Shen, Chuanfu and Yu, Shiqi and Tang, Jin},
  year={2024},
  month={Mar.}
}

Contact

If you have any questions, please contact us at zoushinan@csu.edu.cn.

Correct Some Mistakes

  1. In the AAAI 2024 version of the paper "Cross-Covariate Gait Recognition: A Benchmark", the positions of R5^easy and R5^hard are swapped in Table 2. The rest of the data in the table is correct. We apologize for this mistake.

  2. In the AAAI 2024 version, the GaitBase and DeepGaitV2 results in Table 3 were obtained with a batch size of 8 × 8, which we mistakenly recorded as 8 × 16. The corrected results are given in the Results section of this page. Fortunately, this mistake does not affect the conclusions of the paper. We sincerely apologize; handling such a massive amount of data (raw, unprocessed data > 8 TB) is not easy.