<img src="./assets/logo2.png" width = "330" height = "110" alt="logo" /> <div align="center"><img src="./assets/track.gif" width = "150" height = "150" alt="track" /><img src="./assets/seg.gif" width = "150" height = "150" alt="seg" /><img src="./assets/sil.gif" width = "150" height = "150" alt="sil" /></div>

🎉🎉🎉 OpenGait has been accepted by CVPR 2023 as a highlight paper! 🎉🎉🎉
All-in-One-Gait is a sub-project of OpenGait, provided by the Shiqi Yu Group, that builds a complete gait recognition system.
The workflow of All-in-One-Gait consists of three stages: pedestrian tracking, segmentation, and recognition (a rough sketch follows below).
Users are encouraged to keep the gait recognition models up to date by following the latest SOTA methods in OpenGait.
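The three stages can be pictured as one simple pipeline. The sketch below is illustrative only: the function names (`track_pedestrians`, `segment_silhouettes`, `extract_gait_features`, `feature_distance`) are hypothetical stand-ins for the tracking (ByteTrack), segmentation (PP-HumanSeg), and recognition (OpenGait) components that are actually wired together in `demo/libs/main.py`.

```python
from typing import Dict, List

# Hypothetical stand-ins for the three real components (ByteTrack for tracking,
# PP-HumanSeg for segmentation, an OpenGait model for recognition); each would
# wrap the corresponding model in the actual demo code.
def track_pedestrians(video: str) -> Dict[int, List["Frame"]]: ...
def segment_silhouettes(crops: List["Frame"]) -> List["Mask"]: ...
def extract_gait_features(silhouettes: List["Mask"]) -> "Feature": ...
def feature_distance(a: "Feature", b: "Feature") -> float: ...

def recognize(gallery_video: str, probe_video: str) -> Dict[int, int]:
    """Match every probe identity to its nearest gallery identity by gait."""
    # Gallery: track -> segment -> extract one gait feature per person ID.
    gallery_feats = {pid: extract_gait_features(segment_silhouettes(crops))
                     for pid, crops in track_pedestrians(gallery_video).items()}
    # Probe: repeat the same stages, then assign the closest gallery ID.
    matches = {}
    for pid, crops in track_pedestrians(probe_video).items():
        feat = extract_gait_features(segment_silhouettes(crops))
        matches[pid] = min(gallery_feats,
                           key=lambda g: feature_distance(feat, gallery_feats[g]))
    return matches
```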
Demo Results
<div align="center"> <img src="./OpenGait/demo/output/demo_video_result/gallery.gif" width = "144" height = "256" alt="gallery" /> <img src="./OpenGait/demo/output/demo_video_result/probe1-After.gif" width = "455" height = "256" alt="probe1-After" /> <img src="./OpenGait/demo/output/demo_video_result/probe2-After.gif" width = "144" height = "256" alt="probe2-After" /> </div> The participants shown in the left video are gallery subjects, and those in the other two videos are probe subjects. The recognition results are indicated by the color of the bounding boxes.
<!-- The videos in `./output/demo_video_result` are all generated by main.py, where `gallery.mp4` is the gallery, and the other `probe-After.mp4` are the result videos after gait recognition. **Among them, people with the same ID are those with the same bounding box color**. -->

How to use
A. Quick Start in Colab (Recommended)
B. Run on the host machine
Step1. Installation
git clone https://github.com/jdyjjj/All-in-One-Gait.git
cd All-in-One-Gait
pip install -r requirements.txt
pip install yolox
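After installation, a quick import check can confirm that the core dependencies are available. This is a minimal sketch, assuming PyTorch comes from `requirements.txt` and YOLOX from the command above; adjust the imports to whatever your environment actually installs.

```python
# Quick sanity check that the core dependencies imported correctly.
# torch is assumed to come from requirements.txt; yolox was installed above.
import torch
import yolox  # noqa: F401

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("yolox imported OK")
```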
Step2. Get checkpoints
demo
├── checkpoints
│   ├── bytetrack_model
│   ├── gait_model
│   └── seg_model
├── libs
└── output
checkpoints
├── bytetrack_model
│   ├── bytetrack_x_mot17.pth.tar
│   └── yolox_x_mix_det.py
├── gait_model
│   └── xxxx.pt
└── seg_model
    └── human_pp_humansegv2_mobile_192x192_inference_model_with_softmax
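Before downloading the weights, you can create and later verify this layout programmatically. A minimal sketch, assuming it is run from the All-in-One-Gait repository root:

```python
from pathlib import Path

# Create and check the expected checkpoint layout (run from the
# All-in-One-Gait repository root; adjust CKPT if you run it elsewhere).
CKPT = Path("OpenGait/demo/checkpoints")
EXPECTED = {
    "bytetrack_model": ["bytetrack_x_mot17.pth.tar", "yolox_x_mix_det.py"],
    "gait_model": [],   # the GaitBase weights are unzipped here in the next step
    "seg_model": [],    # the PP-HumanSegV2 inference model is unzipped here later
}

for folder, files in EXPECTED.items():
    d = CKPT / folder
    d.mkdir(parents=True, exist_ok=True)  # create the folder if it is missing
    for f in files:
        print(f"{d / f}: {'ok' if (d / f).exists() else 'MISSING'}")
```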
Get the checkpoint of the gait model
cd All-in-One-Gait/OpenGait/demo/checkpoints
mkdir gait_model
cd gait_model
wget https://github.com/ShiqiYu/OpenGait/releases/download/v2.0/pretrained_grew_gaitbase.zip
unzip -j pretrained_grew_gaitbase.zip
Get the checkpoint of the tracking model
cd All-in-One-Gait/OpenGait/demo/checkpoints/bytetrack_model
pip install --upgrade --no-cache-dir gdown
gdown https://drive.google.com/uc?id=1P4mY0Yyd3PPTybgZkjMYhFri88nTmJX5
Alternatively, you can manually download the checkpoint file and put it into the `bytetrack_model` folder:
- bytetrack_x_mot17 [google], [baidu(code:ic0i)]
Get the checkpoint of the segmentation model
cd All-in-One-Gait/OpenGait/demo/checkpoints
mkdir seg_model
cd seg_model
wget https://paddleseg.bj.bcebos.com/dygraph/pp_humanseg_v2/human_pp_humansegv2_mobile_192x192_inference_model_with_softmax.zip
unzip human_pp_humansegv2_mobile_192x192_inference_model_with_softmax.zip
Step3. Run demo
cd All-in-One-Gait/OpenGait
python demo/libs/main.py
All-in-One-Gait mainly consists of three processes, i.e., pedestrian tracking, segmentation, and recognition.
In `main.py`, you need to provide two videos as inputs and specify one as the gallery and the other as the probe to obtain the recognition results (a hedged illustration follows below).
By default, the results are written to `All-in-One-Gait/OpenGait/demo/output/Outputvideos/track_vis/{timestamp}`.
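How the two inputs are wired up depends on the current version of `main.py`; the snippet below only illustrates the idea. The names `run_demo`, `gallery_path`, and `probe_paths` are hypothetical and are not guaranteed to match the script, so check `demo/libs/main.py` itself before editing.

```python
# Hypothetical illustration of how a gallery video is paired with probe videos;
# the actual variable names and entry point inside demo/libs/main.py may differ.
def run_demo(gallery: str, probe: str) -> None: ...  # placeholder for the real pipeline

gallery_path = "./demo/output/InputVideos/gallery.mp4"
probe_paths = [
    "./demo/output/InputVideos/probe1.mp4",
    "./demo/output/InputVideos/probe2.mp4",
]

for probe in probe_paths:
    # Each probe is tracked, segmented, and matched against the gallery;
    # the annotated result video is then written under demo/output.
    run_demo(gallery=gallery_path, probe=probe)
```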
Step4. See the result
cd All-in-One-Gait/OpenGait/demo/output
output
├── GaitFeatures: stores the corresponding gait features
├── GaitSilhouette: stores the corresponding gait silhouette images
├── InputVideos: the folder where the input videos are placed
│   ├── gallery.mp4
│   ├── probe1.mp4
│   ├── probe2.mp4
│   ├── probe3.mp4
│   └── probe4.mp4
└── OutputVideos
    └── {timestamp}
        ├── gallery.mp4
        ├── G-gallery_P-probe1.mp4
        ├── G-gallery_P-probe2.mp4
        ├── G-gallery_P-probe3.mp4
        └── G-gallery_P-probe4.mp4
{timestamp}: the tracking result videos are stored here, named consistently with the input videos. In addition, videos named G-{gallery_video_name}_P-{probe_video_name}.mp4 are the results obtained after gait recognition.
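Because each run creates a folder named after its timestamp, a small helper can locate the most recent one. A minimal sketch, assuming it is run from `All-in-One-Gait/OpenGait` and that results land under `demo/output/OutputVideos/{timestamp}` as shown in the tree above:

```python
from pathlib import Path

# Locate the most recent {timestamp} result folder and list its videos
# (adjust out_root if your output path differs).
out_root = Path("demo/output/OutputVideos")
runs = sorted((d for d in out_root.iterdir() if d.is_dir()),
              key=lambda d: d.stat().st_mtime) if out_root.exists() else []
if runs:
    latest = runs[-1]
    print("Latest run:", latest)
    for video in sorted(latest.glob("*.mp4")):
        print("  ", video.name)
else:
    print("No result folders found; run the demo first.")
```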
Authors
OpenGait Team (OGT)
- Dongyang Jin, 11911221@mail.sustech.edu.cn
- Chao Fan, 12131100@mail.sustech.edu.cn
- Rui Wang, 12232385@mail.sustech.edu.cn
- Chuanfu Shen, 11950016@mail.sustech.edu.cn
- Junhao Liang, 12132342@mail.sustech.edu.cn
Acknowledgement
Citation
@InProceedings{Fan_2023_CVPR,
author = {Fan, Chao and Liang, Junhao and Shen, Chuanfu and Hou, Saihui and Huang, Yongzhen and Yu, Shiqi},
title = {OpenGait: Revisiting Gait Recognition Towards Better Practicality},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2023},
pages = {9707-9716}
}
Note: This code is for academic purposes only; it must not be used for anything that might be considered commercial use.