Rethinking Human Pose Estimation for Autonomous Driving with 3D Event Representations
The official PyTorch implementations of Efficient Human Pose Estimation via 3D Event Point Cloud and its extended version, Rethinking Event-based Human Pose Estimation with 3D Event Representations.
We propose a novel 3D event point cloud based paradigm for human pose estimation and achieve efficient results on the DHP19 dataset.
We also propose the Decoupled Event Voxel (DEV) paradigm to further explore the 3D nature of event signals and achieve higher accuracy.
This main branch maintains the source code of our Rasterized Event Point Cloud (RasEPC) approach, and the DEV-Pose code can be found on the dev_pose branch. For more information, please visit the project page and paper. You can also check out the RasEPC Demo Video and the DEV-Pose Demo Video (YouTube) / DEV-Pose Demo Video (Bilibili).
<img src='/srcimg/pipeline.png'>
Dependencies
We tested the project with the following dependencies.
- pytorch == 1.8.0+cu111
- torchvision == 0.9.0+cu111
- numpy == 1.19.2
- opencv-python == 4.4.0
- h5py == 3.3.0
- Windows 10 or Ubuntu 18.04
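If you set up a fresh environment, the pinned versions can typically be installed as below (the wheel index URL and the exact opencv-python patch release are assumptions; adjust them to your CUDA setup):

pip install torch==1.8.0+cu111 torchvision==0.9.0+cu111 -f https://download.pytorch.org/whl/torch_stable.html
pip install numpy==1.19.2 "opencv-python==4.4.0.*" h5py==3.3.0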
Getting started
Dataset preparation
Download the DHP19 dataset and generate the training/test data following DHP19EPC.
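For intuition only, each RasEPC training sample is a small, fixed-size event point cloud. The sketch below shows the rough idea of turning raw events (x, y, t) into such a sample; the file name and array layout are hypothetical, and the real preprocessing is done by the DHP19EPC scripts:

import numpy as np

# Hypothetical raw events: columns are (x, y, t, polarity) for one camera view.
events = np.load("events_sample.npy")           # shape (M, 4); file name is illustrative

# Normalize the timestamp axis so events form a 3D point cloud.
xyz = events[:, :3].astype(np.float32)
xyz[:, 2] = (xyz[:, 2] - xyz[:, 2].min()) / max(xyz[:, 2].ptp(), 1e-6)

# Randomly sample a fixed number of points (matches --num_points=2048 used for training).
num_points = 2048
idx = np.random.choice(len(xyz), num_points, replace=len(xyz) < num_points)
point_cloud = xyz[idx]                          # shape (2048, 3), fed to the point-based network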
Folder Hierarchy
Your workspace will look like this (note: change the data paths in the code to your own paths):
├── DHP19EPC_dataset # Store test/train data
| ├─ ... # MeanLabel and LastLabel
├── EventPointPose # This repository
| ├─ checkpoints # Checkpoints and debug images
| ├─ dataset # Dataset
| ├─ DHP19EPC # To generate data for DHP19EPC_dataset
| ├─ evaluate # Evaluate model and save gif/mp4
| ├─ logs # Training logs
| ├─ models # Models
| ├─ P_matrices # Matrices in DHP19
| ├─ results # Store results or our pretrained models
| ├─ srcimg # Source images
| ├─ tools # Utility functions
| ├─ main.py # train/eval model
Train model
cd ./EventPointPose
# train MeanLabel
python main.py --train_batch_size=16 --epochs=30 --num_points=2048 --model PointNet --name PointNet-2048 --cuda_num 0
# train LastLabel
python main.py --train_batch_size=16 --epochs=30 --num_points=2048 --model PointNet --name PointNet-2048-last --cuda_num 0 --label last
Evaluate model
You can evaluate your model and export GIFs as well as videos by following this doc.
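For reference, pose accuracy on DHP19 is commonly reported as the Mean Per Joint Position Error (MPJPE), i.e. the average Euclidean distance between predicted and ground-truth joints. A minimal, repo-independent sketch of the metric:

import numpy as np

def mpjpe(pred, gt):
    """Mean Per Joint Position Error: average Euclidean distance between
    predicted and ground-truth joint positions (pixels in 2D, mm in 3D).
    pred, gt: arrays of shape (num_joints, D) with D = 2 or 3."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

# Toy example with the 13-joint DHP19 skeleton in 2D image coordinates.
pred = np.random.rand(13, 2) * 256
gt = pred + np.random.randn(13, 2) * 4.0   # pretend predictions are ~4 px off
print(f"MPJPE: {mpjpe(pred, gt):.2f} px")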
Pretrained Model
Our pretrained models from the paper can be found here: Baidu Cloud or Google Drive. They can also be found on the GitHub Releases tab.
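A minimal sketch of restoring a downloaded checkpoint (the file name and the structure of the saved dictionary are assumptions; check the actual .pth files you download):

import torch

# Hypothetical file name; point this at whichever checkpoint you placed in ./results.
checkpoint = torch.load("results/PointNet-2048.pth", map_location="cpu")

# Checkpoints may be a bare state_dict or a dict wrapping one; inspect the keys first.
state_dict = checkpoint.get("model_state_dict", checkpoint)
print(list(state_dict.keys())[:5])

# Build the matching architecture from ./models, then restore the weights:
# model.load_state_dict(state_dict)
# model.eval()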
Publication
If you find our project helpful in your research, please consider citing:
@inproceedings{chen2022EPP,
  title={Efficient Human Pose Estimation via 3D Event Point Cloud},
  author={Chen, Jiaan and Shi, Hao and Ye, Yaozu and Yang, Kailun and Sun, Lei and Wang, Kaiwei},
  booktitle={2022 International Conference on 3D Vision (3DV)},
  year={2022}
}

@article{yin2023rethinking,
  title={Rethinking Event-based Human Pose Estimation with 3D Event Representations},
  author={Yin, Xiaoting and Shi, Hao and Chen, Jiaan and Wang, Ze and Ye, Yaozu and Ni, Huajian and Yang, Kailun and Wang, Kaiwei},
  journal={arXiv preprint arXiv:2311.04591},
  year={2023}
}
For any questions, feel free to e-mail us at chenjiaan@zju.edu.cn, haoshi@zju.edu.cn, or yinxiaoting@zju.edu.cn, and we will do our best to help you. =)
Acknowledgement
Thanks to these repositories: