This repo is based on CenterNet and aims to push the boundary of human pose estimation: multi-person pose estimation using center point detection.
## Main results

### Keypoint detection on COCO validation 2017
<p align="center"> <img src='readme/performance.png' align="center" height="512px"></p>

Backbone | AP | FPS | TensorRT Speed | GFLOPs | Download |
---|---|---|---|---|---|
DLA-34 | 62.7 | 23 | - | - | model |
Resnet-50 | 54.5 | 28 | 33 | - | model |
MobilenetV3 | 46.0 | 30 | - | - | model |
ShuffleNetV2 | 43.9 | 25 | - | - | model |
HRNet_W32 | 63.8 | 16 | - | - | model |
HardNet | 46.0 | 30 | - | - | model |
Darknet53 | 34.2 | 30 | - | - | model |
EfficientDet | 38.2 | 30 | - | - | model |
## Installation
First initialize the git submodules:

```bash
git submodule init && git submodule update
```

Please refer to INSTALL.md for installation instructions.
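If you have not cloned the repository yet, the submodules can also be fetched in one step; the repository URL below is a placeholder:

```bash
# clone the repository and all of its submodules at once (replace the placeholder URL)
git clone --recursive <repository-url>
```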
## Use CenterNet
We provide demos for a single image, an image folder, a video, and a webcam.

First, download the DLA-34 model from the model zoo and place it anywhere you like.
Run:
```bash
cd tools
python demo.py --cfg ../experiments/dla_34_512x512.yaml --TESTMODEL /your/model/path/dla34_best.pth --DEMOFILE ../images/33823288584_1d21cf0a26_k.jpg --DEBUG 1
```
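A video demo presumably works the same way by pointing `--DEMOFILE` at a video file; this is an untested sketch based on the flags above, and the video path is a placeholder:

```bash
# assumption: --DEMOFILE also accepts a video file (path below is a placeholder)
cd tools
python demo.py --cfg ../experiments/dla_34_512x512.yaml --TESTMODEL /your/model/path/dla34_best.pth --DEMOFILE /path/to/video.mp4 --DEBUG 1
```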
The result for the example image above should look like:
<p align="center"> <img src='readme/multi_pose_screenshot_27.11.2019.png' align="center" height="512px"></p>

## Evaluation
```bash
cd tools
python evaluate.py --cfg ../experiments/dla_34_512x512.yaml --TESTMODEL /your/model/path/dla34_best.pth --DEMOFILE --DEBUG 0
```
## Training
After installation, follow the instructions in DATA.md to set up the datasets.
We provide config files for all the experiments in the experiments folder.
```bash
cd ./tools
python -m torch.distributed.launch --nproc_per_node 4 train.py --cfg ../experiments/*.yaml
```
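For example, to launch 4-GPU training with the DLA-34 config used in the demo above (adjust `--nproc_per_node` to the number of GPUs you have):

```bash
# distributed training on 4 GPUs with the DLA-34 config; change --nproc_per_node to match your GPU count
cd ./tools
python -m torch.distributed.launch --nproc_per_node 4 train.py --cfg ../experiments/dla_34_512x512.yaml
```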
## Demo

The demo files are located in the `demo` directory; together they form a robust human detection + tracking + face re-ID system.
## License
MIT License (refer to the LICENSE file for details).
## Citation
If you find this project useful for your research, please use the following BibTeX entry.
```
@inproceedings{zhou2019objects,
  title={Objects as Points},
  author={Zhou, Xingyi and Wang, Dequan and Kr{\"a}henb{\"u}hl, Philipp},
  booktitle={arXiv preprint arXiv:1904.07850},
  year={2019}
}
```