Home

Awesome

This repo is based on CenterNet, which aims to push the boundary of human pose estimation:

multi-person pose estimation using center point detection.
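To give a feel for the idea, here is a minimal sketch of center-point pose decoding: find peaks on a person-center heatmap, then read regressed per-joint offsets at those peaks. It is an illustration only, not this repo's decoding code, and the tensor layout (`center_heatmap`, `joint_offsets`) is an assumption.

```python
import torch
import torch.nn.functional as F

def decode_centers(center_heatmap, joint_offsets, k=20):
    # center_heatmap: (1, H, W) sigmoid person-center confidences
    # joint_offsets:  (2 * J, H, W) regressed (dx, dy) for each of J joints,
    #                 read out at every person-center location
    # Suppress non-maxima with a 3x3 max-pool so each person yields one peak.
    pooled = F.max_pool2d(center_heatmap.unsqueeze(0), 3, stride=1, padding=1).squeeze(0)
    heat = center_heatmap * (pooled == center_heatmap).float()

    # Pick the k highest-scoring centers and recover their (x, y) grid positions.
    scores, idx = heat.view(-1).topk(k)
    H, W = heat.shape[-2:]
    ys, xs = (idx // W).float(), (idx % W).float()

    # Read the per-joint offsets at each center and turn them into absolute coordinates.
    offs = joint_offsets[:, ys.long(), xs.long()].view(-1, 2, k)      # (J, 2, k)
    joints = torch.stack([xs + offs[:, 0], ys + offs[:, 1]], dim=-1)  # (J, k, 2)
    centers = torch.stack([xs, ys], dim=-1)                           # (k, 2)
    return scores, centers, joints
```

With a trained network you would feed its center heatmap and offset head into a routine like this and scale the returned grid coordinates by the output stride (typically 4 for CenterNet-style models) to get image-space keypoints.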

Main results

Keypoint detection on COCO validation 2017

<p align="center"> <img src='readme/performance.png' align="center" height="512px"></p>
| Backbone | AP | FPS | TensorRT Speed | GFLOPs | Download |
|---|---|---|---|---|---|
| DLA-34 | 62.7 | 23 | - | - | model |
| Resnet-50 | 54.5 | 28 | 33 | - | model |
| MobilenetV3 | 46.0 | 30 | - | - | model |
| ShuffleNetV2 | 43.9 | 25 | - | - | model |
| HRNet_W32 | 63.8 | 16 | - | - | model |
| HardNet | 46.0 | 30 | - | - | model |
| Darknet53 | 34.2 | 30 | - | - | model |
| EfficientDet | 38.2 | 30 | - | - | model |

Installation

git submodule init && git submodule update

Please refer to INSTALL.md for installation instructions.

Use CenterNet

We support demos on a single image, an image folder, a video, and a webcam.

First, download the DLA-34 model from the Model zoo and put it anywhere you like.
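If you want to sanity-check the downloaded weights before running the demo, they are ordinary PyTorch checkpoints and can be inspected directly. The path below is a placeholder, and the checkpoint layout (raw state_dict vs. a wrapping dict) is an assumption.

```python
import torch

# Load the downloaded DLA-34 checkpoint on the CPU just to inspect it.
ckpt = torch.load('/your/model/path/dla34_best.pth', map_location='cpu')

# Checkpoints are typically either a raw state_dict or a dict that wraps one;
# this prints how many weight tensors it holds and a few of their names.
state = ckpt.get('state_dict', ckpt) if isinstance(ckpt, dict) else ckpt
print(len(state), 'tensors, e.g.', list(state)[:3])
```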

Run:

cd tools; python demo.py --cfg ../experiments/dla_34_512x512.yaml --TESTMODEL /your/model/path/dla34_best.pth --DEMOFILE ../images/33823288584_1d21cf0a26_k.jpg --DEBUG 1

The result for the example image should look like:

<p align="center"> <img src='readme/multi_pose_screenshot_27.11.2019.png' align="center" height="512px"></p>

Evaluation

cd tools; python evaluate.py --cfg ../experiments/dla_34_512x512.yaml --TESTMODEL /your/model/path/dla34_best.pth --DEMOFILE --DEBUG 0
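The COCO keypoint AP numbers above follow the standard pycocotools protocol; whether evaluate.py calls it exactly this way internally is an assumption, but scoring a keypoint results JSON yourself looks roughly like this (file paths are placeholders):

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Ground-truth annotations and the detections JSON produced by the evaluation run
# (paths are placeholders; adjust to your data layout).
coco_gt = COCO('annotations/person_keypoints_val2017.json')
coco_dt = coco_gt.loadRes('results/keypoints_val2017_results.json')

coco_eval = COCOeval(coco_gt, coco_dt, iouType='keypoints')
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()  # prints AP / AR, including the AP reported in the table above
```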

Training

After installation, follow the instructions in DATA.md to set up the datasets.

We provide config files for all the experiments in the experiments folder.

cd ./tools; python -m torch.distributed.launch --nproc_per_node 4 train.py --cfg ../experiments/*.yaml
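For context, torch.distributed.launch spawns one process per GPU and passes each a --local_rank argument; a script launched this way typically sets up DistributedDataParallel roughly as below. This is a generic sketch, not the exact contents of train.py, and the stand-in model is a placeholder for the pose network built from the yaml config.

```python
import argparse
import torch
import torch.distributed as dist

parser = argparse.ArgumentParser()
parser.add_argument('--local_rank', type=int, default=0)  # injected by torch.distributed.launch
args, _ = parser.parse_known_args()

# Each launched process drives one GPU and joins the NCCL process group,
# using the MASTER_ADDR/MASTER_PORT environment set up by the launcher.
torch.cuda.set_device(args.local_rank)
dist.init_process_group(backend='nccl', init_method='env://')

# Stand-in module; the real train.py builds the pose network from the yaml config.
model = torch.nn.Conv2d(3, 64, kernel_size=3, padding=1).cuda()
model = torch.nn.parallel.DistributedDataParallel(
    model, device_ids=[args.local_rank], output_device=args.local_rank)
```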

Demo

The demo files are located in the demo directory; together they build a human detection + tracking + face re-identification system.

<p align="left"> <img src="./readme/demo.gif", width="720"> </p>

License

MIT License (refer to the LICENSE file for details).

Citation

If you find this project useful for your research, please use the following BibTeX entry.

@inproceedings{zhou2019objects,
  title={Objects as Points},
  author={Zhou, Xingyi and Wang, Dequan and Kr{\"a}henb{\"u}hl, Philipp},
  booktitle={arXiv preprint arXiv:1904.07850},
  year={2019}
}