CenterGroup

This is the official implementation of our ICCV 2021 paper

The Center of Attention: Center-Keypoint Grouping via Attention for Multi-Person Pose Estimation,
Guillem Brasó, Nikita Kister, Laura Leal-Taixé

[Figure: method visualization]

We introduce CenterGroup, an attention-based framework to estimate human poses from a set of identity-agnostic keypoints and person center predictions in an image. Our approach uses a transformer to obtain context-aware embeddings for all detected keypoints and centers and then applies multi-head attention to directly group joints into their corresponding person centers. While most bottom-up methods rely on non-learnable clustering at inference, CenterGroup uses a fully differentiable attention mechanism that we train end-to-end together with our keypoint detector. As a result, our method obtains state-of-the-art performance with up to 2.5x faster inference time than competing bottom-up methods.
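
To make the grouping idea concrete, here is a minimal PyTorch sketch of centers attending over keypoint embeddings. Class and tensor names are hypothetical; this illustrates the mechanism, not the actual CenterGroup modules:

```python
import torch
import torch.nn as nn

class ToyCenterGrouping(nn.Module):
    """Illustrative sketch: a transformer produces context-aware embeddings,
    then each center queries all keypoints via scaled dot-product attention."""

    def __init__(self, dim=128, num_heads=4):
        super().__init__()
        # Context-aware embeddings for the joint set of keypoints and centers.
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=num_heads, batch_first=True),
            num_layers=2,
        )
        self.q_proj = nn.Linear(dim, dim)  # center embeddings -> queries
        self.k_proj = nn.Linear(dim, dim)  # keypoint embeddings -> keys

    def forward(self, keypoint_emb, center_emb):
        # keypoint_emb: (B, K, dim), center_emb: (B, C, dim)
        x = self.encoder(torch.cat([keypoint_emb, center_emb], dim=1))
        kp = x[:, : keypoint_emb.size(1)]
        ctr = x[:, keypoint_emb.size(1):]
        # Attention scores between every center (query) and keypoint (key).
        scores = self.q_proj(ctr) @ self.k_proj(kp).transpose(1, 2) / kp.size(-1) ** 0.5
        # Softmax over centers: a differentiable soft assignment of each
        # keypoint to a person center, trainable end-to-end (no clustering).
        return scores.softmax(dim=1)  # (B, C, K)
```

For example, ToyCenterGrouping()(torch.randn(1, 17, 128), torch.randn(1, 3, 128)) returns a (1, 3, 17) soft-assignment tensor in which column k gives keypoint k's distribution over the three candidate centers.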

@inproceedings{Braso_2021_ICCV,
    author    = {Bras\'o, Guillem and Kister, Nikita and Leal-Taix\'e, Laura},
    title     = {The Center of Attention: Center-Keypoint Grouping via Attention for Multi-Person Pose Estimation},
    booktitle = {ICCV},
    year      = {2021}
}

Main Results

With the code contained in this repo, you should be able to reproduce the following results.

Results on COCO val2017

| Method | Detector | Multi-Scale Test | Input size | AP | AP .5 | AP .75 | AP (M) | AP (L) |
|--------|----------|------------------|------------|------|-------|--------|--------|--------|
| CenterGroup | HigherHRNet-w32 | ✗ | 512 | 69.0 | 87.7 | 74.4 | 59.9 | 75.3 |
| CenterGroup | HigherHRNet-w48 | ✗ | 640 | 71.0 | 88.7 | 76.5 | 63.1 | 75.2 |
| CenterGroup | HigherHRNet-w32 | ✓ | 512 | 71.9 | 89.0 | 78.0 | 63.7 | 77.4 |
| CenterGroup | HigherHRNet-w48 | ✓ | 640 | 73.3 | 89.7 | 79.2 | 66.4 | 76.7 |

Results on COCO test2017

| Method | Detector | Multi-Scale Test | Input size | AP | AP .5 | AP .75 | AP (M) | AP (L) |
|--------|----------|------------------|------------|------|-------|--------|--------|--------|
| CenterGroup | HigherHRNet-w32 | ✗ | 512 | 67.6 | 88.6 | 73.6 | 62.0 | 75.6 |
| CenterGroup | HigherHRNet-w48 | ✗ | 640 | 69.5 | 89.7 | 76.0 | 65.0 | 76.2 |
| CenterGroup | HigherHRNet-w32 | ✓ | 512 | 70.3 | 90.0 | 76.9 | 65.4 | 77.5 |
| CenterGroup | HigherHRNet-w48 | ✓ | 640 | 71.4 | 90.5 | 78.1 | 67.2 | 77.5 |

Results on CrowdPose test

| Method | Detector | Multi-Scale Test | Input size | AP | AP .5 | AP .75 | AP (E) | AP (M) | AP (H) |
|--------|----------|------------------|------------|------|-------|--------|--------|--------|--------|
| CenterGroup | HigherHRNet-w48 | ✗ | 640 | 67.6 | 87.6 | 72.7 | 74.2 | 68.1 | 61.1 |
| CenterGroup | HigherHRNet-w48 | ✓ | 640 | 70.3 | 89.1 | 75.7 | 77.3 | 70.8 | 63.2 |

Installation

Please see docs/INSTALL.md

Model Zoo

Please see docs/MODEL_ZOO.md

Evaluation

To evaluate a model, you have to specify its configuration file, its checkpoint, and the number of GPUs you want to use. All of our configurations and checkpoints are available in the Model Zoo (see docs/MODEL_ZOO.md). For example, to run CenterGroup with a HigherHRNet-w32 detector and a single GPU, you can run the following:

NUM_GPUS=1
./tools/dist_test.sh configs/centergroup/coco/higherhrnet_w32_coco_512x512.py models/centergroup/centergroup_higherhrnet_w32_coco_512x512.pth $NUM_GPUS 1234

If you want to use multi-scale testing, please add the --multi-scale flag, e.g.:

./tools/dist_test.sh configs/centergroup/coco/higherhrnet_w32_coco_512x512.py models/centergroup/centergroup_higherhrnet_w32_coco_512x512.pth $NUM_GPUS 1234 --multi-scale

You can also override any other config entry with the --cfg-options flag. For example, to disable flip testing, which is enabled by default, you can run:

./tools/dist_test.sh configs/centergroup/coco/higherhrnet_w32_coco_512x512.py models/centergroup/centergroup_higherhrnet_w32_coco_512x512.pth $NUM_GPUS 1234 --cfg-options model.test_cfg.flip_test=False

You may need to modify the checkpoint path, depending on where you downloaded it, and the data_root entry in the config file, depending on where you stored your data.
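
For reference, in an MMPose-style config the dataset location is typically set through a top-level data_root variable from which annotation and image paths are built. A hypothetical excerpt (the exact field names in the shipped configs may differ):

```python
# Hypothetical excerpt of an MMPose-style config; point data_root at the
# directory where your COCO data lives.
data_root = 'data/coco'
data = dict(
    test=dict(
        ann_file=f'{data_root}/annotations/person_keypoints_val2017.json',
        img_prefix=f'{data_root}/val2017/',
    ),
)
```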

Training CenterGroup

To train a model, you have to specify its configuration file and the number of GPUs you want to use. You can optionally specify the path where you want your output checkpoints and log files to be stored, as well as an identifier for the training run. For example, to train CenterGroup on COCO with a HigherHRNet-w32 backbone on two GPUs with the default batch size, you can run the following:

python tools/train.py --cfg configs/centergroup/coco/higherhrnet_w32_coco_512x512.py --num_gpus 2 --out output --run_str my_training

As with evaluation, you can use the --cfg-options flag to override any configuration entry. For instance, to use a batch size of 24 per GPU, run:

python tools/train.py --cfg configs/centergroup/coco/higherhrnet_w32_coco_512x512.py --num_gpus 2 --out output --run_str my_training --cfg-options data.samples_per_gpu=24

Training HigherHRNet with Centers

When training CenterGroup, we first pretrain our keypoint detector, HigherHRNet, by adding an additional keypoint prediction corresponding to person centers. All checkpoints and configurations are provided in the Model Zoo (see docs/MODEL_ZOO.md), and the training code is borrowed from MMPose. To train a HigherHRNet-w32 detector on COCO on four 48GB GPUs, you can use the following command:

./tools/dist_train_mmpose.sh configs/higherhrnet_w_root/higherhrnet_w_root_w32_coco_512x512.py 4 --autoscale-lr --deterministic --options data.samples_per_gpu=24
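
To make the "centers as an extra keypoint" idea concrete, here is a simplified sketch of how a ground-truth center joint could be derived from a COCO pose annotation; the helper is hypothetical and not the repository's actual data pipeline:

```python
import numpy as np

def append_center_keypoint(pose):
    """pose: (17, 3) array of COCO keypoints as (x, y, visibility).
    Returns an (18, 3) array whose last row is a synthetic center joint."""
    visible = pose[pose[:, 2] > 0]   # keep only annotated joints
    if len(visible) == 0:
        center = np.zeros(3)         # no labeled joints: mark center missing
    else:
        # Center = mean position of the visible joints, flagged as visible.
        center = np.array([visible[:, 0].mean(), visible[:, 1].mean(), 2.0])
    return np.vstack([pose, center])
```

With a target like this, the detector learns one additional center heatmap alongside the standard keypoint heatmaps, and those center detections are what CenterGroup later groups keypoints against.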

Demo

TODO

Acknowledgements

Our code is based on mmpose, which provides a reimplementation of HigherHRNet. We thank the authors of both codebases for their great work!