Group Pose

This repository is an official implementation of the ICCV 2023 paper "Group Pose: A Simple Baseline for End-to-End Multi-person Pose Estimation".

☀️ If you find this work useful for your research, please kindly star our repo and cite our paper! ☀️

TODO

We are working hard on the following items.

Introduction

In this paper, we study end-to-end multi-person pose estimation and present a simple yet effective transformer approach, named Group Pose. We simply regard $K$-keypoint pose estimation as predicting a set of $N\times K$ keypoint positions, each from a keypoint query, while representing each pose with an instance query used to score the $N$ pose predictions.

(Figure: the GroupPose structure)

Motivated by the intuition that interactions among across-instance queries of different types are not directly helpful, we make a simple modification to the decoder self-attention. We replace the single self-attention over all $N\times(K+1)$ queries with two subsequent group self-attentions: (i) $N$ within-instance self-attentions, each over $K$ keypoint queries and one instance query, and (ii) $(K+1)$ same-type across-instance self-attentions, each over $N$ queries of the same type. The resulting decoder removes interactions among type-different across-instance queries, easing optimization and thus improving performance. Experimental results on MS COCO and CrowdPose show that our approach, without human box supervision, is superior to previous methods with complex decoders, and is even slightly better than ED-Pose, which uses human box supervision.
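The two group self-attentions above can be pictured as boolean attention masks over the $N\times(K+1)$ queries. The sketch below is illustrative only: the query index layout (each instance's $K$ keypoint queries followed by its one instance query) is an assumption, not the repository's actual implementation.

```python
def group_attention_masks(N, K):
    """Build boolean attention masks for the two group self-attentions.

    Queries are indexed q = n * (K + 1) + t, where instance n owns K keypoint
    queries (t = 0..K-1) and one instance query (t = K).  This index layout is
    an assumption for illustration.  True means "query q may attend to p".
    """
    total = N * (K + 1)
    within = [[False] * total for _ in range(total)]  # N within-instance groups
    across = [[False] * total for _ in range(total)]  # K+1 same-type groups
    for q in range(total):
        for p in range(total):
            if q // (K + 1) == p // (K + 1):   # same instance n
                within[q][p] = True
            if q % (K + 1) == p % (K + 1):     # same type t
                across[q][p] = True
    return within, across
```

Each row of `within` allows attention to exactly $K+1$ queries (one instance group), and each row of `across` allows attention to exactly $N$ queries (one type group); queries that are both type-different and instance-different never interact in either step.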

Model Zoo

All the checkpoints can be found here (Baidu, OneDrive & Google Drive).

Results on MS COCO val2017 set

| Method | Backbone | Loss Type | AP | AP<sub>50</sub> | AP<sub>75</sub> | AP<sub>M</sub> | AP<sub>L</sub> |
| --- | --- | --- | --- | --- | --- | --- | --- |
| PETR | ResNet-50 | HM+KR | 68.8 | 87.5 | 76.3 | 62.7 | 77.7 |
| PETR | Swin-L | HM+KR | 73.1 | 90.7 | 80.9 | 67.2 | 81.7 |
| QueryPose | ResNet-50 | BR+RLE | 68.7 | 88.6 | 74.4 | 63.8 | 76.5 |
| QueryPose | Swin-L | BR+RLE | 73.3 | 91.3 | 79.5 | 68.5 | 81.2 |
| ED-Pose | ResNet-50 | BR+KR | 71.6 | 89.6 | 78.1 | 65.9 | 79.8 |
| ED-Pose | Swin-L | BR+KR | 74.3 | 91.5 | 81.6 | 68.6 | 82.6 |
| GroupPose | ResNet-50 | KR | 72.0 | 89.4 | 79.1 | 66.8 | 79.7 |
| GroupPose | Swin-T | KR | 73.6 | 90.4 | 80.5 | 68.7 | 81.2 |
| GroupPose | Swin-L | KR | 74.8 | 91.6 | 82.1 | 69.4 | 83.0 |

HM, BR, and KR denote the heatmap, human box regression, and keypoint regression losses, respectively.

Results on MS COCO test2017 set

| Method | Backbone | Loss Type | AP | AP<sub>50</sub> | AP<sub>75</sub> | AP<sub>M</sub> | AP<sub>L</sub> |
| --- | --- | --- | --- | --- | --- | --- | --- |
| PETR | ResNet-50 | HM+KR | 67.6 | 89.8 | 75.3 | 61.6 | 76.0 |
| PETR | Swin-L | HM+KR | 70.5 | 91.5 | 78.7 | 65.2 | 78.0 |
| QueryPose | Swin-L | BR+RLE | 72.2 | 92.0 | 78.8 | 67.3 | 79.4 |
| ED-Pose | ResNet-50 | BR+KR | 69.8 | 90.2 | 77.2 | 64.3 | 77.4 |
| ED-Pose | Swin-L | BR+KR | 72.7 | 92.3 | 80.9 | 67.6 | 80.0 |
| GroupPose | ResNet-50 | KR | 70.2 | 90.5 | 77.8 | 64.7 | 78.0 |
| GroupPose | Swin-T | KR | 72.1 | 91.4 | 79.9 | 66.7 | 79.5 |
| GroupPose | Swin-L | KR | 72.8 | 92.5 | 81.0 | 67.7 | 80.3 |

Results on CrowdPose test set

| Method | Loss | AP | AP<sub>50</sub> | AP<sub>75</sub> | AP<sub>E</sub> | AP<sub>M</sub> | AP<sub>H</sub> |
| --- | --- | --- | --- | --- | --- | --- | --- |
| PETR | HM+KR | 71.6 | 90.4 | 78.3 | 77.3 | 72.0 | 65.8 |
| QueryPose | BR+RLE | 72.7 | 91.7 | 78.1 | 79.5 | 73.4 | 65.4 |
| ED-Pose | BR+KR | 73.1 | 90.5 | 79.8 | 80.5 | 73.8 | 63.8 |
| GroupPose | KR | 74.1 | 91.3 | 80.4 | 80.8 | 74.7 | 66.4 |

All methods are with the Swin-L backbone.

Installation

Requirements

The code is developed and validated with python=3.7.10, pytorch=1.7.1, and cuda=11.0. Higher versions may also work.

1. Create your own Python environment with Anaconda.

```shell
conda create -n grouppose python=3.7.10
```

2. Activate the grouppose environment and install PyTorch, torchvision, and the other required Python packages.

```shell
conda activate grouppose
# pytorch, torchvision
conda install pytorch==1.7.1 torchvision==0.8.2 cudatoolkit=11.0 -c pytorch
# others
pip install pycocotools timm termcolor opencv-python addict yapf scipy
```

3. Clone this repo.

```shell
git clone https://github.com/Michel-liu/GroupPose.git
cd GroupPose
```

4. Compile the CUDA operators.

```shell
cd models/grouppose/ops
python setup.py build install
# unit test (should see "all checking is True")
python test.py
```

5. To evaluate on CrowdPose, you also need to install the crowdposetools package following the crowdpose-api instructions.

Data preparation

For MS COCO dataset, please download and extract COCO 2017 train and val images with annotations from http://cocodataset.org. We expect the directory structure to be the following:

```
path/to/coco/
├── annotations/  # annotation json files
|   ├── person_keypoints_train2017.json
|   └── person_keypoints_val2017.json
└── images/
    ├── train2017/    # train images
    └── val2017/      # val images
```
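Before launching training, it can save time to verify the layout above. The helper below is not part of this repository; it is a small sanity check assuming only the directory structure shown above.

```python
import os

# Expected MS COCO layout (paths relative to the dataset root), as shown above.
COCO_PATHS = [
    "annotations/person_keypoints_train2017.json",
    "annotations/person_keypoints_val2017.json",
    "images/train2017",
    "images/val2017",
]

def missing_coco_paths(root):
    """Return the expected COCO paths that are absent under `root`."""
    return [p for p in COCO_PATHS if not os.path.exists(os.path.join(root, p))]
```

If `missing_coco_paths("path/to/coco")` returns an empty list, the dataset is laid out as the code expects.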

For CrowdPose dataset, please download the images and annotations from CrowdPose Repository. The directory structure looks like this:

```
path/to/crowdpose/
├── json/  # annotation json files
|   ├── crowdpose_train.json
|   ├── crowdpose_val.json
|   ├── crowdpose_test.json
|   └── crowdpose_trainval.json (generated by util/crowdpose_concat_train_val.py)
└── images/
    ├── 100000.jpg
    ├── 100001.jpg
    ├── 100002.jpg
    ├── ...
```
Usage

We provide commands to train Group Pose on a single node with 8 GPUs. Training takes around 40 hours on a single machine with 8 A100 cards. You are also free to modify the config file config/grouppose.py to evaluate different settings.

Training on MS COCO

ResNet-50:

```shell
python -m torch.distributed.launch \
    --nproc_per_node=8 \
    --master_port 29579 \
    main.py -c config/grouppose.py \
    --coco_path <path/to/coco/> \
    --output_dir <path/to/output>
```

Swin-T:

```shell
python -m torch.distributed.launch \
    --nproc_per_node=8 \
    --master_port 29579 \
    main.py -c config/grouppose.py \
    --backbone swin_T_224_1k \
    --swin_pretrain_path <path/to/swin> \
    --coco_path <path/to/coco> \
    --output_dir <path/to/output>
```

Swin-L:

```shell
python -m torch.distributed.launch \
    --nproc_per_node=8 \
    --master_port 29579 \
    main.py -c config/grouppose.py \
    --backbone swin_L_384_22k \
    --swin_pretrain_path <path/to/swin> \
    --coco_path <path/to/coco> \
    --output_dir <path/to/output> \
    --options batch_size=1
```

Training on CrowdPose

To train on CrowdPose with Swin-L, add the --dataset_file=crowdpose flag to your command and change --coco_path to your CrowdPose path (this run also uses 2 nodes with 16 cards in total).

```shell
python -m torch.distributed.launch \
    --nproc_per_node=8 \
    --master_port 29579 \
    main.py -c config/grouppose.py \
    --backbone swin_L_384_22k \
    --swin_pretrain_path <path/to/swin> \
    --coco_path <path/to/crowdpose> \
    --output_dir <path/to/output> \
    --dataset_file=crowdpose \
    --options batch_size=1 num_body_points=14 epochs=80 lr_drop=70
```

Evaluation

You only need to add the --resume <path/to/checkpoint> and --eval flags to the corresponding training command.

MS COCO with Swin-L:

```shell
python -m torch.distributed.launch \
    --nproc_per_node=8 \
    --master_port 29579 \
    main.py -c config/grouppose.py \
    --backbone swin_L_384_22k \
    --swin_pretrain_path <path/to/swin> \
    --coco_path <path/to/coco> \
    --output_dir <path/to/output> \
    --options batch_size=1 \
    --resume <path/to/checkpoint> \
    --eval
```

CrowdPose with Swin-L:

```shell
python -m torch.distributed.launch \
    --nproc_per_node=8 \
    --master_port 29579 \
    main.py -c config/grouppose.py \
    --backbone swin_L_384_22k \
    --swin_pretrain_path <path/to/swin> \
    --coco_path <path/to/crowdpose> \
    --output_dir <path/to/output> \
    --dataset_file=crowdpose \
    --options batch_size=1 num_body_points=14 epochs=80 lr_drop=70 \
    --resume <path/to/checkpoint> \
    --eval
```

Similarly, you can evaluate other training settings.

License

Group Pose is released under the Apache 2.0 license. Please see the LICENSE file for more information.

Acknowledgement

This project is built on the open-source repositories Deformable DETR, DINO, and ED-Pose. Thanks to their authors for the well-organized code!

Citation

```bibtex
@inproceedings{liu2023GroupPose,
  title       = {Group Pose: A Simple Baseline for End-to-End Multi-person Pose Estimation},
  author      = {Liu, Huan and Chen, Qiang and Tan, Zichang and Liu, Jiangjiang and Wang, Jian and Su, Xiangbo and Li, Xiaolong and Yao, Kun and Han, Junyu and Ding, Errui and Zhao, Yao and Wang, Jingdong},
  booktitle   = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
  year        = {2023}
}
```