Code for our CVPR 2021 paper "Glance and Gaze: Inferring Action-aware Points for One-Stage Human-Object Interaction Detection".

Getting Started

Installation

Requires pytorch=0.4.1 and torchvision=0.2.1.

git clone https://github.com/SherlockHolmes221/GGNet.git
cd GGNet
pip install -r requirements.txt
cd src/lib/models/networks/DCNv2
./make.sh
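
After compiling DCNv2, a quick sanity check can confirm that the environment and the compiled extension are usable. This is a minimal sketch, assuming the usual DCNv2 package layout (a dcn_v2 module exposing a DCN class); adjust the import path to wherever the extension was built:

# Minimal environment sanity check (sketch; the DCNv2 module/class names
# below follow the usual DCNv2 package layout and are an assumption).
import torch
import torchvision

print("torch:", torch.__version__)              # expected 0.4.1
print("torchvision:", torchvision.__version__)  # expected 0.2.1
print("CUDA available:", torch.cuda.is_available())

# If ./make.sh succeeded, the deformable-convolution op should import.
# Run from src/lib/models/networks so the import resolves; a CUDA-capable
# GPU is required for the forward pass.
from DCNv2.dcn_v2 import DCN
layer = DCN(64, 64, kernel_size=3, stride=1, padding=1).cuda()
out = layer(torch.randn(1, 64, 32, 32).cuda())
print("DCN forward OK, output shape:", tuple(out.shape))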

Training and Test

Dataset Preparation

  1. HICO-DET: organize the files in the Dataset folder as follows (a layout-check sketch follows this list):

    |-- Dataset/
    |   |-- <hico-det>/
    |       |-- images
    |       |   |-- test2015
    |       |   |-- train2015
    |       |-- annotations

    The annotations are provided here.

  2. V-COCO: organize the files in the Dataset folder as follows:

    |-- Dataset/
    |   |-- <verbcoco>/
    |       |-- images
    |       |   |-- val2014
    |       |   |-- train2014
    |       |-- annotations

    The annotations are provided here.

  3. Download the pre-trained models trained on the COCO object detection dataset, provided by CenterNet (Hourglass104), and put them into the models folder.
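
Before training, it can help to verify the layout above. The sketch below assumes the Dataset folder sits in the current working directory; hico-det and verbcoco are the placeholder folder names from the trees above and should be replaced with your actual directory names:

# Sanity-check the expected Dataset/ layout (sketch).
# NOTE: "hico-det" and "verbcoco" are placeholders -- substitute your names.
import os

EXPECTED = {
    "hico-det": ("images/train2015", "images/test2015", "annotations"),
    "verbcoco": ("images/train2014", "images/val2014", "annotations"),
}

for dataset, subdirs in EXPECTED.items():
    for sub in subdirs:
        path = os.path.join("Dataset", dataset, *sub.split("/"))
        print("%-45s %s" % (path, "ok" if os.path.isdir(path) else "MISSING"))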

Training and Testing

sh experiments/hico/hoidet_hico_hourglass.sh 
sh experiments/vcoco/hoidet_vcoco_hourglass.sh 

Evaluation

python src/lib/eval/hico_eval_de_ko.py --exp hoidet_hico_ggnet 
python src/lib/eval/vcoco_eval.py --exp hoidet_vcoco_ggnet 
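
For context, the standard HOI evaluation protocol used by both benchmarks scores a detected (human, object, action) triplet as a true positive only when the human box and the object box each overlap their ground-truth counterparts with an IoU of at least 0.5 and the interaction class matches. The helper below sketches that matching rule; it is illustrative, not code from this repository:

# Sketch of the standard HOI true-positive test: a triplet matches a
# ground-truth triplet only if BOTH boxes pass the IoU threshold and the
# interaction class agrees. (Illustrative helpers, not from this repo.)

def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / (union + 1e-10)

def is_hoi_true_positive(det, gt, thresh=0.5):
    """det and gt: dicts with 'human_box', 'object_box', 'hoi_class'."""
    return (det["hoi_class"] == gt["hoi_class"]
            and iou(det["human_box"], gt["human_box"]) >= thresh
            and iou(det["object_box"], gt["object_box"]) >= thresh)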

Results on HICO-DET and V-COCO

Our results on the HICO-DET dataset (def = Default setting, ko = Known-Object setting):

Model        | Full (def) | Rare (def) | Non-Rare (def) | Full (ko) | Rare (ko) | Non-Rare (ko) | FPS | Download
-------------|------------|------------|----------------|-----------|-----------|---------------|-----|---------
hourglass104 | 23.47      | 16.48      | 25.60          | 27.36     | 20.23     | 29.48         | 9   | model

Our results on the V-COCO dataset:

Model        | AP_role | Download
-------------|---------|---------
hourglass104 | 54.7    | model

Citation

@inproceedings{zhong2021glance,
  title={Glance and Gaze: Inferring Action-aware Points for One-Stage Human-Object Interaction Detection},
  author={Zhong, Xubin and Qu, Xian and Ding, Changxing and Tao, Dacheng},
  booktitle={CVPR},
  year={2021}
}

Acknowledgements

PPDM