IIM - Crowd Localization


This repo is the official implementation of the paper Learning Independent Instance Maps for Crowd Localization. The code is developed based on the C^3 Framework.

Progress

Getting Started

Preparation

-- ProcessedData
|-- NWPU
|   |-- images
|   |   |-- 0001.jpg
|   |   |-- 0002.jpg
|   |   |-- ...
|   |   |-- 5109.jpg
|   |-- masks
|   |   |-- 0001.png
|   |   |-- 0002.png
|   |   |-- ...
|   |   |-- 3609.png
|   |-- train.txt
|   |-- val.txt
|   |-- test.txt
|   |-- val_gt_loc.txt
-- PretrainedModels
|-- hrnetv2_w48_imagenet_pretrained.pth
-- IIM
|-- datasets
|-- misc
|-- ...
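Before training, it can save time to verify that the data is laid out exactly as the tree above expects. The following is a minimal sketch of such a check; the `DATA_ROOT` location and the `check_layout` helper are illustrative, not part of the repo:

```python
# Sketch: sanity-check the expected ProcessedData layout before training.
# Directory and split-file names follow the tree above; DATA_ROOT is an
# assumption -- point it at wherever you unpacked the processed dataset.
from pathlib import Path

DATA_ROOT = Path("ProcessedData/NWPU")

def check_layout(root: Path) -> list:
    """Return the expected entries (from the tree above) that are missing."""
    expected = [
        root / "images",
        root / "masks",
        root / "train.txt",
        root / "val.txt",
        root / "test.txt",
        root / "val_gt_loc.txt",
    ]
    return [str(p) for p in expected if not p.exists()]

missing = check_layout(DATA_ROOT)
if missing:
    print("Missing entries:", missing)
else:
    print("Dataset layout looks complete.")
```

Running this once after downloading the data catches a misplaced `masks` folder or a missing split file before the data loader fails mid-run.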

Training

Computational Cost

Testing and Submitting

Visualization on the val set

Performance

The results (F1 / Precision / Recall under sigma_l) and pre-trained models on the NWPU val set, UCF-QNRF, SHT A, SHT B, FDST, and JHU:

| Method | NWPU val | UCF-QNRF | SHT A |
|---|---|---|---|
| Paper: VGG+FPN [2,3] | 77.0/80.2/74.1 | 68.8/78.2/61.5 | 72.5/72.6/72.5 |
| This Repo's Reproduction: VGG+FPN [2,3] | 77.1/82.5/72.3 | 67.8/75.7/61.5 | 71.6/75.9/67.8 |
| Paper: HRNet [1] | 80.2/84.1/76.6 | 72.0/79.3/65.9 | 73.9/79.8/68.7 |
| This Repo's Reproduction: HRNet [1] | 79.8/83.4/76.5 | 72.0/78.7/66.4 | 76.1/79.1/73.3 |

| Method | SHT B | FDST | JHU |
|---|---|---|---|
| Paper: VGG+FPN [2,3] | 80.2/84.9/76.0 | 93.1/92.7/93.5 | - |
| This Repo's Reproduction: VGG+FPN [2,3] | 81.7/88.5/75.9 | 93.9/94.7/93.1 | 61.8/73.2/53.5 |
| Paper: HRNet [1] | 86.2/90.7/82.1 | 95.5/95.3/95.8 | 62.5/74.0/54.2 |
| This Repo's Reproduction: HRNet [1] | 86.0/91.5/81.0 | 95.7/96.9/94.4 | 64.0/73.3/56.8 |
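For intuition about the numbers above: a predicted head point counts as a true positive when it lies within distance sigma_l of an unmatched ground-truth point, and F1/Precision/Recall follow from the resulting TP/FP/FN counts. Below is a simplified greedy-matching sketch of that idea (the official NWPU evaluation uses its own matching script, so treat `loc_f1` as an illustration only):

```python
# Illustrative sketch of the point-localization metric: a prediction is a
# true positive if it matches a ground-truth head point within sigma_l.
# Greedy nearest-neighbor matching here is a simplification of the
# official evaluation's matching procedure.
import math

def loc_f1(preds, gts, sigma_l):
    """preds, gts: lists of (x, y) points; returns (f1, precision, recall)."""
    unmatched_gt = list(gts)
    tp = 0
    for p in preds:
        best_i, best_d = -1, sigma_l
        for i, g in enumerate(unmatched_gt):
            d = math.dist(p, g)
            if d <= best_d:       # nearest ground truth within sigma_l
                best_i, best_d = i, d
        if best_i >= 0:
            unmatched_gt.pop(best_i)  # each GT point matches at most once
            tp += 1
    prec = tp / len(preds) if preds else 0.0
    rec = tp / len(gts) if gts else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return f1, prec, rec

# Example: two correct detections, one false positive, one missed head,
# so f1 = precision = recall = 2/3.
print(loc_f1([(10, 10), (50, 52), (200, 200)],
             [(11, 10), (50, 50), (90, 90)], sigma_l=8))
```

Each cell in the tables reports these three values as F1/Precision/Recall, which is why a method can trade precision against recall while keeping a similar F1.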

References

  1. Deep High-Resolution Representation Learning for Visual Recognition, T-PAMI, 2019.
  2. Very Deep Convolutional Networks for Large-scale Image Recognition, arXiv, 2014.
  3. Feature Pyramid Networks for Object Detection, CVPR, 2017.

For the leaderboard on the test set, please visit the Crowd benchmark. Our submissions are IIM (HRNet) and IIM (VGG16).

Video Demo

We test the pretrained HRNet model, trained on the NWPU dataset, in a real-world subway scene. Please visit bilibili or YouTube to watch the video demonstration.

Citation

If you find this project useful for your research, please cite:

@article{gao2020learning,
  title={Learning Independent Instance Maps for Crowd Localization},
  author={Gao, Junyu and Han, Tao and Yuan, Yuan and Wang, Qi},
  journal={arXiv preprint arXiv:2012.04164},
  year={2020}
}

Our code borrows heavily from the C^3 Framework, so you may also cite:

@article{gao2019c,
  title={C$^3$ Framework: An Open-source PyTorch Code for Crowd Counting},
  author={Gao, Junyu and Lin, Wei and Zhao, Bin and Wang, Dong and Gao, Chenyu and Wen, Jun},
  journal={arXiv preprint arXiv:1907.02724},
  year={2019}
}

If you use the pre-trained models in this repo (HRNet, VGG, and FPN), please cite them as well.