PCC Net: Perspective Crowd Counting via Spatial Convolutional Network
This is an official implementation of the paper "PCC Net: Perspective Crowd Counting via Spatial Convolutional Network".
In the paper, the experiments are conducted on three popular datasets: Shanghai Tech, UCF_CC_50 and WorldExpo'10. Specifically, Shanghai Tech Part B contains crowd images with the same resolution. For easier data preparation, we only release the pre-trained model on the ShanghaiTech Part B dataset in this repo.
Branches
- ori_pt0.2_py2: the original version.
- ori_pt1_py3: the current version.
- vgg_pt1_py3: vgg-backbone PCC Net (higher performance).
Requirements
- Python 3.x
- Pytorch 1.x
- TensorboardX (pip)
- torchvision (pip)
- easydict (pip)
- pandas (pip)
Data preparation
- Download the original ShanghaiTech Dataset [Link: Dropbox / BaiduNetdisk]
- Resize the images and the locations of key points.
- Generate the density maps by using the code (see the sketch below).
- Generate the segmentation maps.
We also provide the processed Part B dataset for training. [Link]
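The following is a minimal sketch of the preparation steps for a single image, assuming the head annotations are given as (x, y) pixel coordinates. The target resolution, Gaussian kernel width, threshold for the segmentation map, and the use of Pillow/SciPy are assumptions for illustration, not the exact settings of the released preprocessing code.

```python
# Minimal sketch: resize an image and its head annotations, then build the
# density map and a coarse segmentation map. Target size, sigma, and the
# thresholding rule for the segmentation map are assumptions.
import numpy as np
from PIL import Image
from scipy.ndimage import gaussian_filter

def prepare_sample(img_path, points, target_hw=(768, 1024), sigma=4):
    """points: array-like of shape (N, 2) with (x, y) head locations."""
    img = Image.open(img_path).convert('RGB')
    w, h = img.size
    th, tw = target_hw
    img = img.resize((tw, th), Image.BILINEAR)

    # Scale the key-point coordinates with the same factors as the image.
    pts = np.asarray(points, dtype=np.float32).reshape(-1, 2)
    pts[:, 0] *= tw / w
    pts[:, 1] *= th / h

    # Density map: place a unit impulse at each head, then blur with a
    # fixed Gaussian so the map still sums to the head count.
    density = np.zeros((th, tw), dtype=np.float32)
    for x, y in pts:
        xi, yi = min(max(int(x), 0), tw - 1), min(max(int(y), 0), th - 1)
        density[yi, xi] += 1.0
    density = gaussian_filter(density, sigma)

    # Coarse segmentation map: mark pixels with non-negligible density as
    # head region (an assumed rule, not necessarily the one used here).
    seg = (density > 1e-3).astype(np.uint8)
    return img, density, seg
```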
Training the model
- Run train_lr.py:
python train_lr.py
- See the training outputs:
tensorboard --logdir=exp --port=6006
In the experiments, training and testing for 800 epochs takes 21 hours on a GTX 1080Ti.
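For reference, the curves viewed in TensorBoard come from scalars written into the exp directory. A minimal sketch of such logging is shown below; the tag names and the train/eval stubs are assumptions, not the actual internals of train_lr.py.

```python
# Illustrative only: how training loss and test-set MAE/MSE end up under
# `tensorboard --logdir=exp`. Tag names and the stubs are assumptions.
from tensorboardX import SummaryWriter

def train_one_epoch():
    return 0.1            # stand-in for the real training loop's average loss

def evaluate():
    return 8.0, 13.0      # stand-in for test-set MAE and MSE

writer = SummaryWriter(log_dir='exp')   # matches --logdir=exp above
for epoch in range(800):
    loss = train_one_epoch()
    mae, mse = evaluate()
    writer.add_scalar('train/loss', loss, epoch)
    writer.add_scalar('test/mae', mae, epoch)
    writer.add_scalar('test/mse', mse, epoch)
writer.close()
```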
Experimental results
Quantitative results
We show the TensorBoard visualization results below. The MAE and MSE are computed on the test set; the other curves are training losses.
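For clarity, the counting metrics compare the predicted count of each image (the sum of its predicted density map) with the ground-truth count over the whole test set. A minimal sketch, assuming pred_maps and gt_counts are already collected from a test run:

```python
# Standard crowd-counting metrics: the predicted count of an image is the
# integral (sum) of its density map. Following the common convention in the
# crowd-counting literature, "MSE" here is the root of the mean squared error.
import numpy as np

def counting_errors(pred_maps, gt_counts):
    pred_counts = np.array([m.sum() for m in pred_maps], dtype=np.float64)
    gt_counts = np.asarray(gt_counts, dtype=np.float64)
    mae = np.abs(pred_counts - gt_counts).mean()
    mse = np.sqrt(((pred_counts - gt_counts) ** 2).mean())
    return mae, mse
```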
Visualization results
Visualization results on the test set are shown below. Column 1: input image; Column 2: density map GT; Column 3: density map prediction; Column 4: segmentation map GT; Column 5: segmentation map prediction.
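A minimal sketch of how such a five-column panel can be assembled with matplotlib; the function and variable names are illustrative assumptions, and the repo's own visualization code may differ.

```python
# Illustrative 5-column panel: image, density GT/prediction, segmentation
# GT/prediction. All inputs are assumed to be numpy arrays for one sample.
import matplotlib.pyplot as plt

def show_sample(img, den_gt, den_pred, seg_gt, seg_pred, out_path='vis.png'):
    panels = [(img, 'input'), (den_gt, 'density GT'), (den_pred, 'density pred'),
              (seg_gt, 'seg GT'), (seg_pred, 'seg pred')]
    fig, axes = plt.subplots(1, 5, figsize=(20, 4))
    for ax, (data, title) in zip(axes, panels):
        cmap = None if data.ndim == 3 else 'jet'   # RGB image vs. 2-D map
        ax.imshow(data, cmap=cmap)
        ax.set_title(title)
        ax.axis('off')
    fig.savefig(out_path, bbox_inches='tight')
    plt.close(fig)
```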
Citation
If you use the code, please cite the following paper: