# DA-Net
This is a PyTorch implementation of the IEEE Access paper *DA-Net: Learning the Fine-Grained Density Distribution With Deformation Aggregation Network*.
<!-- ![](https://github.com/BigTeacher-xyx/DA-Net/blob/master/pictures/whole.gif) -->

## Environment
## Getting Started
### Data Preparation
Dataset | Density map generation method
---|---
ShanghaiTech Part A | Geometry-adaptive kernels
ShanghaiTech Part B | Fixed Gaussian kernel: σ = 4
UCSD | Fixed Gaussian kernel: σ = 4
WorldExpo’10 | Perspective maps
UCF_CC_50 | Geometry-adaptive kernels
TRANCOS | Fixed Gaussian kernel: σ = 4
For ShanghaiTech Part A and UCF_CC_50, use the code in `data_preparation/geometry-kernel`; for WorldExpo’10, use the code in `data_preparation/perspective`; for UCSD and TRANCOS, use the code in `data_preparation/normal`. In `geometry-kernel`, we augment the data by cropping 100 patches, each 1/4 the size of the original image. In `perspective`, we augment the data by cropping 10 patches of size 256×256. In `normal`, no augmentation is performed. A minimal sketch of the two kernel strategies is shown below.
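The density map generation step is independent of the network itself. The sketch below illustrates the two kernel strategies, assuming head annotations are given as an `(N, 2)` array of `(x, y)` pixel coordinates; the function names and the geometry-adaptive defaults (`beta = 0.3`, `k = 3`) are illustrative assumptions, not this repository's actual API.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.spatial import KDTree


def fixed_kernel_density_map(points, height, width, sigma=4.0):
    """Fixed kernel: one unit impulse per annotated head, blurred with a constant sigma."""
    density = np.zeros((height, width), dtype=np.float32)
    for x, y in points:
        row = min(max(int(y), 0), height - 1)
        col = min(max(int(x), 0), width - 1)
        density[row, col] += 1.0
    # Gaussian blurring preserves the integral, so density.sum() stays ~= number of heads.
    return gaussian_filter(density, sigma)


def geometry_adaptive_density_map(points, height, width, beta=0.3, k=3):
    """Geometry-adaptive kernel: per-head sigma from the mean distance to its k nearest neighbours."""
    density = np.zeros((height, width), dtype=np.float32)
    if len(points) == 0:
        return density
    tree = KDTree(points)
    # k + 1 because the nearest neighbour of each point is the point itself.
    distances, _ = tree.query(points, k=min(k + 1, len(points)))
    distances = np.atleast_2d(distances)
    for (x, y), dist in zip(points, distances):
        impulse = np.zeros((height, width), dtype=np.float32)
        row = min(max(int(y), 0), height - 1)
        col = min(max(int(x), 0), width - 1)
        impulse[row, col] = 1.0
        if len(points) > 1:
            sigma = beta * float(np.mean(dist[1:]))
        else:
            sigma = float(np.mean([height, width])) / 4.0  # fallback for a single annotation
        density += gaussian_filter(impulse, sigma)  # unoptimised: one blur per head
    return density
```

In both cases the integral of the map equals the annotated head count, which is what the counting loss and the evaluation metrics rely on.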
### Run
- Train: `python train.py`
  - a. Set `pretrained_vgg16 = False`
  - b. Set `fine_tune = False`
- Test: `python test.py`
  - a. Set `save_output = True` to save output density maps
- Pretrained models:
  - [Shanghai Tech A]
  - [Shanghai Tech B]
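With `save_output = True`, the saved maps can be scored offline by summing each map into a count. A minimal sketch, assuming predictions and ground truth are stored as same-named `.npy` files (the directory names and file format below are assumptions, not the repository's actual output layout):

```python
import glob
import os

import numpy as np


def evaluate(pred_dir, gt_dir):
    """MAE / MSE between predicted and ground-truth counts; each count = sum of a density map."""
    abs_errors, sq_errors = [], []
    for pred_path in sorted(glob.glob(os.path.join(pred_dir, "*.npy"))):
        gt_path = os.path.join(gt_dir, os.path.basename(pred_path))
        pred_count = float(np.load(pred_path).sum())
        gt_count = float(np.load(gt_path).sum())
        abs_errors.append(abs(pred_count - gt_count))
        sq_errors.append((pred_count - gt_count) ** 2)
    mae = float(np.mean(abs_errors))
    mse = float(np.sqrt(np.mean(sq_errors)))  # crowd-counting papers report RMSE under the name "MSE"
    return mae, mse


if __name__ == "__main__":
    print(evaluate("output/density_maps", "data/ground_truth"))
```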
## Cite
If you use the code, please cite the following paper:
@ARTICLE{8497050,
  author={Z. Zou and X. Su and X. Qu and P. Zhou},
  journal={IEEE Access},
  title={DA-Net: Learning the Fine-Grained Density Distribution With Deformation Aggregation Network},
  year={2018},
  volume={6},
  number={},
  pages={60745--60756},
  keywords={Feature extraction;Strain;Kernel;Adaptation models;Diamond;Switches;Training;Crowd counting;deformable convolution;adaptive receptive fields;fine-grained density distribution},
  doi={10.1109/ACCESS.2018.2875495},
  ISSN={2169-3536},
  month={},
}