AutoScale_localization

Structure

AutoScale_localization
|-- data                # scripts that generate the distance label targets
|-- model               # pretrained model path
|-- README.md           # this README
|-- centerloss.py       # center loss
|-- config.py           # configuration
|-- dataset.py          # dataset loader
|-- find_contours.py    # contour extraction for localization
|-- fpn.py              # FPN network definition
|-- image.py            # image loading and preprocessing
|-- make_npydata.py     # build .npy lists of dataset paths
|-- rate_model.py       # scale rate model
|-- val.py              # evaluation script

Visualizations

Some localization-based results.

Figure: Qualitative visualization of distance label maps given by the proposed AutoScale.

Result of detected person locations

Figure: Red points are the ground truth. To present our localization results more clearly, we generate bounding boxes (green boxes) from the KNN distance of each point, following the protocol of LSC-CNN for comparison.
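The box-generation rule described above can be sketched as follows. This is an illustrative implementation, not the repository's exact code: the function name, the choice of `k`, and the use of the mean KNN distance as the box side are assumptions.

```python
import numpy as np
from scipy.spatial import KDTree

def boxes_from_points(points, k=4):
    """For each predicted head point, derive a square box whose side is
    the mean distance to its k nearest neighbours (illustrative sketch;
    the repository's exact rule may differ).

    points: (N, 2) array of (x, y) head locations.
    Returns an (N, 4) array of (x1, y1, x2, y2) boxes.
    """
    points = np.asarray(points, dtype=float)
    tree = KDTree(points)
    # Query k+1 neighbours: the first hit is the point itself (distance 0).
    dists, _ = tree.query(points, k=min(k + 1, len(points)))
    side = dists[:, 1:].mean(axis=1)      # mean KNN distance per point
    half = side[:, None] / 2.0
    return np.hstack([points - half, points + half])
```

Tying the box size to local point density keeps boxes small in dense regions and larger in sparse ones, which is why the KNN distance is a natural choice here.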

Environment

- python >= 3.6
- pytorch >= 1.0
- opencv-python >= 4.0
- scipy >= 1.4.0
- h5py >= 2.10
- pillow >= 7.0.0
- imageio >= 1.18
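A quick way to confirm the dependencies above are importable is a small helper like the one below. Note the import names are assumptions about each package's module name: opencv-python imports as `cv2`, pillow imports as `PIL`.

```python
import importlib

def check_environment(modules):
    """Return {import_name: version string, 'unknown', or None} so you
    can see at a glance which requirements are missing."""
    report = {}
    for name in modules:
        try:
            mod = importlib.import_module(name)
            report[name] = getattr(mod, "__version__", "unknown")
        except ImportError:
            report[name] = None          # not installed
    return report

# Example: check_environment(["torch", "cv2", "scipy", "h5py", "PIL", "imageio"])
```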

Datasets

Generate target

cd data<br /> Edit `distance_generate_xx.py` to set the path to your original dataset folder.<br /> Run python distance_generate_xx.py

Here `xx` stands for the dataset name: sh, jhu, qnrf, or nwpu.
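Conceptually, a `distance_generate_xx.py` script turns head annotations into a per-pixel distance label map. The sketch below shows one way to compute such a map with a Euclidean distance transform; it is a simplified stand-in, and the repository's actual targets may be clipped, normalized, or discretized differently.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def distance_label_map(shape, points):
    """Per-pixel distance to the nearest annotated head.

    shape:  (H, W) of the image.
    points: iterable of (x, y) head annotations.
    """
    mask = np.ones(shape, dtype=bool)
    for x, y in points:
        xi, yi = int(round(x)), int(round(y))
        if 0 <= yi < shape[0] and 0 <= xi < shape[1]:
            mask[yi, xi] = False          # zero out every head location
    # EDT of the "no head here" mask = distance to the nearest head pixel.
    return distance_transform_edt(mask)
```

The distance transform makes this a single vectorized pass instead of an N-points-per-pixel loop, which matters for high-resolution crowd images.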

Model

Download the pretrained model from Baidu-Disk (password: wqf4) or Google-Drive.

Quick test

Edit `make_npydata.py` to set the path to your original dataset folder.<br /> Run python make_npydata.py

References

If you are interested in AutoScale, please cite our work:

@article{autoscale,
  title={AutoScale: Learning to Scale for Crowd Counting},
  author={Xu, Chenfeng and Liang, Dingkang and Xu, Yongchao and Bai, Song and Zhan, Wei and Tomizuka, Masayoshi and Bai, Xiang},
  journal={International Journal of Computer Vision},
  year={2022}
}

and

@inproceedings{xu2019learn,
  title={Learn to Scale: Generating Multipolar Normalized Density Maps for Crowd Counting},
  author={Xu, Chenfeng and Qiu, Kai and Fu, Jianlong and Bai, Song and Xu, Yongchao and Bai, Xiang},
  booktitle={Proceedings of the IEEE International Conference on Computer Vision},
  pages={8382--8390},
  year={2019}
}