Towards perspective-free object counting with deep learning
By Daniel Oñoro-Rubio and Roberto J. López-Sastre.
GRAM, University of Alcalá, Alcalá de Henares, Spain.
This is the official code repository of the work described in our ECCV 2016 paper.
This repository provides the implementation of CCNN and Hydra models for object counting.
Cite us
Did you find our code useful? Please cite us:
@inproceedings{onoro2016,
Author = {O\~noro-Rubio, D. and L\'opez-Sastre, R.~J.},
Title = {Towards perspective-free object counting with deep learning},
Booktitle = {ECCV},
Year = {2016}
}
License
The license information of this project is described in the file "LICENSE.txt".
Contents
- Requirements: software
- Requirements: hardware
- Basic installation
- Demo
- How to reproduce the results of the paper
- Remarks
- Acknowledgements
Requirements: software
- Use a Linux distribution. We have developed and tested the code on Ubuntu.
- Requirements for Caffe and pycaffe. Follow the Caffe installation instructions.
  Note: Caffe must be built with support for Python layers! (A rebuild sketch follows this list.)
  # In your Makefile.config, make sure to have this line uncommented
  WITH_PYTHON_LAYER := 1
- Python packages you need: cython, python-opencv, python-h5py, easydict, pillow (version >= 3.4.2).
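If you had to uncomment WITH_PYTHON_LAYER, remember that Caffe and pycaffe must be rebuilt for the change to take effect. With the standard Makefile-based Caffe build this is roughly (a minimal sketch; adjust the job count to your machine):
# rebuild the Caffe binaries and the pycaffe Python bindings
cd <your_caffe_root_path>
make clean
make -j8 && make pycaffe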
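The Python packages can be installed in the usual way; a possible sketch, assuming a pip-based setup (on Ubuntu, python-opencv typically comes from the system package manager instead):
pip install cython easydict h5py "pillow>=3.4.2"
sudo apt-get install python-opencv   # OpenCV Python bindings via apt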
Requirements: hardware
The code can run on either CPU or GPU, but we strongly recommend using a GPU.
- For training, we recommend using a GPU with at least 3GB of memory.
- For testing, a GPU with 2GB of memory is enough.
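To check how much memory your GPU actually has (assuming an NVIDIA GPU with working drivers; this check is a convenience, not part of the original setup):
nvidia-smi --query-gpu=memory.total --format=csv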
Basic installation (sufficient for the demo)
- Be sure you have added to your PATH the tools directory of your Caffe installation:
  export PATH=<your_caffe_root_path>/build/tools:$PATH
- Be sure you have added your pycaffe compilation to your PYTHONPATH:
  export PYTHONPATH=<your_caffe_root_path>/python:$PYTHONPATH
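A quick sanity check that both variables point to the right places (our suggestion, not part of the original instructions):
which caffe   # should print <your_caffe_root_path>/build/tools/caffe
python -c "import caffe; print(caffe.__file__)"   # should point into <your_caffe_root_path>/python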
Demo
Here we provide a demo that predicts the number of vehicles in the test images of the TRANCOS dataset, which was used in our ECCV paper. The demo uses the CCNN model described in the paper and reproduces the results reported there.
To run the demo, these are the steps to follow:
- Download the TRANCOS dataset and extract it in the path data/TRANCOS.
- Download our TRANCOS CCNN pretrained model. Follow the instructions detailed here.
- Finally, to run the demo, simply execute the following command:
  ./tools/demo.sh
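Putting the steps together, a typical session might look as follows (the archive name is a placeholder for whatever file the TRANCOS page serves, and the pretrained model is assumed to be already in place):
# extract the dataset so it ends up under data/TRANCOS (placeholder archive name)
tar -xzf TRANCOS.tar.gz -C data/
# run the CCNN demo on the TRANCOS test images
./tools/demo.sh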
How to reproduce the results of the paper
We provide the scripts needed to train and test our models (CCNN and Hydra) on the datasets used in our ECCV paper. These are the steps to follow:
Download a dataset
To download and set up a dataset we recommend following these instructions:
- TRANCOS dataset: download it using this direct link, and extract the file in the path data/TRANCOS.
- UCSD dataset: just place yourself in the $PROJECT directory and run the following script:
  ./tools/get_ucsd.sh
- UCF dataset: just place yourself in the $PROJECT directory and run the following script:
  ./tools/get_ucf.sh
Note: Make sure the folder "data/" does not already contain the dataset.
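Since the scripts expect data/ not to contain the dataset already, the safest way to re-run a download is to remove the old copy first (the folder name below is assumed; adjust it to your actual layout):
rm -rf data/ucsd   # assumed dataset folder name
./tools/get_ucsd.sh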
Download pre-trained models
All our pre-trained models can be downloaded by following these instructions.
Test the pretrained models
- Edit the corresponding script $PROJECT/experiments/scripts/DATASET_CHOSEN_test_pretrained.sh.
- Run the corresponding script:
  ./experiments/scripts/DATASET_CHOSEN_test_pretrained.sh
Note that the pre-trained models will let you reproduce the results in our paper.
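For instance, with TRANCOS as the chosen dataset the two steps might look as follows (the concrete script name assumes DATASET_CHOSEN is simply replaced by the dataset name; check experiments/scripts/ for the exact file names):
$EDITOR experiments/scripts/trancos_test_pretrained.sh   # adjust the paths to your setup
./experiments/scripts/trancos_test_pretrained.sh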
Train/test the chosen model
- Edit the launching script (e.g., $PROJECT/experiments/scripts/DATASET_CHOSEN_train_test.sh).
- Place yourself in the $PROJECT folder and run the launching script by typing:
  ./experiments/scripts/DATASET_CHOSEN_train_test.sh
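Training can take a while, so it is worth keeping a log of the run; for example (the script name is assumed to follow the same DATASET_CHOSEN pattern as above):
./experiments/scripts/ucsd_train_test.sh 2>&1 | tee ucsd_train_test.log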
Remarks
To make the code easier to distribute, this repository unifies some of the original modules and reimplements them in Python. Due to these changes in the libraries used, the results produced by this software might differ slightly from those reported in the paper.
Acknowledgements
This work is supported by the DGT projects SPIP2014-1468 and SPIP2015-01809, and by the MINECO project TEC2013-45183-R.