py-MDNet
by Hyeonseob Nam and Bohyung Han at POSTECH
Update (April, 2019)
- Migration to Python 3.6 & PyTorch 1.0
- Efficiency improvement (~5 fps)
- ImageNet-VID pretraining
- Code refactoring
Introduction
PyTorch implementation of MDNet, which runs at ~5 fps with a single CPU core and a single GPU (GTX 1080 Ti).
[Project] [Paper] [Matlab code]
If you're using this code for your research, please cite:
@InProceedings{nam2016mdnet,
author = {Nam, Hyeonseob and Han, Bohyung},
title = {Learning Multi-Domain Convolutional Neural Networks for Visual Tracking},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2016}
}
Results on OTB
- Raw results of MDNet pretrained on VOT-OTB (VOT 2013/2014/2015 sequences excluding those in OTB): Google Drive link
- Raw results of MDNet pretrained on ImageNet-VID: Google Drive link
<img src="./figs/tb100-precision.png" width="400"> <img src="./figs/tb100-success.png" width="400"> <img src="./figs/tb50-precision.png" width="400"> <img src="./figs/tb50-success.png" width="400"> <img src="./figs/otb2013-precision.png" width="400"> <img src="./figs/otb2013-success.png" width="400">
Prerequisites
- Python 3.6+
- OpenCV 3.0+
- PyTorch 1.0+ and its dependencies
- For GPU support: a GPU with roughly 3 GB of memory
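If you want to confirm your environment before running the tracker, a quick check like the one below can help. This is only a sketch and not part of the repository; it assumes the usual package names (torch for PyTorch, cv2 for OpenCV).

```python
# Minimal environment sanity check (a sketch, not part of this repository).
import sys

import cv2
import torch

assert sys.version_info >= (3, 6), "Python 3.6+ is required"
print("OpenCV:", cv2.__version__)
print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
```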
Usage
Tracking
python tracking/run_tracker.py -s DragonBaby [-d (display fig)] [-f (save fig)]
- You can provide a sequence configuration in two ways (see tracking/gen_config.py; a JSON sketch follows after these commands):
python tracking/run_tracker.py -s [seq name]
python tracking/run_tracker.py -j [json path]
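For the -j option, the JSON file describes a single sequence. Below is a minimal sketch of writing such a file; the field names used here (seq_name, img_list, init_bbox, savefig_dir, result_path) and the example bounding box are assumptions for illustration, so check tracking/gen_config.py for the keys it actually reads.

```python
# Sketch: write a sequence configuration JSON for the "-j" option.
# Field names and values are illustrative; see tracking/gen_config.py for the real keys.
import json
from glob import glob

config = {
    "seq_name": "DragonBaby",
    "img_list": sorted(glob("datasets/OTB/DragonBaby/img/*.jpg")),
    "init_bbox": [160, 83, 56, 65],  # example [x, y, w, h]; replace with the ground-truth box of frame 1
    "savefig_dir": "results/DragonBaby/figs",
    "result_path": "results/DragonBaby/result.json",
}

with open("DragonBaby.json", "w") as f:
    json.dump(config, f, indent=2)
```

The resulting file could then be passed as `python tracking/run_tracker.py -j DragonBaby.json`.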
Pretraining
- Download VGG-M (MatConvNet model) and save it as "models/imagenet-vgg-m.mat" (a quick loading check is sketched after this list)
- Pretraining on VOT-OTB
- Download VOT datasets into "datasets/VOT/vot201x"
python pretrain/prepro_vot.py
python pretrain/train_mdnet.py -d vot
- Pretraining on ImageNet-VID
- Download ImageNet-VID dataset into "datasets/ILSVRC"
python pretrain/prepro_imagenet.py
python pretrain/train_mdnet.py -d imagenet
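Before launching pretraining, it can be worth confirming that the downloaded MatConvNet file is readable. The snippet below is only a sketch: it does not parse the layers, it just verifies that the .mat file loads; how the weights are actually consumed is determined by the code under pretrain/ and the model module, not by this check.

```python
# Sketch: verify that the MatConvNet VGG-M file at the expected path can be loaded.
import scipy.io

mat = scipy.io.loadmat("models/imagenet-vgg-m.mat")
print(sorted(mat.keys()))
```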