PackNet: Adding Multiple Tasks to a Single Network by Iterative Pruning
Paper: https://arxiv.org/abs/1711.05769

Pretrained models are available here: https://uofi.box.com/s/zap2p03tnst9dfisad4u0sfupc0y1fxt
Datasets in PyTorch format are available here: https://uofi.box.com/s/ixncr3d85guosajywhf7yridszzg5zsq
The PyTorch-friendly Places365 dataset can be downloaded from http://places2.csail.mit.edu/download.html
Place the downloaded models in checkpoints/ and the unzipped datasets in data/.
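
A minimal setup sketch, assuming the downloads land in the current directory (the file and archive names below are placeholders; use whatever the Box and Places365 links actually provide):

# Expected layout: pretrained .pt files under checkpoints/, unzipped datasets under data/.
mkdir -p checkpoints data
mv *.pt checkpoints/                      # placeholder: move the downloaded model files
unzip pytorch_datasets.zip -d data/       # placeholder archive name for the dataset download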

Errors (%) of the provided pretrained models (ImageNet entries are top-1 error, with top-5 error in parentheses):

Dataset         VGG-16 LwF      VGG-16          VGG-16 BN       ResNet-50       DenseNet-121
ImageNet        36.58 (14.75)   29.19 (9.90)    27.10 (8.70)    24.33 (7.17)    25.51 (7.85)
CUBS            34.24           22.56           20.43           19.59           20.11
Stanford Cars   22.07           17.09           14.92           14.03           16.18
Flowers         12.15           11.07           8.59            8.12            9.07

Note that the numbers in the paper are averaged over multiple runs for each ordering of datasets, whereas the pretrained models provided here correspond to a single dataset addition ordering: (c) CUBS Birds, (s) Stanford Cars, (f) Flowers, hence the csf prefix in the checkpoint filenames.

These numbers were obtained by evaluating the models on a Titan X (Pascal). Numbers on other GPUs might differ slightly (~0.1%) owing to cuDNN algorithm selection; see:
https://discuss.pytorch.org/t/slightly-different-results-on-k-40-v-s-titan-x/10064

Requirements:

Python 2.7 or 3.x
torch==0.2.0.post3
torchvision==0.1.9
torchnet (pip install git+https://github.com/pytorch/tnt.git@master)
tqdm (pip install tqdm)
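
A consolidated install sketch for the pins above (older PyTorch releases such as 0.2.0 may require a platform-specific wheel from pytorch.org rather than a plain pip install):

pip install torch==0.2.0.post3 torchvision==0.1.9
pip install git+https://github.com/pytorch/tnt.git@master
pip install tqdm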

Training:

Check out the scripts in src/scripts.
Run all code from the src/ directory, e.g. ./scripts/run_all.sh
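For example, starting from the repository root (run_all.sh is the script mentioned above; the other scripts in src/scripts follow the same pattern):

cd src                    # the scripts assume src/ as the working directory
ls scripts/               # inspect the provided scripts
./scripts/run_all.sh      # launch the example pipeline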

Eval:

cd src  # Run everything from src/

# Pruning-based models.
python main.py --mode eval --dataset cubs_cropped \
  --loadname ../checkpoints/csf_0.75,0.75,-1_vgg16_0.5-nobias-nobn_1.pt

# LwF models.
python lwf.py --mode eval --dataset cubs_cropped \
  --loadname ../checkpoints/csf_lwf.pt
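
Since the csf checkpoint was trained with all three added tasks, it should also be evaluable on the other two datasets by changing --dataset; the identifiers below (stanford_cars_cropped, flowers) are assumptions, so check the scripts in src/scripts for the exact names.

# Assumed dataset identifiers; verify against src/scripts before running.
python main.py --mode eval --dataset stanford_cars_cropped \
  --loadname ../checkpoints/csf_0.75,0.75,-1_vgg16_0.5-nobias-nobn_1.pt
python main.py --mode eval --dataset flowers \
  --loadname ../checkpoints/csf_0.75,0.75,-1_vgg16_0.5-nobias-nobn_1.pt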