Compacting, Picking and Growing (CPG)

This is the official PyTorch implementation of CPG, a lifelong learning algorithm for object classification. For details about CPG, please refer to the paper Compacting, Picking and Growing for Unforgetting Continual Learning (Slides, Poster).

Citing Paper

Please cite the following paper if this code helps your research:

@inproceedings{hung2019compacting,
title={Compacting, Picking and Growing for Unforgetting Continual Learning},
author={Hung, Ching-Yi and Tu, Cheng-Hao and Wu, Cheng-En and Chen, Chien-Hung and Chan, Yi-Ming and Chen, Chu-Song},
booktitle={Advances in Neural Information Processing Systems},
pages={13647--13657},
year={2019}
}

Dependencies

Python>=3.6
PyTorch>=1.0
tqdm

Experiment1 (Compact 20 tasks into VGG16 network)

Demo

Step 1. Download CIFAR-100 and organize the folders into the PyTorch data-loading format

$ cifar2png cifar100superclass path/to/cifar100png

Step 2. Fill the dataset path into dataset.py

    train_dataset = datasets.ImageFolder(
        'path_to_train_folder/{}'.format(dataset_name),
        train_transform)
    val_dataset = datasets.ImageFolder(
        'path_to_test_folder/{}'.format(dataset_name),
        transforms.Compose([
            transforms.ToTensor(),
            normalize,
        ]))

Step 3. Run baseline_cifar100.sh or finetune_cifar100_normal.sh in the experiment1 folder

There is one important argument in baseline_cifar100.sh and finetune_cifar100_normal.sh:

$ bash experiment1/baseline_cifar100.sh

or

$ bash experiment1/finetune_cifar100_normal.sh

After this step, the accuracy goal for each task is recorded in the JSON file we specify.
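The goal file is plain JSON mapping task names to target accuracies stored as strings. A minimal sketch of reading it back (the task names and values here are placeholders, not the repository's actual ones):

```python
import json

# Placeholder goal-file content; the baseline run writes the real values.
raw = '{"task1": "0.670", "task2": "0.782"}'

# Values are stored as strings, so convert before comparing accuracies.
goals = {task: float(acc) for task, acc in json.loads(raw).items()}
print(goals["task2"])  # -> 0.782
```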

Step 4. Run the CPG algorithm to learn the 20 tasks sequentially

$ bash experiment1/CPG_cifar100_scratch_mul_1.5.sh

Step 5. Inference

$ bash experiment1/inference_CPG_cifar100_result.sh

CPG-VGG16 Checkpoints on CIFAR-100 Twenty Tasks.

Unzip "experiment1_ckeckpoints.zip" into the "checkpoints" folder.

| Task | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 |
|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|
| Acc. | 67.0 | 78.2 | 77.0 | 79.4 | 85.0 | 83.8 | 80.0 | 82.8 | 81.6 | 87.6 | 86.8 | 82.6 | 87.6 | 81.4 | 51.8 | 71.2 | 69.4 | 70.2 | 86.6 | 92.0 |

Experiment2 (Compact 6 tasks into VGG16/ResNet50 network)

Step 1. Download multiple datasets

Step 2. Adjust the set_dataset_paths function in utils.py

Step 3. Manually write each task's accuracy goal into a JSON file

For example, for VGG16, I will create logs/baseline_imagenet_acc_custom_vgg.txt and write the accuracy goals according to the Piggyback paper:

{"imagenet": "0.7336", "cubs_cropped": "0.8023", "stanford_cars_cropped": "0.9059", "flowers": "0.9545", "wikiart": "0.7332", "sketches": "0.7808"}
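The goal file above can be produced with a few lines of Python (the values are copied verbatim from the JSON shown above; the dataset keys must match the names used by the scripts):

```python
import json
import os

# Accuracy goals from the Piggyback numbers quoted in the README.
goals = {"imagenet": "0.7336", "cubs_cropped": "0.8023",
         "stanford_cars_cropped": "0.9059", "flowers": "0.9545",
         "wikiart": "0.7332", "sketches": "0.7808"}

# Write the goal file where the scripts expect it.
os.makedirs("logs", exist_ok=True)
with open("logs/baseline_imagenet_acc_custom_vgg.txt", "w") as f:
    json.dump(goals, f)
```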

Step 4. Run CPG_imagenet_vgg.sh once per task and manually choose the best pruning ratio each time

For example, I will first gradually prune the first task (ImageNet) by setting line 49 in CPG_imagenet_vgg.sh:

line 49: for task_id in `seq 1 1`; do

After gradual pruning, I will check the record.txt file in the checkpoint path (in my case, checkpoints/CPG/custom_vgg/imagenet/gradual_prune/record.txt) and copy the checkpoint file from the appropriate pruning-ratio folder to the gradual_prune folder.

For example, there will originally be folders named 0.1, 0.2, 0.3, ..., 0.95 in the checkpoints/CPG/custom_vgg/imagenet/gradual_prune/ folder. According to the record.txt inside it, I found that 0.4 is the best pruning ratio (0.4 is the best if its checkpoint's accuracy is above the goal of 73.36 while 0.5's is below it).

Thus, I copied the checkpoint from the 0.4 folder to its parent folder.

In checkpoints/CPG/custom_vgg/imagenet/gradual_prune/:

$ cp 0.4/checkpoint-46.pth.tar ./checkpoint-46.pth.tar
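The selection rule described above (take the largest pruning ratio whose checkpoint still meets the accuracy goal) can be sketched as a small helper. This is a hypothetical illustration: the function name and the ratio-to-accuracy mapping are assumptions, and the real record.txt layout may differ.

```python
def best_pruning_ratio(records, goal):
    """Pick the largest pruning ratio whose accuracy still meets the goal.

    records: dict mapping pruning ratio -> validation accuracy (assumed
    to be parsed from record.txt; the real file format may differ).
    """
    ok = [ratio for ratio, acc in records.items() if acc >= goal]
    return max(ok) if ok else None

# Example matching the ImageNet case in the text (goal = 73.36):
# 0.4 still meets the goal, 0.5 no longer does, so 0.4 is chosen.
records = {0.3: 74.1, 0.4: 73.5, 0.5: 73.0}
print(best_pruning_ratio(records, 73.36))  # -> 0.4
```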

Now it's time to add the second task by changing line 49 in CPG_imagenet_vgg.sh:

line 49: for task_id in `seq 2 2`; do

Then we repeat the checking procedure for the second task: check checkpoints/CPG/custom_vgg/cubs_cropped/gradual_prune/record.txt and copy the checkpoint with the best pruning ratio to the parent folder. Repeat again for the third task, the fourth task, and so on.
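The per-task promotion step can be sketched as a small shell helper. TASK, RATIO, and the checkpoint file name are placeholders you fill in per task after reading record.txt; the mkdir/touch lines only build a mock layout so the sketch runs standalone (the real run creates those folders itself).

```shell
TASK=cubs_cropped   # placeholder: current task name
RATIO=0.4           # placeholder: ratio chosen from record.txt
CKPT_DIR=checkpoints/CPG/custom_vgg/$TASK/gradual_prune

# Mock layout so this sketch runs standalone; omit with real checkpoints.
mkdir -p "$CKPT_DIR/$RATIO"
touch "$CKPT_DIR/$RATIO/checkpoint-10.pth.tar"

# Promote the chosen ratio's checkpoint to the gradual_prune folder.
cp "$CKPT_DIR/$RATIO"/checkpoint-*.pth.tar "$CKPT_DIR/"
ls "$CKPT_DIR"/checkpoint-*.pth.tar
```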

CPG-ResNet50 Checkpoints on Fine-grained Dataset.

Unzip "experiment2_ResNet50_ckeckpoints.zip" into the "checkpoints" folder.

| Task | ImageNet | CUBS | Stanford Cars | Flowers | Wikiart | Sketch |
|------|----------|------|---------------|---------|---------|--------|
| Acc. | 75.81 | 83.59 | 92.80 | 96.62 | 77.15 | 80.33 |

Benchmarking

CIFAR-100 20 Tasks (datasets as in Experiment 1 above) - VGG16

| Methods | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | Avg. |
|---------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|
| PackNet | 66.4 | 80.0 | 76.2 | 78.4 | 80.0 | 79.8 | 67.8 | 61.4 | 68.8 | 77.2 | 79.0 | 59.4 | 66.4 | 57.2 | 36.0 | 54.2 | 51.6 | 58.8 | 67.8 | 83.2 | 67.5 |
| *PAE | 67.2 | 77.0 | 78.6 | 76.0 | 84.4 | 81.2 | 77.6 | 80.0 | 80.4 | 87.8 | 85.4 | 77.8 | 79.4 | 79.6 | 51.2 | 68.4 | 68.6 | 68.6 | 83.2 | 88.8 | 77.1 |
| CPG | 65.2 | 76.6 | 79.8 | 81.4 | 86.6 | 84.8 | 83.4 | 85.0 | 84.2 | 89.2 | 90.8 | 82.4 | 85.6 | 85.2 | 53.2 | 84.4 | 70.0 | 73.4 | 88.8 | 94.8 | 80.9 |

*PAE is our previous work.

Fine-grained 6 Tasks (datasets as in Experiment 2 above) - ResNet50

| Methods | ImageNet | CUBS | Stanford Cars | Flowers | Wikiart | Sketch | Model Size (MB) |
|---------|----------|------|---------------|---------|---------|--------|-----------------|
| Train from Scratch | 76.16 | 40.96 | 61.56 | 59.73 | 56.50 | 75.40 | 554 |
| Finetune | - | 82.83 | 91.83 | 96.56 | 75.60 | 80.78 | 551 |
| ProgressiveNet | 76.16 | 78.94 | 89.21 | 93.41 | 74.94 | 76.35 | 563 |
| PackNet | 75.71 | 80.41 | 86.11 | 93.04 | 69.40 | 76.17 | 115 |
| Piggyback | 76.16 | 84.59 | 89.62 | 94.77 | 71.33 | 79.91 | 121 |
| CPG | 75.81 | 83.59 | 92.80 | 96.62 | 77.15 | 80.33 | 121 |