Neural Rejuvenation @ CVPR19

Neural Rejuvenation: Improving Deep Network Training by Enhancing Computational Resource Utilization
Siyuan Qiao, Zhe Lin, Jianming Zhang, Alan Yuille
In Conference on Computer Vision and Pattern Recognition (CVPR), 2019. Oral presentation.

@inproceedings{NR,
   title = {Neural Rejuvenation: Improving Deep Network Training by
   Enhancing Computational Resource Utilization},
   author = {Qiao, Siyuan and Lin, Zhe and Zhang, Jianming and Yuille, Alan},
   booktitle = {CVPR},
   year = {2019}
}

Neural Rejuvenation is a training method for deep neural networks that focuses on improving computational resource utilization. Deep neural networks are usually over-parameterized for their tasks in order to achieve good performance, and are therefore likely to leave part of their computational resources underutilized. Since models with higher computational budgets (e.g., more parameters or more FLOPs) usually perform better, we study how to improve the resource utilization of neural networks so that their potential can be realized more fully. To this end, we propose a novel optimization method named Neural Rejuvenation. As its name suggests, our method detects dead neurons and computes resource utilization in real time, rejuvenates dead neurons through resource reallocation and reinitialization, and trains them with new training schemes.
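
The sparsity-related flags in the training commands below suggest that liveliness is tracked per channel through BatchNorm scaling factors. The following is a minimal sketch, under that assumption, of how dead neurons and the resulting utilization ratio could be counted; dead_neuron_stats and the 1e-2 threshold are illustrative names and values, not the repository's actual implementation.

import torch
import torch.nn as nn

def dead_neuron_stats(model, threshold=1e-2):
    # Count channels whose BatchNorm scale has (nearly) vanished; such channels
    # contribute almost nothing to the output and are treated as dead neurons.
    dead, total = 0, 0
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            gamma = m.weight.detach().abs()
            dead += int((gamma < threshold).sum().item())
            total += gamma.numel()
    utilization = 1.0 - dead / max(total, 1)
    return dead, total, utilization

# Usage: dead, total, util = dead_neuron_stats(model). A low utilization signals
# that resources can be reallocated from dead neurons to the surviving ones.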

Training

The code was implemented and tested with PyTorch 0.4.1.post2. If you are using another version, be aware that there may be incompatibility issues. The code is based on pytorch-classification by Wei Yang.
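
A quick runtime check (purely illustrative, not part of the repository) can surface a version mismatch early:

import torch

# Warn if the installed PyTorch differs from the version the code was tested with.
if not torch.__version__.startswith('0.4.1'):
    print('Warning: tested with PyTorch 0.4.1.post2, found {}; '
          'incompatibilities are possible.'.format(torch.__version__))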

CIFAR

VGG19 (BN)

python cifar.py -d cifar10 -a vgg19_bn --epochs 300 --schedule 150 225 --gamma 0.1 --checkpoint checkpoints/cifar10/vgg19_bn --nr-target 0.25 --nr-sparsity 1.5e-4

ResNet-164

python cifar.py -d cifar10 -a resnet --depth 164 --epochs 300 --schedule 150 225 --gamma 0.1 --wd 1e-4 --checkpoint checkpoints/cifar10/resnet-164 --nr-target 0.25 --nr-sparsity 1.5e-4

DenseNet-BC

python cifar.py -d cifar10 -a densenet --depth 100 --growthRate 40 --train-batch 64 --epochs 300 --schedule 150 225 --wd 1e-4 --gamma 0.1 --checkpoint checkpoints/cifar10/densenet-bc-100-12 --nr-target 0.25 --nr-sparsity 1.0e-4
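
In the commands above, --nr-target appears to set the target fraction of the original resources and --nr-sparsity weights a sparsity penalty. Assuming that penalty is an L1 term on BatchNorm scaling factors (consistent with the per-channel liveliness view sketched earlier), adding it to the task loss could look like the following; bn_l1_penalty is a hypothetical helper, not the repository's code.

import torch.nn as nn

def bn_l1_penalty(model, nr_sparsity=1.5e-4):
    # L1 penalty on BatchNorm scaling factors; pushes unneeded channels toward zero
    # so they can later be detected as dead and their resources reallocated.
    penalty = sum(m.weight.abs().sum()
                  for m in model.modules() if isinstance(m, nn.BatchNorm2d))
    return nr_sparsity * penalty

# During training (illustrative):
#   loss = criterion(output, target) + bn_l1_penalty(model, args.nr_sparsity)
#   loss.backward()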

ImageNet

VGG16 (BN)

python imagenet.py -a vgg_nr --data ~/dataset/ILSVRC2012/ --epochs 100 --schedule 30 60 90 --gamma 0.1 -c checkpoints/imagenet/vgg16 --nr-target 0.5 --train-batch 128 --test-batch 100 --nr-compress-only --gpu-id 0,1,2,3 --image-size 224 -j 20
python imagenet.py -a vgg_nr --data ~/dataset/ILSVRC2012/ --epochs 100 --schedule 30 60 90 --gamma 0.1 -c checkpoints/imagenet/vgg16 --nr-bn-target 0.5 --train-batch 128 --test-batch 100 --resume checkpoints/imagenet/vgg16/NR_vgg16_nr_0.5.pth.tar --gpu-id 0,1,2,3 --image-size 224 -j 20

Note that without --nr-compress-only, the program automatically continues with the second step. Splitting the run into two steps makes debugging easier.
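
A minimal sketch of that two-step control flow (hypothetical names, not the repository's actual code):

import argparse

def main():
    # Step 1 computes the rejuvenated architecture; step 2 trains it.
    parser = argparse.ArgumentParser()
    parser.add_argument('--nr-compress-only', action='store_true')
    parser.add_argument('--resume', default='', type=str)
    args = parser.parse_args()

    if not args.resume:
        print('step 1: detect dead neurons, reallocate resources, save checkpoint')
        if args.nr_compress_only:
            return  # stop after step 1; rerun with --resume to continue
    print('step 2: train the rejuvenated model from the saved architecture')

if __name__ == '__main__':
    main()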

Experimental Results

CIFAR

| Model | # Params | Dataset | nr_sparsity | Err (%) |
| --- | --- | --- | --- | --- |
| VGG-19 | 9.99M | CIFAR-10 | 1.5e-4 | 4.19 |
| VGG-19 | 10.04M | CIFAR-100 | 3e-4 | 21.53 |
| ResNet-164 | 0.88M | CIFAR-10 | 1.50e-4 | 5.13 |
| ResNet-164 | 0.92M | CIFAR-100 | 2.50e-4 | 23.84 |
| DenseNet-100-40 | 4.12M | CIFAR-10 | 1.00e-4 | 3.40 |
| DenseNet-100-40 | 4.31M | CIFAR-100 | 2.00e-4 | 18.59 |

ImageNet

| Model | # Params | FLOPs | Top-1 Err (%) | Top-5 Err (%) |
| --- | --- | --- | --- | --- |
| DenseNet-121 | 8.22M | 3.13G | 24.50 | 7.49 |
| VGG-16 | 36.4M | 23.5G | 23.11 | 6.69 |
| ResNet-18 | 11.9M | 2.16G | 28.86 | 9.93 |
| ResNet-34 | 21.9M | 3.77G | 25.77 | 8.10 |
| ResNet-50 | 26.4M | 3.90G | 22.93 | 6.47 |
| ResNet-101 | 46.6M | 6.96G | 21.22 | 5.76 |