Lightweight Deep CNN for Natural Image Matting via Similarity Preserving Knowledge Distillation

Introduction

Accepted at IEEE Signal Processing Letters 2020

<p align="center"> <img src="images/Origin.png" width="350" title="Original Image"/> <img src="images/matting.png" width="350" title="spatial+channel method"/> </p>

Official implementation of the paper "Lightweight Deep CNN for Natural Image Matting via Similarity Preserving Knowledge Distillation" [paper]

Donggeun Yoon, Jinsun Park, Donghyeon Cho

Requirements

Performance

Note

Here are the results of the DIM-student model with and without knowledge distillation on the Adobe Image Matting dataset:

| Methods | SAD | MSE | Grad | Conn |
|---|---:|---:|---:|---:|
| without KD | 121.77 | 0.058 | 75.36 | 129.55 |
| batch similarity | 124.43 | 0.055 | 74.36 | 132.25 |
| spatial similarity | 95.40 | 0.039 | 54.71 | 100.92 |
| channel similarity | 94.76 | 0.038 | 56.36 | 100.36 |
| spatial+channel | 84.37 | 0.034 | 47.63 | 89.35 |
| batch+spatial+channel | 91.30 | 0.037 | 56.20 | 97.20 |
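
As a rough illustration of the three similarity terms compared above, here is a minimal PyTorch sketch of batch-, spatial-, and channel-similarity-preserving distillation losses. It follows the general similarity-preserving KD recipe (match pairwise-similarity matrices between teacher and student features); the exact normalization and shapes are assumptions, not necessarily the paper's formulation:

```python
import torch
import torch.nn.functional as F

def batch_similarity_loss(f_s, f_t):
    # Pairwise sample similarities (B x B Gram matrices), row-normalized,
    # matched between student and teacher features.
    b = f_s.size(0)
    g_s = F.normalize(f_s.view(b, -1) @ f_s.view(b, -1).t())
    g_t = F.normalize(f_t.view(b, -1) @ f_t.view(b, -1).t())
    return F.mse_loss(g_s, g_t)

def spatial_similarity_loss(f_s, f_t):
    # Pairwise pixel similarities (HW x HW) per sample; assumes teacher
    # and student feature maps share the same spatial size.
    b, c, h, w = f_s.shape
    s = F.normalize(f_s.view(b, c, h * w), dim=1)
    t = F.normalize(f_t.view(b, f_t.size(1), h * w), dim=1)
    return F.mse_loss(torch.bmm(s.transpose(1, 2), s),
                      torch.bmm(t.transpose(1, 2), t))

def channel_similarity_loss(f_s, f_t):
    # Pairwise channel similarities (C x C) per sample; assumes matching
    # channel counts (otherwise a 1x1 conv adapter would be needed).
    b, c = f_s.shape[:2]
    s = F.normalize(f_s.view(b, c, -1), dim=2)
    t = F.normalize(f_t.view(b, f_t.size(1), -1), dim=2)
    return F.mse_loss(torch.bmm(s, s.transpose(1, 2)),
                      torch.bmm(t, t.transpose(1, 2)))
```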

Dataset

  1. Please contact the authors to request the Adobe Image Matting dataset.
  2. Download images from the COCO and Pascal VOC datasets into the folder data, then run the following command to composite images (see the compositing sketch after this list).
$ python pre_process.py
  3. Run the following command to split the composited dataset into a training set and a validation set.
$ python data_gen.py
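
The compositing in step 2 follows the standard matting equation I = αF + (1 − α)B. A minimal sketch of that step (the helper name and I/O conventions are hypothetical, not the repo's actual pre_process.py):

```python
import cv2
import numpy as np

def composite(fg, bg, alpha):
    # I = alpha * F + (1 - alpha) * B, with alpha given as a uint8 matte.
    h, w = fg.shape[:2]
    bg = cv2.resize(bg, (w, h))                  # match background to foreground
    a = alpha.astype(np.float32)[..., None] / 255.0
    return (a * fg + (1.0 - a) * bg).astype(np.uint8)
```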

Training

Download the pretrained teacher model before training and place it in the folder pretrained. Run the following command to train with batch, spatial, and channel similarity-preserving knowledge distillation.

$ python train.py --batch-size 16 --KD_type batch,spatial,channel --feature_layer [1,2,3,4] --KD_weight [1,1,1]
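
The flags above suggest a combined objective: a base matting loss plus a weighted sum of the selected KD terms over the chosen feature layers. A hypothetical sketch of that wiring, reusing the loss functions from the Performance section (the base loss and the exact combination are assumptions):

```python
import torch.nn.functional as F

KD_LOSSES = {"batch": batch_similarity_loss,      # from the sketch above
             "spatial": spatial_similarity_loss,
             "channel": channel_similarity_loss}

def total_loss(alpha_pred, alpha_gt, feats_s, feats_t, kd_types, kd_weights):
    # Base alpha-prediction loss (assumed L1 here) plus one weighted
    # similarity term per --KD_type, summed over the --feature_layer features.
    loss = F.l1_loss(alpha_pred, alpha_gt)
    for kd_type, weight in zip(kd_types, kd_weights):
        for f_s, f_t in zip(feats_s, feats_t):
            loss = loss + weight * KD_LOSSES[kd_type](f_s, f_t)
    return loss
```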

Testing

Run the following command to evaluate BEST_checkpoint.tar.

$ python test.py
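
Evaluation uses the four standard matting metrics from the Performance table (SAD, MSE, Grad, Conn). A minimal sketch of the first two under common conventions (alpha in [0, 1], errors restricted to the unknown trimap region, SAD reported in thousands); these conventions are assumptions, not necessarily test.py's exact protocol:

```python
import numpy as np

def sad(alpha_pred, alpha_gt, trimap):
    # Sum of absolute differences over the unknown region, reported / 1000.
    unknown = trimap == 128
    return np.abs(alpha_pred - alpha_gt)[unknown].sum() / 1000.0

def mse(alpha_pred, alpha_gt, trimap):
    # Mean squared error over the unknown region (trimap value 128).
    unknown = trimap == 128
    return ((alpha_pred - alpha_gt)[unknown] ** 2).mean()
```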

Acknowledgement

The code is built on Deep Image Matting (PyTorch). Thanks to the authors for sharing their code.

Citation

@ARTICLE{9269400,
  author={D. {Yoon} and J. {Park} and D. {Cho}},
  journal={IEEE Signal Processing Letters}, 
  title={Lightweight Deep CNN for Natural Image Matting via Similarity-Preserving Knowledge Distillation}, 
  year={2020}
}