Network Dissection

News (Jan. 18, 2018)!

We have released a lightweight, portable version of Network Dissection in PyTorch at NetDissect-Lite. It is much faster than this original version, and the code structure has been cleaned up, with no complex shell commands. Dissection takes about 30 minutes for a resnet18 model and about 2 hours for a densenet161. If you have questions, please open an issue at NetDissect-Lite.
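As a rough orientation, a typical NetDissect-Lite run looks like the sketch below; the entry point and settings file named here are assumptions, so consult the NetDissect-Lite README for the exact steps.

    git clone https://github.com/CSAILVision/NetDissect-Lite.git
    cd NetDissect-Lite
    # Assumed workflow: pick the model and layers to probe in settings.py,
    # then launch the dissection. Verify against the Lite README.
    python main.py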

Introduction

This repository contains the demo code for the CVPR'17 paper Network Dissection: Quantifying Interpretability of Deep Visual Representations. You can use this code with vanilla Caffe, with matcaffe and pycaffe compiled. We also provide a PyTorch wrapper so that NetDissect can probe networks in PyTorch format. Dissection results for several networks are available at the project page.

This code includes scripts to download the Broden dataset and several pretrained CNN models, to run network dissection on networks in Caffe or PyTorch format, and to generate an HTML report summarizing the semantics of each unit of a probed layer.

Download

    git clone https://github.com/CSAILVision/NetDissect.git
    cd NetDissect
    script/dlbroden_227.sh
    script/dlzoo_example.sh

Note that you can run script/dlbroden.sh to download the Broden dataset with images at all three resolutions (227x227, 224x224, 384x384), or run script/dlzoo.sh to download more CNN models, as shown below. AlexNet models take 227x227 image input, while VGG, ResNet, and GoogLeNet take 224x224 image input.
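For example, to fetch everything at once:

    script/dlbroden.sh   # Broden dataset at 227x227, 224x224, and 384x384
    script/dlzoo.sh      # additional pretrained CNN models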

Run in Caffe

    script/rundissect.sh --model caffe_reference_places365 --layers "conv5" --dataset dataset/broden1_227 --resolution 227
    script/rundissect.sh --model caffe_reference_imagenet --layers "conv3 conv4 conv5" --dataset dataset/broden1_227 --resolution 227
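The 224x224 networks are dissected the same way, pointing at the 224-resolution Broden. The model and layer names below are hypothetical; substitute a model actually downloaded via script/dlzoo.sh.

    # Hypothetical model name; use one fetched by script/dlzoo.sh, together
    # with dataset/broden1_224 from script/dlbroden.sh.
    script/rundissect.sh --model vgg16_places365 --layers "conv5_3" --dataset dataset/broden1_224 --resolution 224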

Run in PyTorch

    script/rundissect_pytorch.sh
    script/rundissect_pytorch_external.sh
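These wrapper scripts run the full pipeline on PyTorch models. Under the hood, probing a network in PyTorch amounts to recording a chosen layer's activations over the dataset; the following is a minimal, self-contained sketch of that technique using a forward hook, not the repository's actual wrapper API.

    # Illustrative sketch of activation probing with a forward hook; this
    # shows the general technique, not this repository's wrapper code.
    import torch
    import torchvision.models as models

    model = models.resnet18()  # untrained weights; enough to show the mechanics
    model.eval()

    activations = {}

    def save_activation(name):
        def hook(module, inputs, output):
            activations[name] = output.detach()
        return hook

    # Register the hook on the layer to be dissected.
    model.layer4.register_forward_hook(save_activation("layer4"))

    with torch.no_grad():
        model(torch.zeros(1, 3, 224, 224))  # one dummy 224x224 input

    print(activations["layer4"].shape)  # torch.Size([1, 512, 7, 7])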

Report

    dissection/caffe_reference_places365/html/conv5.html
    dissection/caffe_reference_places365/html/image/conv5-bargraph.svg
    dissection/caffe_reference_places365/html/image/conv5-0[###].png    
    dissection/caffe_reference_places365/conv5-result.csv

These are, respectively, the HTML-formatted report, the semantics of the units of the layer summarized as a bar graph, visualizations of all the units of the layer (using zero-indexed unit numbers), and a CSV file containing raw scores of the top matching semantic concepts in each category for each unit of the layer.
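If you want to analyze the raw scores programmatically, the CSV can be read with the standard library. A minimal sketch follows; the column names are whatever appears in the file's own header row, so nothing is assumed about them here.

    import csv

    # Minimal sketch: read the per-unit results CSV produced by a dissection
    # run. Column names come from the file's header row.
    with open("dissection/caffe_reference_places365/conv5-result.csv") as f:
        reader = csv.DictReader(f)
        for row in reader:
            print(row)  # one unit's top-matching concept scores per category
            break       # show just the first unit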

Reference

If you find the code useful, please cite this paper:

@inproceedings{netdissect2017,
  title={Network Dissection: Quantifying Interpretability of Deep Visual Representations},
  author={Bau, David and Zhou, Bolei and Khosla, Aditya and Oliva, Aude and Torralba, Antonio},
  booktitle={Computer Vision and Pattern Recognition},
  year={2017}
}