# MOOD: Multi-level Out-of-distribution Detection


This is a PyTorch implementation for detecting out-of-distribution examples in neural networks. The method is described in the paper MOOD: Multi-level Out-of-distribution Detection by Ziqian Lin*, Sreya Dutta Roy*, and Yixuan Li (*equal contribution). We propose multi-level out-of-distribution detection (MOOD), a framework that exploits intermediate classifier outputs for dynamic and efficient OOD inference: easy OOD examples can be detected at early exits without propagating to deeper layers.

<p align="center"><img src="./figs/architecture.png" width="600"></p>

MOOD achieves up to 71.05% computational reduction at inference time while maintaining competitive OOD detection performance.

<p align="center"><img src="./figs/10Results1.png" width="800"></p>
<p align="center"><img src="./figs/10Results2.png" width="800"></p>
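To make the dynamic inference concrete, here is a minimal sketch of the idea, assuming an MSDNet-style model whose forward pass returns one logit tensor per exit. The complexity-to-exit routing below (via PNG-compressed size, cf. the `-mc png` flag) and the function names are illustrative assumptions, not the exact interface of `main.py`:

```python
import io

import torch
from PIL import Image


def png_complexity(img: Image.Image) -> int:
    """Proxy for input complexity: byte size of the losslessly PNG-compressed image."""
    buf = io.BytesIO()
    img.save(buf, format="PNG")
    return len(buf.getvalue())


def mood_energy_score(model, x, img, thresholds):
    """Route one input to an intermediate exit chosen by its complexity,
    then compute an energy-based OOD score at that exit.

    Assumptions: `model(x)` returns a list of logit tensors, one per exit
    (MSDNet-style), and `thresholds` is an ascending list of complexity
    cut-offs whose last entry is float("inf").
    """
    complexity = png_complexity(img)
    # Easy (low-complexity) inputs exit early; complex ones go deeper.
    exit_idx = next(i for i, t in enumerate(thresholds) if complexity <= t)
    logits = model(x)[exit_idx]
    # Negative free energy: higher means more in-distribution.
    return torch.logsumexp(logits, dim=1)
```

In this sketch, a low-complexity input is scored at an early classifier, so most of the network is never evaluated for easy OOD examples.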

## Experimental Results

We use the MSDNet model for our experiments; the PyTorch implementation of MSDNet is provided by Hao Li. The experimental results are shown in the figures above, and the definition of each metric can be found in the paper.
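For reference, the two reported metrics (AUROC and FPR95) can be computed from OOD scores as follows. This is a standard sketch using scikit-learn, which is an assumption of this write-up rather than a stated dependency of the repo:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve


def auroc_and_fpr95(scores_in: np.ndarray, scores_out: np.ndarray):
    """AUROC and FPR at 95% TPR, with in-distribution as the positive class.

    Both inputs are 1-D arrays of OOD scores where higher means
    "more in-distribution" (e.g., negative energy).
    """
    labels = np.concatenate([np.ones_like(scores_in), np.zeros_like(scores_out)])
    scores = np.concatenate([scores_in, scores_out])
    auroc = roc_auc_score(labels, scores)
    fpr, tpr, _ = roc_curve(labels, scores)
    # FPR95: false-positive rate at the first threshold reaching 95% TPR.
    fpr95 = fpr[np.searchsorted(tpr, 0.95)]
    return auroc, fpr95
```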

## Pre-trained Models

We provide two pre-trained MSDNet networks, trained on CIFAR-10 and CIFAR-100 respectively. Please put the unzipped files in the folder `/trained_model`. The test accuracies are given by:

Architecture | CIFAR-10 | CIFAR-100
-------------|----------|----------
MSDNet       | 94.09    | 75.43
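A minimal sketch of loading such a checkpoint; the filename below is a placeholder, not the actual name inside the archives:

```python
import torch

# Hypothetical filename; use the actual file from the unzipped archive.
ckpt = torch.load("trained_model/msdnet_cifar10.pth", map_location="cpu")
# Checkpoints are commonly either a bare state_dict or a dict wrapping one.
state_dict = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt
# model.load_state_dict(state_dict)  # `model` is an MSDNet built from this repo's code
```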

## Dataset

### Description

We use CIFAR-10 and CIFAR-100 as in-distribution datasets, which are common benchmarks for OOD detection. For the OOD detection evaluation, we consider a total of 9 datasets spanning a diverse spectrum of image complexity. In order of increasing complexity, these are MNIST, K-MNIST, Fashion-MNIST, LSUN (crop), SVHN, Textures, Places365, iSUN, and LSUN (resize). All images are resized to 32×32 before being fed into the network. For each OOD dataset, we evaluate on the entire test split.
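For example, a torchvision preprocessing pipeline matching this setup could look as follows; the normalization statistics are the commonly used CIFAR-10 values and are an assumption here, not taken from the repo:

```python
import torchvision.transforms as transforms

# Resize every OOD image to 32x32 to match the CIFAR-trained network's input.
ood_transform = transforms.Compose([
    transforms.Resize((32, 32)),
    transforms.ToTensor(),
    # Assumed CIFAR-10 channel statistics; check the repo's loaders for exact values.
    transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2470, 0.2435, 0.2616)),
])
```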

### Downloading Out-of-Distribution Datasets

We provide download links for 6 of the out-of-distribution datasets; please put the unzipped files in the folder `/data`. The remaining datasets are downloaded automatically by the code, since they are included in `torchvision.datasets`.

Datasets       | Download Through
---------------|------------------------
CIFAR-10       | `torchvision.datasets`
CIFAR-100      | `torchvision.datasets`
MNIST          | `torchvision.datasets`
K-MNIST        | `torchvision.datasets`
Fashion-MNIST  | `torchvision.datasets`
LSUN (crop)    | Google Drive
SVHN           | Google Drive
Textures       | Google Drive
Places365      | Google Drive
iSUN           | Google Drive
LSUN (resize)  | Google Drive
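As a sketch, the torchvision-hosted datasets can be fetched automatically like this (the `root` directory matches the `/data` folder above):

```python
from torchvision import datasets

root = "data"
# In-distribution test sets.
cifar10  = datasets.CIFAR10(root, train=False, download=True)
cifar100 = datasets.CIFAR100(root, train=False, download=True)
# Torchvision-hosted out-of-distribution test sets.
mnist  = datasets.MNIST(root, train=False, download=True)
kmnist = datasets.KMNIST(root, train=False, download=True)
fmnist = datasets.FashionMNIST(root, train=False, download=True)
```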

## Running the code

### Dependencies

### Running

Here is an example reproducing the results of the MOOD method, with MSDNet trained on CIFAR-10 and the 9 out-of-distribution datasets above used for evaluation. In the root directory, run:

```
python main.py -ms energy -ml 5 -ma 1 -mc png
```
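The `-ms energy` option selects an energy-based OOD score computed from the classifier logits. A minimal sketch of that score (the function name and the temperature default are assumptions; see `main.py` for the actual implementation):

```python
import torch


def energy_score(logits: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    """Energy-based OOD score: T * logsumexp(f(x) / T) over the class logits.

    Higher values indicate in-distribution inputs; MOOD evaluates this at
    the selected intermediate exit rather than only at the final classifier.
    """
    return temperature * torch.logsumexp(logits / temperature, dim=1)
```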

Note: please choose arguments according to the arguments table.

*Figure: table of the available arguments (`-ms`, `-ml`, `-ma`, `-mc`).*

### Outputs

Here is an example of the output:


```
********** auroc result  cifar10  with  energy  **********
                         auroc                  fpr95    
OOD dataset      exit@last    MOOD      exit@last    MOOD
mnist             0.9903     0.9979      0.0413     0.0036
kmnist            0.9844     0.9986      0.0699     0.0033
fasionmnist       0.9923     0.9991      0.0248     0.0011
lsun              0.9873     0.9923      0.0591     0.0320
svhn              0.9282     0.9649      0.3409     0.1716
dtd               0.8229     0.8329      0.5537     0.5603
place365          0.8609     0.8674      0.4568     0.4687
isun              0.9384     0.9296      0.3179     0.3882
lsunR             0.9412     0.9325      0.2911     0.3616
average           0.9384     0.9461      0.2395     0.2212
```

## Citation

```bibtex
@inproceedings{lin2021mood,
  author    = {Lin, Ziqian and Roy, Sreya Dutta and Li, Yixuan},
  title     = {MOOD: Multi-level Out-of-distribution Detection},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year      = {2021}
}
```