# Towards OOD Detection in Fine-grained Environments

This repository contains the code necessary to replicate the results of our WACV'23 paper:

**Mixture Outlier Exposure: Towards Out-of-Distribution Detection in Fine-grained Environments**

Jingyang Zhang, Nathan Inkawhich, Randolph Linderman, Yiran Chen, Hai Li @ Duke CEI Lab

Check out our paper here!
## Overview

### Motivation
<p align="center"> <img src='/figures/overview.png' width='870'> </p> <p> <em>Left: A comparison of OOD detection in coarse- and fine-grained environments. Intuitively, fine-grained detection is much more challenging. Right: A <b>conceptual</b> illustration of MixOE. A standard model with no OOD considerations tends to be over-confident on OOD samples. OE is able to calibrate the prediction confidence on coarse-grained OOD, but its outputs on fine-grained OOD are uncontrolled (marked by "?"). MixOE aims for a smooth decay of the confidence as inputs transition from ID to OOD, and thus enables detection of both coarse- and fine-grained OOD.</em> </p>

The capability of detecting out-of-distribution (OOD) samples, i.e., inputs that do not belong to any of a DNN's known classes, at inference time is crucial for reliable operation in the wild. Existing works typically evaluate on coarse-grained benchmarks (e.g., CIFAR-10 vs. SVHN/LSUN), which fail to approximate many real-world scenarios that inherently have fine-grained attributes (e.g., bird species recognition, medical image classification). In such fine-grained environments, OOD samples can be highly granular w.r.t. the in-distribution (ID) data, which intuitively makes them very difficult to identify. Unfortunately, OOD detection in fine-grained environments remains largely underexplored.
### Our contributions
- We construct four large-scale, fine-grained test environments for OOD detection. The test benches are generated with a holdout-class method applied to public fine-grained classification datasets. Detailed instructions for preparing the datasets and reproducing our test environments are provided below.
- Through an initial evaluation, we find that existing methods struggle to detect fine-grained novelties, underperforming even the simple MSP baseline. Methods that explicitly incorporate auxiliary outlier data (e.g., Outlier Exposure) do not help much either. We further conduct an analysis to explain why this is the case.
- We propose Mixture Outlier Exposure (MixOE), a novel method for OOD detection in fine-grained environments. MixOE consistently improves the detection rates against both fine- and coarse-grained OOD samples across all four test benches.
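The core idea of MixOE sketched above (smoothly decaying confidence as inputs move from ID to OOD) can be illustrated with a mixup-style training objective that blends an ID image with an auxiliary outlier and blends the target label with a uniform distribution accordingly. This is a minimal sketch only; the function name, signature, and defaults are hypothetical and do not reflect the repo's actual implementation.

```python
import torch
import torch.nn.functional as F

def mixoe_loss(model, x_id, y_id, x_oe, num_classes, alpha=1.0):
    """Hypothetical sketch of a MixOE-style objective.

    Mixes each in-distribution (ID) input with an auxiliary outlier and
    trains toward a correspondingly mixed target: the one-hot ID label
    blended with a uniform distribution over the known classes.
    """
    # Sample a mixing coefficient from a Beta distribution (as in mixup)
    lam = torch.distributions.Beta(alpha, alpha).sample().item()

    # Pixel-level mixing between ID samples and outliers
    x_mix = lam * x_id + (1.0 - lam) * x_oe

    logits = model(x_mix)
    log_probs = F.log_softmax(logits, dim=1)

    # Target decays from the one-hot label toward uniform as lam shrinks
    one_hot = F.one_hot(y_id, num_classes).float()
    uniform = torch.full_like(one_hot, 1.0 / num_classes)
    target = lam * one_hot + (1.0 - lam) * uniform

    # Cross-entropy with soft targets
    return -(target * log_probs).sum(dim=1).mean()
```

The soft target is what encourages the smooth confidence decay: the closer a mixed input is to the outlier, the closer the supervision is to a uniform (maximally uncertain) distribution.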
See our paper for details!
## Get started

### Environment
Follow the commands below to set up the environment.
- Clone the repo:

  ```shell
  git clone https://github.com/zjysteven/MixOE.git
  ```

- Create a conda environment and install the dependencies:

  ```shell
  conda create -n mixoe python=3.8
  conda activate mixoe
  python -m pip install -r requirements.txt
  ```
### Dataset
Please see detailed instructions here.
## Reproducing experiments
We provide sample scripts `train.sh` and `eval.sh` in the `scripts/` folder showcasing how to train/evaluate our method and the other considered methods. Note that all OE-based methods (including MixOE) fine-tune a trained baseline model, so to run OE/MixOE you first need to do standard training with `train_baseline.py`. For everyone's convenience, we have uploaded the trained baseline weights here.
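At evaluation time, the simple MSP baseline mentioned above scores each input by its maximum softmax probability, with lower scores flagged as OOD. The sketch below is illustrative only; the function name and interface are assumptions and do not mirror the repo's `eval.sh` logic.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def msp_score(model, x):
    """Maximum Softmax Probability (MSP) OOD score.

    Higher score -> more likely in-distribution. A sample is flagged
    as OOD when its score falls below a chosen threshold.
    Illustrative sketch; not the repo's actual evaluation code.
    """
    probs = F.softmax(model(x), dim=1)
    return probs.max(dim=1).values
```

Detection quality is then typically summarized with threshold-free metrics such as AUROC over ID and OOD score distributions.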
## Reference

If you find our work/code helpful, please consider citing our work:
```bibtex
@InProceedings{Zhang_2023_WACV,
    author    = {Zhang, Jingyang and Inkawhich, Nathan and Linderman, Randolph and Chen, Yiran and Li, Hai},
    title     = {Mixture Outlier Exposure: Towards Out-of-Distribution Detection in Fine-Grained Environments},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2023},
    pages     = {5531-5540}
}
```