Zero-Shot In-Distribution Detection in Multi-Object Settings Using Vision-Language Foundation Models

This codebase provides a PyTorch implementation of the paper "Zero-Shot In-Distribution Detection in Multi-Object Settings Using Vision-Language Foundation Models".

Novel Task: Zero-Shot ID Detection

(Figure: illustration of the zero-shot ID detection task)

Abstract

Extracting in-distribution (ID) images from noisy images scraped from the Internet is an important preprocessing step for constructing datasets, one that has traditionally been done manually. Automating this preprocessing with deep learning techniques presents two key challenges. First, images should be collected using only the name of the ID class, without training on the ID data. Second, as illustrated by the motivation behind the creation of COCO, it is crucial to identify images containing not only ID objects but also both ID and out-of-distribution (OOD) objects as ID images, in order to create robust recognizers. In this paper, we propose a novel problem setting called zero-shot in-distribution (ID) detection, where we identify images containing ID objects as ID images (even if they also contain OOD objects) and images lacking ID objects as OOD images, without any training. To solve this problem, we present a simple yet effective approach, Global-Local Maximum Concept Matching (GL-MCM), based on both global and local visual-text alignments of CLIP features. Extensive experiments demonstrate that GL-MCM outperforms comparison methods on both multi-object datasets and single-object ImageNet benchmarks.

Illustration

Global-Local Maximum Concept Matching (GL-MCM)

(Figure: overview of the GL-MCM architecture)
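
To make the figure concrete, below is a minimal sketch of how a global and a local maximum concept matching score could be combined. It assumes the CLIP global image feature, the patch-level (local) image features, and the ID text features have already been extracted and L2-normalized; the names gl_mcm_score, global_feat, local_feats, text_feats, temperature, and lambda_local are ours for illustration, not the repository's API (see eval_id_detection.py for the actual implementation).

```python
import torch.nn.functional as F

def gl_mcm_score(global_feat, local_feats, text_feats,
                 temperature=0.01, lambda_local=1.0):
    # global_feat: (D,)   L2-normalized global CLIP image embedding
    # local_feats: (L, D) L2-normalized patch-level CLIP image embeddings
    # text_feats:  (K, D) L2-normalized CLIP text embeddings, one per ID class prompt

    # Global term: softmax over ID classes of the global image-text similarities, then max.
    global_sims = global_feat @ text_feats.T                  # (K,)
    global_score = F.softmax(global_sims / temperature, dim=-1).max()

    # Local term: per-patch softmax over ID classes, then max over patches and classes.
    local_sims = local_feats @ text_feats.T                   # (L, K)
    local_score = F.softmax(local_sims / temperature, dim=-1).max()

    # A higher combined score indicates an ID image.
    return global_score + lambda_local * local_score
```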

Setup

Required Packages

We run all experiments on a single NVIDIA A100 (or V100) GPU. We follow the environment used in MCM.

Our experiments are conducted with Python 3.8 and PyTorch 1.10. In addition, the following commonly used packages are required:

$ pip install ftfy regex tqdm scipy matplotlib seaborn scikit-learn
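
A quick check (ours, not part of the repository) that the environment roughly matches the versions above:

```python
import sys
import torch

print(sys.version)                # expected: around Python 3.8
print(torch.__version__)          # expected: around 1.10
print(torch.cuda.is_available())  # True if a GPU (e.g., A100/V100) is visible
```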

Data Preparation

In-distribution Datasets

We use the following datasets as ID:

We provide our curated ID and OOD datasets via this url.
For ImageNet-1k, we use the validation partition of the officially provided dataset.
After downloading, please place the datasets under ./datasets.


Out-of-Distribution Datasets

We use the large-scale OOD datasets iNaturalist, SUN, Places, and Texture curated by Huang et al. 2021. We follow the instructions in this repository to download the subsampled datasets. For ImageNet-22K, we use this url from the repository curated by Wang et al. 2021.

In addition, we also use ood_coco and ood_voc, available via this url.

The overall file structure is as follows:

GL-MCM
|-- datasets
    |-- ImageNet
    |-- ID_COCO_single
    |-- ID_VOC_single
    |-- ID_COCO_multi
    |-- OOD_COCO
    |-- OOD_VOC
    |-- iNaturalist
    |-- SUN
    |-- Places
    |-- Texture
    |-- ImageNet-22K
    ...
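
Before running the evaluation scripts, a small sanity check (ours, not part of the repository) can confirm that the folders from the tree above are in place:

```python
from pathlib import Path

# Folder names taken from the directory tree above; adjust if your layout differs.
EXPECTED = [
    "ImageNet", "ID_COCO_single", "ID_VOC_single", "ID_COCO_multi",
    "OOD_COCO", "OOD_VOC", "iNaturalist", "SUN", "Places",
    "Texture", "ImageNet-22K",
]

root = Path("./datasets")
missing = [name for name in EXPECTED if not (root / name).is_dir()]
print("All dataset folders found." if not missing else f"Missing: {missing}")
```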

Quick Start

The main script for evaluating ID detection performance is eval_id_detection.py. Here is the list of arguments:

The ID detection results will be generated and stored in results/in_dataset/score/CLIP_ckpt_name/.

We provide bash scripts:

sh scripts/eval_coco_single.sh

Zero-shot OOD Detection

GL-MCM was originally proposed for zero-shot ID detection, but it is also applicable to zero-shot OOD detection.
When applying it to zero-shot OOD detection, we recommend setting the value of lambda_local to 0.5.
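
As a usage note on the sketch given earlier (again ours, not the repository's code), zero-shot OOD detection uses the same score with the local term down-weighted: images whose score falls below a threshold are flagged as OOD. The random features and the threshold value below are placeholders.

```python
import torch
import torch.nn.functional as F

# Placeholder features standing in for real CLIP outputs (reuses gl_mcm_score from the sketch above).
D, L, K = 512, 196, 20                        # feature dim, number of patches, number of ID classes
global_feat = F.normalize(torch.randn(D), dim=-1)
local_feats = F.normalize(torch.randn(L, D), dim=-1)
text_feats  = F.normalize(torch.randn(K, D), dim=-1)

# Down-weight the local term for zero-shot OOD detection, as recommended above.
score = gl_mcm_score(global_feat, local_feats, text_feats, lambda_local=0.5)
is_ood = score < 0.5                          # in practice the threshold is chosen on held-out data
print("OOD" if is_ood else "ID", float(score))
```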

We provide bash scripts:

sh scripts/eval_imagenet_ood_detection.sh

The comparison results are as follows:

<table>
  <tr align="center">
    <td rowspan="2">Methods</td>
    <td colspan="2">iNaturalist</td>
    <td colspan="2">SUN</td>
    <td colspan="2">Places</td>
    <td colspan="2">Textures</td>
    <td colspan="2">Avg</td>
  </tr>
  <tr align="center">
    <td>FPR95</td> <td>AUROC</td>
    <td>FPR95</td> <td>AUROC</td>
    <td>FPR95</td> <td>AUROC</td>
    <td>FPR95</td> <td>AUROC</td>
    <td>FPR95</td> <td>AUROC</td>
  </tr>
  <tr align="center">
    <td colspan="11">ViT-B-16</td>
  </tr>
  <tr align="center">
    <td>GL_MCM (lambda=1.0)</td>
    <td>15.18</td> <td>96.71</td>
    <td>30.42</td> <td>93.09</td>
    <td>38.85</td> <td>89.90</td>
    <td>57.93</td> <td>83.63</td>
    <td>35.47</td> <td>90.83</td>
  </tr>
  <tr align="center">
    <td>GL_MCM (lambda=0.5)</td>
    <td>17.46</td> <td>96.44</td>
    <td>30.73</td> <td>93.45</td>
    <td>37.65</td> <td>90.64</td>
    <td>55.23</td> <td>85.54</td>
    <td>35.27</td> <td>91.51</td>
  </tr>
</table>

Acknowledgement

This codebase is based on the implementation of MCM.

Citation

If you find our work interesting or use our code/models, please cite:

@article{miyai2023zero,
  title={Zero-Shot In-Distribution Detection in Multi-Object Settings Using Vision-Language Foundation Models},
  author={Miyai, Atsuyuki and Yu, Qing and Irie, Go and Aizawa, Kiyoharu},
  journal={arXiv preprint arXiv:2304.04521},
  year={2023}
}