Compositional Experts (ComEx) for GNCD

Code for the CVPR 2022 paper:

Title: Divide and Conquer: Compositional Experts for Generalized Novel Class Discovery<br> Authors: Muli Yang, Yuehua Zhu, Jiaping Yu, Aming Wu, and Cheng Deng<br> Paper: CVF Open Access

Introduction

Abstract: In response to the explosively-increasing requirement of annotated data, Novel Class Discovery (NCD) has emerged as a promising alternative to automatically recognize unknown classes without any annotation. To this end, a model makes use of a base set to learn basic semantic discriminability that can be transferred to recognize novel classes. Most existing works handle the base and novel sets using separate objectives within a two-stage training paradigm. Despite showing competitive performance on novel classes, they fail to generalize to recognizing samples from both base and novel sets. In this paper, we focus on this generalized setting of NCD (GNCD), and propose to divide and conquer it with two groups of Compositional Experts (ComEx). Each group of experts is designed to characterize the whole dataset in a comprehensive yet complementary fashion. With their union, we can solve GNCD in an efficient end-to-end manner. We further look into the drawback in current NCD methods, and propose to strengthen ComEx with global-to-local and local-to-local regularization. ComEx is evaluated on four popular benchmarks, showing clear superiority towards the goal of GNCD. <br>

<p align="center"> <img src="./assets/comex-teaser.png" width="100%"/> <br /> <em> The task setting of NCD, and the conceptual motivation of our ComEx. </em> </p>

Setup
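
The original setup instructions are not preserved in this copy. As a rough, non-authoritative sketch: the commands below use PyTorch-Lightning-style flags (--gpus, --precision, --max_epochs), so an environment along the following lines is presumably needed; the exact package list and versions should be taken from the repository's requirements file.

# All names and versions below are assumptions, not taken from the repository.
conda create -n comex python=3.8 -y
conda activate comex
pip install torch torchvision pytorch-lightning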

Commands

Note that there is a supervised pretraining stage before the discovery phase. If you would like to train your model from scratch, run the following command to obtain a checkpoint pretrained on the base classes. By default, the checkpoint is saved in the checkpoints folder.

CUDA_VISIBLE_DEVICES=0 python main_pretrain.py \
 --dataset CIFAR10 \
 --data_dir PATH/TO/DATASET \
 --gpus 1 \
 --precision 16 \
 --max_epochs 200 \
 --num_base_classes 5 \
 --num_novel_classes 5 \
 --comment 5_5

After that, you can start novel class discovery using your pretrained checkpoint (or a released checkpoint, if one is provided) by passing its path to --pretrained:

CUDA_VISIBLE_DEVICES=0 python main_discover.py \
 --dataset CIFAR10 \
 --data_dir PATH/TO/DATASET \
 --gpus 1 \
 --precision 16 \
 --max_epochs 200 \
 --num_base_classes 5 \
 --num_novel_classes 5 \
 --queue_size 500 \
 --sharp 0.5 \
 --batch_head \
 --batch_head_multi_novel \
 --batch_head_reg 1.0 \
 --pretrained PATH/TO/CHECKPOINTS/pretrain-resnet18-CIFAR10.cp \
 --comment 5_5

Tips for running on different datasets

When running discovery, adjust --dataset, --data_dir, --num_base_classes, and --num_novel_classes to match the target benchmark (see the sketch below).
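
For example, a CIFAR100-20 discovery run might look like the following. This is only a sketch by analogy with the CIFAR10 command above: the 80/20 base/novel split is the standard CIFAR100-20 split in the NCD literature, and the dataset name, checkpoint filename, and remaining flag values are assumptions to be checked against the paper and the code.

CUDA_VISIBLE_DEVICES=0 python main_discover.py \
 --dataset CIFAR100 \
 --data_dir PATH/TO/DATASET \
 --gpus 1 \
 --precision 16 \
 --max_epochs 200 \
 --num_base_classes 80 \
 --num_novel_classes 20 \
 --queue_size 500 \
 --sharp 0.5 \
 --batch_head \
 --batch_head_multi_novel \
 --batch_head_reg 1.0 \
 --pretrained PATH/TO/CHECKPOINTS/pretrain-resnet18-CIFAR100.cp \
 --comment 80_20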

For running on ImageNet, please use the following commands:

CUDA_VISIBLE_DEVICES=0 python main_pretrain.py \
 --dataset ImageNet \
 --data_dir PATH/TO/IMAGENET \
 --gpus 1 \
 --precision 16 \
 --max_epochs 100 \
 --warmup_epochs 5 \
 --num_base_classes 882 \
 --num_novel_classes 30 \
 --comment 882_30

CUDA_VISIBLE_DEVICES=0 python main_discover.py \
 --dataset ImageNet \
 --data_dir PATH/TO/IMAGENET \
 --imagenet_split A \
 --gpus 1 \
 --precision 16  \
 --max_epochs 60 \
 --warmup_epochs 5 \
 --num_base_classes 882 \
 --num_novel_classes 30 \
 --queue_size 500 \
 --sharp 0.5 \
 --pretrained PATH/TO/CHECKPOINTS/pretrain-resnet18-ImageNet.cp \
 --comment 882_30-A
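
The --imagenet_split flag selects which ImageNet novel-class split to discover; the NCD literature defines three such splits (A, B, and C), each with 30 novel classes. To run another split, presumably only the split flag and the comment need to change while the same pretrained checkpoint is reused; below is a sketch for split B, not verified against the code.

CUDA_VISIBLE_DEVICES=0 python main_discover.py \
 --dataset ImageNet \
 --data_dir PATH/TO/IMAGENET \
 --imagenet_split B \
 --gpus 1 \
 --precision 16 \
 --max_epochs 60 \
 --warmup_epochs 5 \
 --num_base_classes 882 \
 --num_novel_classes 30 \
 --queue_size 500 \
 --sharp 0.5 \
 --pretrained PATH/TO/CHECKPOINTS/pretrain-resnet18-ImageNet.cp \
 --comment 882_30-B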

TODO

Acknowledgements

Our work is inspired by many recent efforts in various fields. Many thanks for their great work!

Citations

If you find our work helpful, please consider citing our paper:

@InProceedings{yang2022divide,
    author    = {Yang, Muli and Zhu, Yuehua and Yu, Jiaping and Wu, Aming and Deng, Cheng},
    title     = {Divide and Conquer: Compositional Experts for Generalized Novel Class Discovery},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022},
    pages     = {14268-14277}
}

If you use our code, please also consider citing the UNO paper:

@InProceedings{Fini_2021_ICCV,
    author    = {Fini, Enrico and Sangineto, Enver and Lathuili\`ere, St\'ephane and Zhong, Zhun and Nabi, Moin and Ricci, Elisa},
    title     = {A Unified Objective for Novel Class Discovery},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {9284-9292}
}