Continual Learning for Visual Search with Backward Consistent Feature Embedding
PyTorch implementation for CVS (CVPR 2022).
Timmy S. T. Wan, Jun-Cheng Chen, Tzer-Yi Wu, Chu-Song Chen
Abstract
In visual search, the gallery set could be incrementally growing and added to the database in practice. However, existing methods rely on the model trained on the entire dataset, ignoring the continual updating of the model. Besides, as the model updates, the new model must re-extract features for the entire gallery set to maintain compatible feature space, imposing a high computational cost for a large gallery set. To address the issues of long-term visual search, we introduce a continual learning (CL) approach that can handle the incrementally growing gallery set with backward embedding consistency. We enforce the losses of inter-session data coherence, neighbor-session model coherence, and intra-session discrimination to conduct a continual learner. In addition to the disjoint setup, our CL solution also tackles the situation of increasingly adding new classes for the blurry boundary without assuming all categories known in the beginning and during model update. To our knowledge, this is the first CL method both tackling the issue of backward-consistent feature embedding and allowing novel classes to occur in the new sessions. Extensive experiments on various benchmarks show the efficacy of our approach under a wide range of setups.
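The core idea of backward-consistent embedding — keeping the new model's features for old data compatible with features the old model already extracted for the gallery — can be illustrated with a toy penalty. This is a conceptual NumPy sketch under our own assumptions, not the paper's actual losses (the method combines inter-session data coherence, neighbor-session model coherence, and intra-session discrimination); the function name is made up for illustration.

```python
import numpy as np

def consistency_loss(new_feats, old_feats):
    # Inter-session coherence (illustrative): penalize the squared
    # distance between the new model's embeddings of previously seen
    # data and the frozen old model's embeddings, so gallery features
    # extracted in earlier sessions stay searchable without
    # re-extraction.
    return float(np.mean(np.sum((new_feats - old_feats) ** 2, axis=1)))

# Toy 2-D embeddings standing in for old-session gallery features.
old_feats = np.random.default_rng(0).normal(size=(8, 2))

# A perfectly backward-consistent new model reproduces them exactly.
assert consistency_loss(old_feats.copy(), old_feats) == 0.0

# A drifted model incurs a positive penalty (0.1**2 per dimension).
print(round(consistency_loss(old_feats + 0.1, old_feats), 4))  # 0.02
```

A real training objective would add a discrimination term on the current session's labels; this sketch only shows why holding the embedding still on old data keeps the gallery usable.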
Install
conda create -n cvs python=3.6.13 -y
conda activate cvs
conda install pytorch=1.6.0 torchvision=0.7.0 cudatoolkit=10.2 -c pytorch -y
conda install numpy=1.19.2 -y
conda install python-dateutil -y
conda install -c conda-forge pretrainedmodels=0.7.4 -y
conda install pandas=1.0.4 -y
conda install -c conda-forge scikit-learn=0.23.0 -y
pip install randaugment==1.0.2
pip install easydict==1.9
conda install -c pytorch faiss-gpu=1.7.0 -y
pip install matplotlib
Dataset
Please save datasets in the dataset folder. Each dataset format is the same as Rainbow's.
- Unzip CIFAR100 and rename the extracted folder to cifar100.
- Unzip Tiny ImageNet and run dataset/build_tinyimagenet.py to convert the format. Then, put the train-val-test files (here) into collections.
- Unzip the Product-10K dataset (train.zip) and put the train-val-test files (here) into collections. Then, run dataset/build_productm.py to generate Product-M.
- For Stanford Dogs and iNat-M, please download the files (Stanford Dogs / iNaturalist 2017). You can generate the same setting as described in the paper with slight modifications to dataset/build_productm.py.
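After following the steps above, a quick sanity check is to verify the expected folders exist. This is a small illustrative helper, not part of the repo; it only checks the two paths this README explicitly mentions.

```python
import os

# Paths mentioned in this README's Dataset section.
EXPECTED = [
    os.path.join("dataset", "cifar100"),
    "collections",
]

def missing_paths(root="."):
    """Return the expected dataset paths not present under root."""
    return [p for p in EXPECTED if not os.path.exists(os.path.join(root, p))]
```

Run it from the repository root; an empty list means the layout matches what the scripts expect.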
Train
Lower Bound: Finetune the model directly.
# For cifar100
bash scripts/cifar.sh -s blurry10 # change this depending on the experimental setup.
# E.g. bash scripts/cifar.sh -s general10
# E.g. bash scripts/cifar.sh -s disjoint
# For Tiny Imagenet
bash scripts/tinyimagenet.sh -s blurry30 # Options: [blurry30, general30, disjoint]
# E.g. bash scripts/tinyimagenet.sh -s general30
The result of the first session will be in exp/[DATASET]_[SETUP]/result.json.
For a later session j, the result will be in exp/[DATASET]_[SETUP]_[METHOD]$(j-1)/result.json.
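The naming scheme above can be spelled out with a small helper. This function is illustrative only (its name is not part of the repo); it just maps a session index to the result path pattern described here.

```python
def result_path(dataset, setup, method, session):
    # Session 1 results (before any continual update) live in
    # exp/[DATASET]_[SETUP]/result.json; later sessions append the
    # method name and the previous session index.
    if session == 1:
        return f"exp/{dataset}_{setup}/result.json"
    return f"exp/{dataset}_{setup}_{method}{session - 1}/result.json"

print(result_path("cifar100", "blurry10", "cvs", 1))  # exp/cifar100_blurry10/result.json
print(result_path("cifar100", "blurry10", "cvs", 3))  # exp/cifar100_blurry10_cvs2/result.json
```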
Upper Bound: Perform joint training while allowing re-extraction.
bash scripts/cifar.sh -s blurry10 -m jt # For cifar100 under blurry setup
bash scripts/tinyimagenet.sh -s blurry30 -m jt # For Tiny ImageNet under blurry setup
CVS
bash scripts/cifar.sh -s blurry10 -m cvs # For cifar100 under blurry setup
bash scripts/cifar.sh -s general10 -m cvs # For cifar100 under general setup
bash scripts/cifar.sh -s disjoint -m cvs # For cifar100 under disjoint setup
bash scripts/tinyimagenet.sh -s blurry30 -m cvs # For Tiny ImageNet under blurry setup
bash scripts/tinyimagenet.sh -s general30 -m cvs # For Tiny ImageNet under general setup
bash scripts/tinyimagenet.sh -s disjoint -m cvs # For Tiny ImageNet under disjoint setup
MMD
bash scripts/cifar.sh -s blurry10 -m mmd # For cifar100
BCT
bash scripts/cifar.sh -s blurry10 -m bct # For cifar100
LWF
bash scripts/cifar.sh -s blurry10 -m lwf # For cifar100
Test
Download the checkpoint folder here and put it in the same path as test.py. Then run bash test_CVS.sh.
Citation
If you find this implementation useful for your work, please cite our paper:
@InProceedings{Wan_2022_CVPR,
author = {Wan, Timmy S. T. and Chen, Jun-Cheng and Wu, Tzer-Yi and Chen, Chu-Song},
title = {Continual Learning for Visual Search With Backward Consistent Feature Embedding},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {16702-16711}
}