Proxy Anchor Loss for Deep Metric Learning

Official PyTorch implementation of CVPR 2020 paper Proxy Anchor Loss for Deep Metric Learning.

A standard embedding network trained with Proxy-Anchor Loss achieves SOTA performance and converges faster than existing losses.

This repository provides the source code for experiments on four datasets (CUB-200-2011, Cars-196, Stanford Online Products, and In-shop Clothes Retrieval), as well as pretrained models.

Accuracy in Recall@1 versus training time on the Cars-196 dataset:

<p align="left"><img src="misc/Recall_Trainingtime.jpg" alt="graph" width="55%"></p>

Requirements

Datasets

  1. Download four public benchmarks for deep metric learning

  2. Extract the tgz or zip file into ./data/ (for Cars-196 only, put the files in ./data/cars196)

[Notice!] The link previously provided for the CUB dataset was incorrect and has been fixed (CUB-200 -> CUB-200-2011). If you downloaded the CUB dataset from this repository before, please download it again. Thanks to myeongjun for reporting this issue!
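A quick way to confirm the extraction step worked is to check the expected folders under ./data/. The folder names below are assumptions based on the usual archive names (only cars196 is spelled out by this README), so adjust them to match your downloads:

```python
from pathlib import Path

DATA_ROOT = Path("./data")

def check_layout(root=DATA_ROOT):
    """Report which benchmark folders are present under the data root.
    Folder names are assumed from the usual archive contents."""
    candidates = ["CUB_200_2011", "cars196", "Stanford_Online_Products", "inshop"]
    return {name: (root / name).is_dir() for name in candidates}

print(check_layout())  # e.g. {'CUB_200_2011': True, 'cars196': False, ...}
```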

Training Embedding Network

Note that a sufficiently large batch size and well-chosen hyperparameters yield better overall performance than reported in the paper. The trained models can be downloaded via the hyperlinks in the tables below.
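For reference, the loss itself is compact enough to sketch in a few lines of PyTorch. This is a minimal re-implementation following the paper's formulation (delta is the margin δ, alpha the scaling factor α; the defaults match the paper's δ = 0.1, α = 32), not the repository's exact losses.py:

```python
import torch
import torch.nn.functional as F

def proxy_anchor_loss(embeddings, labels, proxies, delta=0.1, alpha=32):
    """Minimal Proxy-Anchor loss: pull each positive proxy toward its
    batch positives, push every proxy away from its batch negatives."""
    emb = F.normalize(embeddings, dim=1)
    prox = F.normalize(proxies, dim=1)
    cos = emb @ prox.t()                                # (N, C) cosine similarities
    pos_mask = F.one_hot(labels, prox.size(0)).float()  # 1 where sample x is positive for proxy p
    neg_mask = 1.0 - pos_mask

    pos_exp = torch.exp(-alpha * (cos - delta))
    neg_exp = torch.exp(alpha * (cos + delta))

    with_pos = pos_mask.sum(0) > 0                      # proxies with >= 1 positive in the batch
    pos_term = torch.log1p((pos_exp * pos_mask).sum(0))[with_pos].sum() / with_pos.sum()
    neg_term = torch.log1p((neg_exp * neg_mask).sum(0)).sum() / prox.size(0)
    return pos_term + neg_term
```

The positive term averages only over proxies that have at least one positive sample in the batch, which is what lets the loss use every sample in the batch as an anchor-side signal at once.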

CUB-200-2011

```bash
python train.py --gpu-id 0 \
                --loss Proxy_Anchor \
                --model bn_inception \
                --embedding-size 512 \
                --batch-size 180 \
                --lr 1e-4 \
                --dataset cub \
                --warm 1 \
                --bn-freeze 1 \
                --lr-decay-step 10
```

```bash
python train.py --gpu-id 0 \
                --loss Proxy_Anchor \
                --model resnet50 \
                --embedding-size 512 \
                --batch-size 120 \
                --lr 1e-4 \
                --dataset cub \
                --warm 5 \
                --bn-freeze 1 \
                --lr-decay-step 5
```
| Method | Backbone | R@1 | R@2 | R@4 | R@8 |
|--------|----------|-----|-----|-----|-----|
| Proxy-Anchor<sup>512</sup> | Inception-BN | 69.1 | 78.9 | 86.1 | 91.2 |
| Proxy-Anchor<sup>512</sup> | ResNet-50 | 69.9 | 79.6 | 86.6 | 91.4 |
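The --warm and --bn-freeze flags in the commands above correspond to two common tricks: freezing the pretrained backbone for the first few epochs so only the new embedding head (and the proxies) warm up, and keeping BatchNorm layers in eval mode during training. A minimal sketch of both, assuming that reading of the flags (the helper names here are mine, not the repo's API):

```python
import torch.nn as nn

def set_bn_eval(model: nn.Module) -> None:
    # --bn-freeze 1: keep BatchNorm running statistics fixed
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d)):
            m.eval()

def set_backbone_frozen(backbone: nn.Module, frozen: bool) -> None:
    # --warm k: freeze the pretrained backbone for the first k epochs
    for p in backbone.parameters():
        p.requires_grad = not frozen

# Toy model: conv + BN "backbone" followed by a new embedding head
backbone = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8), nn.ReLU())
model = nn.Sequential(backbone, nn.Flatten(), nn.LazyLinear(512))

model.train()
set_bn_eval(model)                          # re-apply after every model.train()
set_backbone_frozen(backbone, frozen=True)  # during the warm-up epochs only
```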

Cars-196

```bash
python train.py --gpu-id 0 \
                --loss Proxy_Anchor \
                --model bn_inception \
                --embedding-size 512 \
                --batch-size 180 \
                --lr 1e-4 \
                --dataset cars \
                --warm 1 \
                --bn-freeze 1 \
                --lr-decay-step 20
```

```bash
python train.py --gpu-id 0 \
                --loss Proxy_Anchor \
                --model resnet50 \
                --embedding-size 512 \
                --batch-size 120 \
                --lr 1e-4 \
                --dataset cars \
                --warm 5 \
                --bn-freeze 1 \
                --lr-decay-step 10
```
| Method | Backbone | R@1 | R@2 | R@4 | R@8 |
|--------|----------|-----|-----|-----|-----|
| Proxy-Anchor<sup>512</sup> | Inception-BN | 86.4 | 91.9 | 95.0 | 97.0 |
| Proxy-Anchor<sup>512</sup> | ResNet-50 | 87.7 | 92.7 | 95.5 | 97.3 |

Stanford Online Products

```bash
python train.py --gpu-id 0 \
                --loss Proxy_Anchor \
                --model bn_inception \
                --embedding-size 512 \
                --batch-size 180 \
                --lr 6e-4 \
                --dataset SOP \
                --warm 1 \
                --bn-freeze 0 \
                --lr-decay-step 20 \
                --lr-decay-gamma 0.25
```
| Method | Backbone | R@1 | R@10 | R@100 | R@1000 |
|--------|----------|-----|------|-------|--------|
| Proxy-Anchor<sup>512</sup> | Inception-BN | 79.2 | 90.7 | 96.2 | 98.6 |

In-Shop Clothes Retrieval

```bash
python train.py --gpu-id 0 \
                --loss Proxy_Anchor \
                --model bn_inception \
                --embedding-size 512 \
                --batch-size 180 \
                --lr 6e-4 \
                --dataset Inshop \
                --warm 1 \
                --bn-freeze 0 \
                --lr-decay-step 20 \
                --lr-decay-gamma 0.25
```
| Method | Backbone | R@1 | R@10 | R@20 | R@30 | R@40 |
|--------|----------|-----|------|------|------|------|
| Proxy-Anchor<sup>512</sup> | Inception-BN | 91.9 | 98.1 | 98.7 | 99.0 | 99.1 |
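The --lr-decay-step and --lr-decay-gamma options above describe a standard step learning-rate schedule. Assuming they map onto torch.optim.lr_scheduler.StepLR (my reading of the flags, not a guarantee of the repo's internals), the SOP/In-shop settings behave like this:

```python
import torch

# Dummy parameter; in the repo this would be the network plus the proxies.
params = [torch.nn.Parameter(torch.zeros(2, 2))]
optimizer = torch.optim.AdamW(params, lr=6e-4)

# lr is multiplied by gamma every step_size epochs:
# 6e-4 -> 1.5e-4 at epoch 20 -> 3.75e-5 at epoch 40
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.25)

for epoch in range(40):
    # ... one training epoch ...
    scheduler.step()
```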

Evaluating Image Retrieval

Follow the steps below to evaluate a provided pretrained model or your own trained model.

The best model from training is saved in ./logs/folder_name.

```bash
# The parameters should be changed according to the model to be evaluated.
python evaluate.py --gpu-id 0 \
                   --batch-size 120 \
                   --model bn_inception \
                   --embedding-size 512 \
                   --dataset cub \
                   --resume /set/your/model/path/best_model.pth
```
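Under the hood, the Recall@K metric reported in the tables is nearest-neighbor retrieval in the embedding space: a query counts as correct if any of its K nearest neighbors (by cosine similarity, the query itself excluded) shares its class. A self-contained sketch of that metric (the function name is mine, not the repo's API):

```python
import torch
import torch.nn.functional as F

def recall_at_k(embeddings, labels, ks=(1, 2, 4, 8)):
    """Recall@K: a query is correct if any of its K nearest neighbors
    (cosine similarity, query excluded) has the same label."""
    emb = F.normalize(embeddings, dim=1)
    sim = emb @ emb.t()
    sim.fill_diagonal_(-float("inf"))                     # never retrieve the query itself
    knn_labels = labels[sim.topk(max(ks), dim=1).indices] # (N, max_k) neighbor labels
    match = knn_labels == labels.unsqueeze(1)
    return {k: match[:, :k].any(dim=1).float().mean().item() for k in ks}
```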

Acknowledgements

Our code is modified and adapted from these great repositories:

Other Implementations

Thanks to Geonmo and nixingyang for their nice implementations :D

New Method for Further Improvement

Our recent paper Embedding Transfer with Label Relaxation for Improved Metric Learning, which presents a new knowledge distillation method for metric learning, has been accepted and will be presented at CVPR 2021. The new method can greatly improve performance, or reduce the size and output dimension of an embedding network with negligible performance degradation. If you are interested in knowledge distillation for metric learning, please check the arXiv and repository links. The new repository has been refactored based on this Proxy-Anchor Loss implementation, so anyone who has used this repository should find the new code easy to use. :D

Citation

If you use this method or this code in your research, please cite as:

```
@InProceedings{Kim_2020_CVPR,
  author = {Kim, Sungyeon and Kim, Dongwon and Cho, Minsu and Kwak, Suha},
  title = {Proxy Anchor Loss for Deep Metric Learning},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month = {June},
  year = {2020}
}
```