MMAL-Net

This is a PyTorch implementation of the paper "Multi-branch and Multi-scale Attention Learning for Fine-Grained Visual Categorization" (MMAL-Net) by Fan Zhang, Meng Li, Guisheng Zhai, and Yizhao Liu. The paper has been accepted by the 27th International Conference on Multimedia Modeling (MMM2021). Welcome to discuss with us in the issues!


Requirements

Datasets

Download the CUB-200-2011 dataset and copy the contents of the extracted images folder into datasets/CUB_200_2011/images.

Download the FGVC-Aircraft dataset and copy the contents of the extracted data/images folder into datasets/FGVC_Aircraft/data/images.

You can also try other fine-grained datasets.
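Before training, it can help to confirm the folders are where the code expects them. The following is a minimal, hypothetical sanity-check script (not part of this repository); the paths simply mirror the layout described above, so adjust them if you placed the data elsewhere.

```python
# Hypothetical helper: verify the dataset layout described above and count images.
import os

DATASET_DIRS = {
    "CUB-200-2011": os.path.join("datasets", "CUB_200_2011", "images"),
    "FGVC-Aircraft": os.path.join("datasets", "FGVC_Aircraft", "data", "images"),
}
IMAGE_EXTENSIONS = (".jpg", ".jpeg", ".png")

for name, path in DATASET_DIRS.items():
    if not os.path.isdir(path):
        print(f"{name}: missing directory {path}")
        continue
    # Count image files recursively (CUB keeps images in per-class subfolders).
    count = sum(
        1
        for _, _, files in os.walk(path)
        for f in files
        if f.lower().endswith(IMAGE_EXTENSIONS)
    )
    print(f"{name}: found {count} images under {path}")
```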

Training MMAL-Net

If you want to train MMAL-Net, please download the pretrained ResNet-50 model and move it to models/pretrained before running python train.py. You may need to change the configuration in config.py if you do not have enough GPU memory. The parameter N_list corresponds to N1, N2, N3 in the paper, and you can reduce these values to fit your GPU. During training, the log file and checkpoint files are saved in the model_path directory.
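As a hedged illustration only: besides N_list and model_path, which this README names, the variable names below are assumptions, so match them against what your copy of config.py actually defines. Reducing the proposal counts and the batch size is the usual way to fit training onto a smaller GPU.

```python
# Illustrative config.py adjustments for a smaller GPU. Only N_list and
# model_path are named in this README; the other names are assumptions.

# N_list corresponds to (N1, N2, N3) in the paper: how many part proposals are
# kept at each scale. Smaller values cut memory use, possibly at some accuracy cost.
N_list = [2, 3, 2]                # example values, not the repository defaults

batch_size = 6                    # assumed name; lower this first on CUDA out-of-memory errors
model_path = './checkpoint/cub'   # training logs and checkpoint files are written here
```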

Evaluation

If you want to test MMAL-Net, just run python test.py. You need to set model_path in test.py to the checkpoint you want to evaluate.
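For orientation, here is a hypothetical sketch of pointing the evaluation at a saved checkpoint; the directory, file name, and checkpoint keys are assumptions rather than the repository's actual values, so adapt them to whatever train.py writes out.

```python
# Hypothetical sketch: locate and load a training checkpoint before evaluation.
import os
import torch

model_path = './checkpoint/cub'                                # directory holding your trained model
checkpoint_file = os.path.join(model_path, 'best_model.pth')   # assumed file name

checkpoint = torch.load(checkpoint_file, map_location='cpu')
# train.py may save either a bare state_dict or a dict that wraps it; handle both.
state_dict = checkpoint.get('model_state_dict', checkpoint)
print(f"Loaded {len(state_dict)} parameter tensors from {checkpoint_file}")
```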

Model

We also provide checkpoints trained by ourselves: you can download the CUB-200-2011 model from Google Drive or the FGVC-Aircraft model from here. Testing with these provided checkpoints should give 89.6% and 94.7% test accuracy on CUB-200-2011 and FGVC-Aircraft, respectively.

Reference

If you are interested in our work, please cite the following paper:

@misc{zhang2020threebranch,
    title={Multi-branch and Multi-scale Attention Learning for Fine-Grained Visual Categorization},
    author={Fan Zhang and Meng Li and Guisheng Zhai and Yizhao Liu},
    year={2020},
    eprint={2003.09150},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}