PyTorch Implementation Of WS-DAN

Introduction

This is a PyTorch implementation of the paper "See Better Before Looking Closer: Weakly Supervised Data Augmentation Network for Fine-Grained Visual Classification" (WS-DAN). An official TensorFlow implementation, WS_DAN, is also available. The core of this code follows the official version, and its performance nearly matches the results reported in the paper.

Environment

Result

| Dataset | ACC (this repo) | ACC Refine (this repo) | ACC (paper) |
| --- | --- | --- | --- |
| CUB-200-2011 | 88.20 | 89.30 | 89.4 |
| FGVC-Aircraft | 93.15 | 93.22 | 93.0 |
| Stanford Cars | 94.13 | 94.43 | 94.5 |
| Stanford Dogs | 86.03 | 86.46 | 92.2 |

You can download the pretrained models from WS_DAN_Onedrive.

Install

1. Clone the repo

```shell
git clone https://github.com/wvinzh/WS_DAN_PyTorch
```

2. Prepare the datasets
| Dataset | Object | Category | Training | Testing |
| --- | --- | --- | --- | --- |
| CUB-200-2011 | Bird | 200 | 5994 | 5794 |
| Stanford-Cars | Car | 196 | 8144 | 8041 |
| fgvc-aircraft | Aircraft | 100 | 6667 | 3333 |
| Stanford-Dogs | Dog | 120 | 12000 | 8580 |
Organize the downloaded datasets as follows:

```
Fine-grained
├── CUB_200_2011
│   ├── attributes
│   ├── bounding_boxes.txt
│   ├── classes.txt
│   ├── image_class_labels.txt
│   ├── images
│   ├── images.txt
│   ├── parts
│   ├── README
├── Car
│   ├── cars_test
│   ├── cars_train
│   ├── devkit
│   └── tfrecords
├── fgvc-aircraft-2013b
│   ├── data
│   ├── evaluation.m
│   ├── example_evaluation.m
│   ├── README.html
│   ├── README.md
│   ├── vl_argparse.m
│   ├── vl_pr.m
│   ├── vl_roc.m
│   └── vl_tpfp.m
├── dogs
│   ├── file_list.mat
│   ├── Images
│   ├── test_list.mat
│   └── train_list.mat
```
Then use `utils/convert_data.py` to generate the train/test list files, e.g. for CUB-200-2011:

```shell
python utils/convert_data.py --dataset_name bird --root_path .../Fine-grained/CUB_200_2011
```
The resulting `data` directory (with symlinks to the dataset roots):

```
├── data
│   ├── Aircraft -> /your_root_path/Fine-grained/fgvc-aircraft-2013b/data
│   ├── aircraft_test.txt
│   ├── aircraft_train.txt
│   ├── Bird -> /your_root_path/Fine-grained/CUB_200_2011
│   ├── bird_test.txt
│   ├── bird_train.txt
│   ├── Car -> /your_root_path/Fine-grained/Car
│   ├── car_test.txt
│   ├── car_train.txt
│   ├── Dog -> /your_root_path/Fine-grained/dogs
│   ├── dog_test.txt
│   └── dog_train.txt
```
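The conversion step writes plain-text train/test lists such as `bird_train.txt`. A minimal sketch of parsing such a list, assuming one `image_path label` pair per line (the exact format written by `convert_data.py` may differ, so treat this as illustrative):

```python
def read_list_file(list_path):
    """Parse a generated list file into (image_path, label) pairs.

    Assumes each non-empty line holds an image path and an integer label
    separated by whitespace -- an assumption about convert_data.py's
    output format, not a guarantee.
    """
    samples = []
    with open(list_path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            # rsplit tolerates spaces inside the path itself
            path, label = line.rsplit(maxsplit=1)
            samples.append((path, int(label)))
    return samples
```

A PyTorch `Dataset` would typically wrap this list and load each image lazily in `__getitem__`.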

Usage

To train:

```shell
python train_bap.py train \
    --model-name inception \
    --batch-size 12 \
    --dataset car \
    --image-size 512 \
    --input-size 448 \
    --checkpoint-path checkpoint/car \
    --optim sgd \
    --scheduler step \
    --lr 0.001 \
    --momentum 0.9 \
    --weight-decay 1e-5 \
    --workers 4 \
    --parts 32 \
    --epochs 80 \
    --use-gpu \
    --multi-gpu \
    --gpu-ids 0,1
```

A simpler way is to run `sh train_bap.sh`, or to run it in the background with logs: `nohup sh train_bap.sh 1>train.log 2>error.log &`
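The `--parts` flag corresponds to the number of attention maps M in WS-DAN's bilinear attention pooling (BAP): each attention map weights the backbone feature maps, and the weighted features are pooled into one part-feature vector per map. A NumPy sketch of the idea (not this repo's actual implementation, which operates on batched PyTorch tensors):

```python
import numpy as np

def bilinear_attention_pooling(features, attentions):
    """Bilinear attention pooling (BAP), simplified.

    features:   (C, H, W) backbone feature maps
    attentions: (M, H, W) attention maps (M parts, e.g. --parts 32)
    returns:    (M, C) matrix of part features, one row per attention map
    """
    M, H, W = attentions.shape
    # Multiply each attention map element-wise with every feature channel,
    # then average-pool over the spatial dimensions.
    part_features = np.einsum('mhw,chw->mc', attentions, features) / (H * W)
    return part_features

# Toy example: 3 channels, a 4x4 spatial grid, 2 attention maps.
feats = np.random.rand(3, 4, 4)
attns = np.random.rand(2, 4, 4)
print(bilinear_attention_pooling(feats, attns).shape)  # (2, 3)
```

In the paper the resulting (M, C) feature matrix is flattened and normalized before the classifier.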

To evaluate a trained checkpoint:

```shell
python train_bap.py test \
    --model-name inception \
    --batch-size 12 \
    --dataset car \
    --image-size 512 \
    --input-size 448 \
    --checkpoint-path checkpoint/car/model_best.pth.tar \
    --workers 4 \
    --parts 32 \
    --use-gpu \
    --multi-gpu \
    --gpu-ids 0,1
```
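The "weakly supervised data augmentation" in WS-DAN uses the learned attention maps to guide two augmentations: attention cropping (zoom into the highly attended region for a closer look) and attention dropping (mask that region out so other parts get learned). A NumPy sketch of both, assuming a single attention map; thresholds, upsampling to image coordinates, and random map selection are omitted, and the repo's exact implementation may differ:

```python
import numpy as np

def attention_crop_box(attention, threshold=0.5):
    """Bounding box of the region where attention >= threshold * max.

    For attention cropping, this box would be scaled up to image
    coordinates, and the crop resized for a second forward pass.
    Returns (y0, y1, x0, x1) with exclusive upper bounds.
    """
    mask = attention >= threshold * attention.max()
    ys, xs = np.nonzero(mask)
    return ys.min(), ys.max() + 1, xs.min(), xs.max() + 1

def attention_drop(image_like, attention, threshold=0.5):
    """Zero out the highly attended region (attention dropping)."""
    mask = attention >= threshold * attention.max()
    out = image_like.copy()
    out[..., mask] = 0  # assumes the trailing dims are (H, W)
    return out
```

During training the paper applies one of the two augmentations per image, so the network sees both zoomed-in parts and images with their most discriminative region removed.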