# Yet Another PyTorch Distributed MobileNetV2-based Networks Implementation
This repo reproduces several MobileNetV2-based networks that reach performance similar to or higher than the originally reported results.
## Model Zoo
All of the following models are trained with the MnasNet training schedule.
| Model | FLOPs | Parameters | Top-1 (reported) | Top-1 | Top-1 (calibrated) |
|---|---|---|---|---|---|
| MobileNetV2 | 300M | 3.4M | 72.0 | 73.6 | - |
| Proxyless (mobile) | 320M | 4.1M | 74.6 | 74.9 | 75.1 |
| SinglePath | 334M | 4.4M | 75.0 | 75.0 | 75.1 |
| AtomNAS-A | 258M | 3.9M | - | - | 74.6 |
| AtomNAS-B | 326M | 4.4M | - | - | 75.5 |
| AtomNAS-C | 360M | 4.7M | - | - | 75.9 |
| AtomNAS-A+ | 260M | 4.7M | - | - | 76.3 |
| AtomNAS-B+ | 329M | 5.5M | - | - | 77.2 |
| AtomNAS-C+ | 363M | 5.9M | - | - | 77.6 |
| AutoNL-S | 267M | 4.4M | - | - | 76.5 |
| AutoNL-L<sup>1</sup> | 353M | 5.6M | - | - | 77.5 |
The AtomNAS series comes from our ICLR 2020 paper *AtomNAS: Fine-Grained End-to-End Neural Architecture Search*; the code resides in AtomNAS.
The AutoNL series comes from our CVPR 2020 paper *Neural Architecture Search for Lightweight Non-Local Networks*; the original TensorFlow implementation resides in AutoNL.
Pretrained models can be downloaded from OneDrive.

<sup>1</sup> Differs slightly from the TensorFlow implementation because AutoAugment is not used.
## Setup
### Distributed Training
Set the following environment variables:
- `$DATA_ROOT`: path to the data root
- `$METIS_WORKER_0_HOST`: IP address of worker 0
- `$METIS_WORKER_0_PORT`: port used for initializing the distributed environment
- `$METIS_TASK_INDEX`: index of the task
- `$ARNOLD_WORKER_NUM`: number of workers
- `$ARNOLD_WORKER_GPU`: number of GPUs (NOTE: must exactly match the number of locally visible GPUs, i.e. be consistent with `CUDA_VISIBLE_DEVICES`)
- `$ARNOLD_OUTPUT`: output directory
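
For example, a single-node run on 8 GPUs could be set up as in the sketch below; all values are illustrative and must be adjusted for your cluster:

```bash
# Illustrative values only; adjust for your cluster.
export DATA_ROOT=/path/to/imagenet_data
export METIS_WORKER_0_HOST=10.0.0.1          # IP of worker 0
export METIS_WORKER_0_PORT=29500             # a free port on worker 0
export METIS_TASK_INDEX=0                    # 0 on worker 0, 1 on worker 1, ...
export ARNOLD_WORKER_NUM=1                   # single node
export ARNOLD_WORKER_GPU=8                   # GPUs on this worker
export ARNOLD_OUTPUT=./output
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7  # must match ARNOLD_WORKER_GPU
```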
### Non-Distributed Training (Not Recommended)
Set the following environment variables:
- `$DATA_ROOT`: path to the data root
- `$ARNOLD_WORKER_GPU`: number of GPUs (NOTE: must exactly match the number of locally visible GPUs, i.e. be consistent with `CUDA_VISIBLE_DEVICES`)
- `$ARNOLD_OUTPUT`: output directory
## Training
Taking MobileNetV2 as an example, run distributed training with:
```bash
bash scripts/run.sh ./apps/mobilenet/mobilenet_v2_mnas.yml
```
For non-distributed training:
```bash
bash scripts/run_non_distributed_no_copy.sh ./apps/mobilenet/mobilenet_v2_mnas.yml
```
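
Putting the pieces together, a complete single-machine run might look like the following sketch (paths and GPU count are illustrative):

```bash
# Illustrative values only.
export DATA_ROOT=/path/to/imagenet_data
export ARNOLD_WORKER_GPU=2        # GPUs to use
export ARNOLD_OUTPUT=./output
export CUDA_VISIBLE_DEVICES=0,1   # must match ARNOLD_WORKER_GPU

bash scripts/run_non_distributed_no_copy.sh ./apps/mobilenet/mobilenet_v2_mnas.yml
```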
## Testing
For MobileNet, Proxyless, SinglePathNAS, and AutoNL:
```bash
TRAIN_CONFIG=$(realpath {{train_config_path}}) ATOMNAS_VAL=True bash scripts/run.sh apps/eval/eval.yml --pretrained {{ckpt_path}}
```
For AtomNAS:
```bash
FILE=$(realpath {{log_dir_path}}) checkpoint=ckpt ATOMNAS_VAL=True bash scripts/run.sh apps/eval/eval_shrink.yml
```
For AtomNAS+:
```bash
TRAIN_CONFIG=$(realpath {{train_config_path}}) ATOMNAS_VAL=True bash scripts/run.sh apps/eval/eval_se.yml --pretrained {{ckpt_path}}
```
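
For example, evaluating the MobileNetV2 model trained above might look like the sketch below; the checkpoint path is hypothetical, so substitute the path to your own checkpoint:

```bash
# Hypothetical checkpoint path; substitute your own trained model.
TRAIN_CONFIG=$(realpath apps/mobilenet/mobilenet_v2_mnas.yml) \
ATOMNAS_VAL=True \
bash scripts/run.sh apps/eval/eval.yml --pretrained output/mobilenet_v2/ckpt.pt
```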
## Related Info

- Requirements
  - See `requirements.txt`.
- Environment
  - The code is developed using Python 3 and requires NVIDIA GPUs. It is developed and tested on 4 servers with 32 NVIDIA V100 GPUs; other platforms or GPU cards are not fully tested.
- Dataset
  - Prepare the ImageNet data following the PyTorch example.
  - Optional: generate an lmdb dataset with `utils/lmdb_dataset.py`. If you skip this step, change `dataset: imagenet1k_lmdb` to `dataset: imagenet1k` in the yaml config.
  - The directory structure of `$DATA_ROOT` should look like this:

    ```
    ${DATA_ROOT}
    ├── imagenet
    └── imagenet_lmdb
    ```
- Miscellaneous
  - The codebase is a general ImageNet training framework based on PyTorch, driven by yaml configs, with several extensions under the `apps` dir.
  - YAML configs support additional features (see the sketch after this list):
    - `${ENV}` substitution in yaml configs.
    - `_include` for hierarchical configs.
    - `_default` key for overwriting.
    - `xxx.yyy.zzz` for partial overwriting.
    - `--{{opt}} {{new_val}}` for command-line overwriting.
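
As a rough illustration of those features, a derived config might look like the sketch below. The semantics are assumed from the feature names, and every key other than the special `_include`/`_default` keys is hypothetical; the real configs under `apps/` are the authoritative reference.

```yaml
# Hypothetical sketch of the yaml config features; see apps/ for real configs.
_include: apps/mobilenet/base.yml   # pull in a parent config (path hypothetical)
data_root: ${DATA_ROOT}             # ${ENV} is substituted from the environment
_default:                           # defaults that more specific keys may overwrite
  momentum: 0.9
optimizer.lr: 0.5                   # xxx.yyy.zzz-style partial overwriting
```

The same partial overwriting is available on the command line via `--{{opt}} {{new_val}}`, e.g. `bash scripts/run.sh config.yml --optimizer.lr 0.5` (option name hypothetical).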
## Acknowledgment
This repo is based on slimmable_networks and benefits from the following projects. Thanks to the contributors of these repos!
## Citation
If you find this work or code helpful in your research, please cite:
```
@inproceedings{
mei2020atomnas,
title={Atom{NAS}: Fine-Grained End-to-End Neural Architecture Search},
author={Jieru Mei and Yingwei Li and Xiaochen Lian and Xiaojie Jin and Linjie Yang and Alan Yuille and Jianchao Yang},
booktitle={International Conference on Learning Representations},
year={2020},
}
```