# AtomNAS: Fine-Grained End-to-End Neural Architecture Search [PDF]
## Updates
- [Mar 2020] A clean MobileNet-series implementation is provided.
- [Feb 2020] Simplified the validation process and released pretrained models. These changes are incompatible with the previous version.
## Overview
This is the codebase (including search) for the ICLR 2020 paper *AtomNAS: Fine-Grained End-to-End Neural Architecture Search*.
## Setup
### Distributed Training
Set the following environment variables:

- `$DATA_ROOT`: path to the data root
- `$METIS_WORKER_0_HOST`: IP address of worker 0
- `$METIS_WORKER_0_PORT`: port used for initializing the distributed environment
- `$METIS_TASK_INDEX`: index of the task
- `$ARNOLD_WORKER_NUM`: number of workers
- `$ARNOLD_WORKER_GPU`: number of GPUs per worker (NOTE: must exactly match the number of GPUs visible locally, e.g. via `CUDA_VISIBLE_DEVICES`)
- `$ARNOLD_OUTPUT`: output directory
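For example, a two-worker job might export values like the following. All values below are placeholders for illustration; substitute your own hosts, ports, and paths:

```bash
# Hypothetical values for a 2-worker setup; run with METIS_TASK_INDEX=1 on the second worker.
export DATA_ROOT=/path/to/data
export METIS_WORKER_0_HOST=10.0.0.1   # IP address of worker 0
export METIS_WORKER_0_PORT=29500      # any free port on worker 0
export METIS_TASK_INDEX=0             # this worker's index
export ARNOLD_WORKER_NUM=2            # total number of workers
export ARNOLD_WORKER_GPU=8            # GPUs per worker
export ARNOLD_OUTPUT=/path/to/output
```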
### Non-Distributed Training (Not Recommended)
Set the following environment variables:

- `$DATA_ROOT`: path to the data root
- `$ARNOLD_WORKER_GPU`: number of GPUs (NOTE: must exactly match the number of GPUs visible locally, e.g. via `CUDA_VISIBLE_DEVICES`)
- `$ARNOLD_OUTPUT`: output directory
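A minimal single-machine setup might look like this (placeholder values):

```bash
export DATA_ROOT=/path/to/data
export ARNOLD_WORKER_GPU=4            # e.g. matches CUDA_VISIBLE_DEVICES=0,1,2,3
export ARNOLD_OUTPUT=/path/to/output
```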
## Reproduce AtomNAS results

For Table 1:

- AtomNAS-A:

  ```bash
  bash scripts/run.sh apps/slimming/shrink/atomnas_a.yml
  ```

- AtomNAS-B:

  ```bash
  bash scripts/run.sh apps/slimming/shrink/atomnas_b.yml
  ```

- AtomNAS-C:

  ```bash
  bash scripts/run.sh apps/slimming/shrink/atomnas_c.yml
  ```
If everything runs correctly, you should get results similar to those reported in the paper.

Pretrained models can be downloaded from OneDrive.
## Testing
For AtomNAS:

```bash
FILE=$(realpath {{log_dir_path}}) checkpoint=ckpt ATOMNAS_VAL=True bash scripts/run.sh apps/eval/eval_shrink.yml
```
For AtomNAS+:

```bash
TRAIN_CONFIG=$(realpath {{train_config_path}}) ATOMNAS_VAL=True bash scripts/run.sh apps/eval/eval_se.yml --pretrained {{ckpt_path}}
```
## Related Info

- Requirements
  - See `requirements.txt`.
- Environment
  - The code is developed using Python 3 and requires NVIDIA GPUs. It was developed and tested on 4 servers with 32 NVIDIA V100 GPU cards in total; other platforms or GPU cards are not fully tested.
- Dataset
  - Prepare the ImageNet data following the PyTorch example.
  - Optional: generate an LMDB dataset using `utils/lmdb_dataset.py`. If you skip this step, overwrite `dataset: imagenet1k_lmdb` in the yaml config with `dataset: imagenet1k` (see the config sketch after this list).
  - The directory structure of `$DATA_ROOT` should look like this:

    ```
    ${DATA_ROOT}
    ├── imagenet
    └── imagenet_lmdb
    ```
- Miscellaneous
  - The codebase is a general ImageNet training framework based on PyTorch, driven by yaml configs, with several extensions under the `apps` dir.
  - The yaml config supports additional features:
    - `${ENV}` substitution in the yaml config.
    - `_include` for hierarchical configs.
    - `_default` key for overwriting.
    - `xxx.yyy.zzz` for partial overwriting.
    - `--{{opt}} {{new_val}}` for command-line overwriting.
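To make these features concrete, here is a minimal sketch of a config. The semantics are inferred from the bullet list above; any key or file name not shown elsewhere in this README (`base.yml`, `log_interval`, `optimizer.lr`) is hypothetical:

```yaml
# experiment.yml — hypothetical config illustrating the features above.
_include: base.yml          # hierarchical config: inherit keys from base.yml
_default:                   # default values that concrete configs may overwrite
  log_interval: 100
dataset: imagenet1k_lmdb
data_root: ${DATA_ROOT}     # ${ENV}: substituted from the environment at load time
optimizer.lr: 0.1           # partial overwrite of a nested key (xxx.yyy.zzz)
```

Command-line overwriting composes the same way; for instance, to switch from the LMDB dataset back to raw ImageNet without editing any yaml, something like `bash scripts/run.sh apps/slimming/shrink/atomnas_a.yml --dataset imagenet1k` should work, assuming `dataset` is a top-level config key as shown above.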
## Acknowledgment
This repo is based on slimmable_networks and benefits from several other open-source projects. Thanks to the contributors of these repos!
## Citation
If you find this work or code helpful in your research, please cite:
```bibtex
@inproceedings{
    mei2020atomnas,
    title={Atom{NAS}: Fine-Grained End-to-End Neural Architecture Search},
    author={Jieru Mei and Yingwei Li and Xiaochen Lian and Xiaojie Jin and Linjie Yang and Alan Yuille and Jianchao Yang},
    booktitle={International Conference on Learning Representations},
    year={2020},
    url={https://openreview.net/forum?id=BylQSxHFwr}
}
```