AttentiveNAS: Improving Neural Architecture Search via Attentive Sampling

This repository contains our PyTorch training code, evaluation code and pretrained models for AttentiveNAS.

[Update 06/21] Recently, we improved AttentiveNAS with an adaptive knowledge distillation training strategy; see our AlphaNet repo for details. AlphaNet has been accepted to ICML'21.

[Update 07/21] We provide example code for searching for the best models on the FLOPs vs. accuracy trade-off here.

For more details, please see AttentiveNAS: Improving Neural Architecture Search via Attentive Sampling by Dilin Wang, Meng Li, Chengyue Gong and Vikas Chandra.

If you find this repo useful in your research, please consider citing our work:

@article{wang2020attentivenas,
  title={AttentiveNAS: Improving Neural Architecture Search via Attentive Sampling},
  author={Wang, Dilin and Li, Meng and Gong, Chengyue and Chandra, Vikas},
  journal={arXiv preprint arXiv:2011.09011},
  year={2020}
}

Evaluation

To reproduce our results, download the pretrained AttentiveNAS models and evaluate them on ImageNet, as sketched below.
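A minimal sketch of an evaluation run, assuming an evaluation script and config that mirror the training ones; the script name test_attentive_nas.py, the config path configs/eval_attentive_nas_models.yml, and the --model flag are illustrative assumptions, not confirmed by this README:

python test_attentive_nas.py --config-file configs/eval_attentive_nas_models.yml --model a0  # hypothetical script and flags; a0 denotes one pretrained AttentiveNAS variant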

Training

To train our AttentiveNAS models from scratch, please run

python train_attentive_nas.py --config-file configs/train_attentive_nas_models.yml --machine-rank ${machine_rank} --num-machines ${num_machines} --dist-url ${dist_url}

We train with SGD on 64 GPUs using a mini-batch size of 32 per GPU; all training hyper-parameters are specified in configs/train_attentive_nas_models.yml.
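For example, a single-machine launch would look like the following, where the rendezvous address passed to --dist-url is an illustrative placeholder:

python train_attentive_nas.py --config-file configs/train_attentive_nas_models.yml --machine-rank 0 --num-machines 1 --dist-url tcp://127.0.0.1:10001  # placeholder address; adjust to your setup

For multi-machine training, set --num-machines to the number of nodes, give each node a unique --machine-rank from 0 to num-machines minus 1, and point --dist-url at an address reachable from every node.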

Additional data

License

The majority of AttentiveNAS is licensed under CC-BY-NC; however, portions of the project are available under separate license terms: Once For All is licensed under the Apache 2.0 license.

Contributing

We actively welcome your pull requests! Please see CONTRIBUTING and CODE_OF_CONDUCT for more info.