
# Understanding and Accelerating Neural Architecture Search with Training-Free and Theory-Grounded Metrics [[PDF](https://arxiv.org/pdf/2108.11939.pdf)]


MIT licensed

Wuyang Chen*, Xinyu Gong*, Yunchao Wei, Humphrey Shi, Zhicheng Yan, Yi Yang, and Zhangyang Wang

In ICLR 2021.

## Note

  1. This repo is still under development. Scripts are executable, but some CUDA errors may occur.
  2. Due to IP issues, we can only release the code for NAS via reinforcement learning and evolution, but not for FP-NAS.

## Overview

We present TEG-NAS, a generalized training-free neural architecture search method that significantly reduces the time cost of popular search methods (no gradient descent at all!) while maintaining high-quality performance.
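One of the training-free metrics behind TEG-NAS is the condition number of the Neural Tangent Kernel (NTK) of a candidate network. The sketch below is a toy numpy illustration of the idea only, using a finite-difference Jacobian on a tiny MLP; it is not the repo's implementation, which evaluates the actual search-space networks, and all names here (`mlp_forward`, `ntk_condition_number`) are made up for this example.

```python
import numpy as np

def mlp_forward(params, x):
    """Tiny 2-layer ReLU MLP; params is (W1, b1, W2, b2)."""
    W1, b1, W2, b2 = params
    h = np.maximum(0.0, x @ W1 + b1)
    return h @ W2 + b2  # one scalar output per sample

def flatten(params):
    return np.concatenate([p.ravel() for p in params])

def unflatten(theta, shapes):
    out, i = [], 0
    for s in shapes:
        n = int(np.prod(s))
        out.append(theta[i:i + n].reshape(s))
        i += n
    return out

def ntk_condition_number(params, X, eps=1e-4):
    """Finite-difference Jacobian J of outputs w.r.t. parameters,
    then the empirical NTK = J J^T and its eigenvalue ratio."""
    shapes = [p.shape for p in params]
    theta0 = flatten(params)
    f0 = mlp_forward(params, X).ravel()
    J = np.zeros((len(f0), len(theta0)))
    for j in range(len(theta0)):
        theta = theta0.copy()
        theta[j] += eps
        fj = mlp_forward(unflatten(theta, shapes), X).ravel()
        J[:, j] = (fj - f0) / eps
    ntk = J @ J.T                     # (N, N), positive semi-definite
    lam = np.linalg.eigvalsh(ntk)     # ascending eigenvalues
    return lam[-1] / max(lam[0], 1e-12)

rng = np.random.default_rng(0)
params = (rng.normal(size=(8, 16)), np.zeros(16),
          rng.normal(size=(16, 1)), np.zeros(1))
X = rng.normal(size=(5, 8))
kappa = ntk_condition_number(params, X)
print(kappa)
```

A lower condition number is used as a proxy for better trainability, so architectures can be ranked without any gradient descent.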

Highlights:

* **SOTA**: TE-NAS achieves extremely fast search speed (one 1080Ti: 20 minutes on the NAS-Bench-201 space, four hours on the DARTS space on ImageNet) while maintaining competitive accuracy.
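The paper's theory-grounded metrics also include the expressivity of a ReLU network, measured by how many linear regions it carves the input space into. A toy numpy sketch of the idea, counting distinct first-layer activation patterns over a batch (illustrative only; `num_activation_patterns` is a made-up helper, not part of this repo):

```python
import numpy as np

def num_activation_patterns(W1, b1, X):
    """Count distinct ReLU activation patterns over a batch: a simple
    proxy for how many linear regions the first layer induces."""
    pre = X @ W1 + b1                      # pre-activations, shape (N, hidden)
    patterns = (pre > 0).astype(np.uint8)  # one binary pattern per sample
    return len({p.tobytes() for p in patterns})

rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 32))
b1 = rng.normal(size=32)
X = rng.normal(size=(64, 8))
n = num_activation_patterns(W1, b1, X)
print(n)
```

More distinct patterns across a batch suggests higher expressivity, giving a second training-free signal to rank architectures by.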

## Prerequisites

This repository has been tested on a GTX 1080Ti. Configurations may need to be changed for other platforms.

## Installation

```bash
git clone https://github.com/chenwydj/TEGNAS.git
cd TEGNAS
pip install -r requirements.txt
```

## Usage

### 0. Prepare the dataset

### 1. Search

#### NAS-Bench-201 Space

**Reinforcement Learning**

```bash
python reinforce_launch.py --space nas-bench-201 --dataset cifar10 --gpu 0
python reinforce_launch.py --space nas-bench-201 --dataset cifar100 --gpu 0
python reinforce_launch.py --space nas-bench-201 --dataset ImageNet16-120 --gpu 0
```
**Evolution**

```bash
python evolution_launch.py --space nas-bench-201 --dataset cifar10 --gpu 0
python evolution_launch.py --space nas-bench-201 --dataset cifar100 --gpu 0
python evolution_launch.py --space nas-bench-201 --dataset ImageNet16-120 --gpu 0
```

#### DARTS Space (NASNet)

**Reinforcement Learning**

```bash
python reinforce_launch.py --space darts --dataset cifar10 --gpu 0
python reinforce_launch.py --space darts --dataset imagenet-1k --gpu 0
```
**Evolution**

```bash
python evolution_launch.py --space darts --dataset cifar10 --gpu 0
python evolution_launch.py --space darts --dataset imagenet-1k --gpu 0
```
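For intuition, the evolution launcher above can be thought of as a loop that mutates candidate architectures and keeps those with the best training-free score. A minimal, self-contained sketch of such a loop, where `training_free_score` is a toy stand-in (the real code scores architectures with the NTK / linear-region metrics) and architectures are simplified to lists of operation indices:

```python
import random

def training_free_score(arch):
    # Toy stand-in for a training-free metric: peaks when every
    # slot picks operation 2. Not the repo's actual scoring.
    return -sum((g - 2) ** 2 for g in arch)

def mutate(arch, num_ops=5):
    # Flip one randomly chosen operation to a random alternative.
    child = list(arch)
    i = random.randrange(len(child))
    child[i] = random.randrange(num_ops)
    return child

def evolution_search(arch_len=6, num_ops=5, pop_size=16, iters=200, seed=0):
    random.seed(seed)
    pop = [[random.randrange(num_ops) for _ in range(arch_len)]
           for _ in range(pop_size)]
    pop = [(training_free_score(a), a) for a in pop]
    for _ in range(iters):
        # Tournament selection: mutate the best of a random sample,
        # then replace the worst member of the population.
        parent = max(random.sample(pop, 4))[1]
        child = mutate(parent, num_ops)
        pop.remove(min(pop))
        pop.append((training_free_score(child), child))
    return max(pop)

best_score, best_arch = evolution_search()
print(best_score, best_arch)
```

Because the score needs no gradient descent, each candidate is evaluated in seconds, which is what makes the overall search so fast.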

### 2. Evaluation

## Citation

```bibtex
@inproceedings{chen2021tegnas,
  title={Understanding and Accelerating Neural Architecture Search with Training-Free and Theory-Grounded Metrics},
  author={Chen, Wuyang and Gong, Xinyu and Wei, Yunchao and Shi, Humphrey and Yan, Zhicheng and Yang, Yi and Wang, Zhangyang},
  booktitle={International Conference on Learning Representations},
  year={2021}
}
```

## Acknowledgement