
Neural Architecture Search on ImageNet in Four GPU Hours: A Theoretically Inspired Perspective [PDF]


Wuyang Chen, Xinyu Gong, Zhangyang Wang

In ICLR 2021.

Overview


We present TE-NAS, the first published training-free neural architecture search method. It achieves extremely fast search (no gradient descent at all!) while delivering high-quality performance.
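TE-NAS scores candidate architectures at initialization using theory-inspired indicators; one of them is the condition number of the empirical Neural Tangent Kernel (NTK), which relates to an architecture's trainability. The sketch below is a minimal illustration of that idea on a toy MLP, not the repository's implementation; the function name, network, and random batch are all illustrative:

```python
import torch
import torch.nn as nn

def ntk_condition_number(net, inputs):
    """Condition number of the empirical NTK at initialization.

    NTK[i, j] is the inner product of the per-sample gradients of the
    network output with respect to all parameters.
    """
    grads = []
    for x in inputs:
        out = net(x.unsqueeze(0)).sum()
        g = torch.autograd.grad(out, net.parameters())
        grads.append(torch.cat([p.reshape(-1) for p in g]))
    jacobian = torch.stack(grads)            # (N, num_params)
    ntk = jacobian @ jacobian.t()            # (N, N) empirical NTK
    eigvals = torch.linalg.eigvalsh(ntk)     # ascending order
    return (eigvals[-1] / eigvals[0]).item()

# Toy example: score a small MLP on a few random samples.
torch.manual_seed(0)
net = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
x = torch.randn(4, 8)
print(ntk_condition_number(net, x))
```

In the paper, a smaller condition number indicates a more trainable architecture, so candidates with ill-conditioned NTKs can be pruned without any training.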


Prerequisites

This repository has been tested on a GTX 1080Ti GPU; configurations may need to be adjusted for other platforms.

Installation

```bash
git clone https://github.com/chenwydj/TENAS.git
cd TENAS
pip install -r requirements.txt
```

Usage

0. Prepare the dataset

1. Search

NAS-Bench-201 Space

```bash
python prune_launch.py --space nas-bench-201 --dataset cifar10 --gpu 0
python prune_launch.py --space nas-bench-201 --dataset cifar100 --gpu 0
python prune_launch.py --space nas-bench-201 --dataset ImageNet16-120 --gpu 0
```

DARTS Space (NASNET)

```bash
python prune_launch.py --space darts --dataset cifar10 --gpu 0
python prune_launch.py --space darts --dataset imagenet-1k --gpu 0
```
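The searches above rank operators with training-free indicators; alongside the NTK condition number, TE-NAS uses the number of linear regions of the ReLU network as an expressivity proxy. A simple way to approximate it is to count distinct ReLU activation patterns over random inputs. The sketch below is an illustrative toy version, not the repository's implementation:

```python
import torch
import torch.nn as nn

def count_activation_patterns(net, inputs):
    """Approximate expressivity by counting distinct ReLU activation
    patterns hit by a batch of random inputs: each distinct pattern
    corresponds to a different linear region of the network."""
    patterns = set()
    with torch.no_grad():
        h = inputs
        signs = []
        for layer in net:
            h = layer(h)
            if isinstance(layer, nn.ReLU):
                signs.append((h > 0).int())
        codes = torch.cat(signs, dim=1)  # (N, total_relu_units)
        for row in codes:
            patterns.add(tuple(row.tolist()))
    return len(patterns)

# Toy example: sample 1000 points and count the regions they land in.
torch.manual_seed(0)
net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(),
                    nn.Linear(32, 32), nn.ReLU())
x = torch.rand(1000, 2)
print(count_activation_patterns(net, x))
```

More distinct patterns under the same input budget suggests a more expressive architecture; TE-NAS trades this off against the NTK-based trainability indicator when pruning operators.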

2. Evaluation

Citation

```bibtex
@inproceedings{chen2020tenas,
  title={Neural Architecture Search on ImageNet in Four GPU Hours: A Theoretically Inspired Perspective},
  author={Chen, Wuyang and Gong, Xinyu and Wang, Zhangyang},
  booktitle={International Conference on Learning Representations},
  year={2021}
}
```

Acknowledgement