# Understanding and Accelerating Neural Architecture Search with Training-Free and Theory-Grounded Metrics [[PDF](https://arxiv.org/pdf/2108.11939.pdf)]
Wuyang Chen*, Xinyu Gong*, Yunchao Wei, Humphrey Shi, Zhicheng Yan, Yi Yang, and Zhangyang Wang
## Note
- This repo is still under development. Scripts are executable but some CUDA errors may occur.
- Due to IP issues, we can only release the code for NAS via reinforcement learning and evolution, but not FP-NAS.
## Overview
We present TEG-NAS, a generalized training-free neural architecture search method that significantly reduces the time cost of popular search methods (no gradient descent at all!) while delivering high-quality performance.
Highlights:
- Training-free NAS: we incorporate our TEG-NAS method into three popular NAS frameworks (Reinforcement Learning, Evolution, Differentiable) and achieve extremely fast neural architecture search without a single gradient descent step.
- Bridging the theory-application gap: we identify three training-free indicators to rank the quality of deep networks: the condition number of their NTKs ("Trainability"), the number of linear regions in their input space ("Expressivity"), and the error of NTK kernel regression ("Generalization").
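As a concrete illustration of the "Trainability" indicator, the sketch below computes the condition number of the empirical NTK for a toy MLP on random inputs. This is a minimal sketch rather than the implementation used in this repo; the network, batch size, and the `empirical_ntk` helper are illustrative assumptions.

```python
# Minimal sketch (not the repo's implementation): condition number of the
# empirical NTK, Theta[i, j] = <grad_w f(x_i), grad_w f(x_j)>, for a toy MLP.
import torch
import torch.nn as nn

def empirical_ntk(net, x):
    grads = []
    for i in range(x.size(0)):
        out = net(x[i:i + 1]).sum()                      # scalar output per sample
        g = torch.autograd.grad(out, tuple(net.parameters()))
        grads.append(torch.cat([p.reshape(-1) for p in g]))
    jac = torch.stack(grads)                             # (batch, num_params)
    return jac @ jac.t()                                 # (batch, batch) NTK matrix

net = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))
x = torch.randn(8, 16)
eigvals = torch.linalg.eigvalsh(empirical_ntk(net, x))   # ascending eigenvalues
print("NTK condition number:", (eigvals[-1] / eigvals[0]).item())
```

A smaller condition number indicates a better-conditioned (more trainable) network; the same idea extends to the architectures sampled during the search.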
## Prerequisites
- Ubuntu 16.04
- Python 3.6.9
- CUDA 11.0 (lower versions may work but were not tested)
- NVIDIA GPU + CuDNN v7.6
This repository has been tested on GTX 1080Ti. Configurations may need to be changed on different platforms.
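Assuming PyTorch is among the dependencies in `requirements.txt`, a quick sanity check like the one below (not part of the repo) can confirm the GPU and CUDA toolkit are visible before launching a search:

```python
# Quick environment check before running a search (illustrative, not part of the repo).
import torch

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("CUDA version:", torch.version.cuda)
    print("Device:", torch.cuda.get_device_name(0))
```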
## Installation
- Clone this repo:
git clone https://github.com/chenwydj/TEGNAS.git
cd TEGNAS
- Install dependencies:
pip install -r requirements.txt
## Usage
### 0. Prepare the dataset
- Please follow the guideline here to prepare the CIFAR-10/100 and ImageNet datasets, as well as the NAS-Bench-201 database.
- Remember to properly set `TORCH_HOME` and `data_paths` in `prune_launch.py`, as illustrated below.
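As a rough illustration, the settings may look like the following. The variable names come from the note above, but the directory layout is a placeholder; check `prune_launch.py` for the exact structure it expects.

```python
# Hypothetical example of the settings mentioned above; replace the paths with
# your own. See prune_launch.py for the actual variable structure.
TORCH_HOME = "/path/to/torch_home"          # e.g. where the NAS-Bench-201 database is stored
data_paths = {
    "cifar10": "/path/to/cifar10",
    "cifar100": "/path/to/cifar100",
    "ImageNet16-120": "/path/to/ImageNet16-120",
    "imagenet-1k": "/path/to/imagenet",
}
```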
### 1. Search
#### NAS-Bench-201 Space
Reinforcement Learning
python reinforce_launch.py --space nas-bench-201 --dataset cifar10 --gpu 0
python reinforce_launch.py --space nas-bench-201 --dataset cifar100 --gpu 0
python reinforce_launch.py --space nas-bench-201 --dataset ImageNet16-120 --gpu 0
Evolution
python evolution_launch.py --space nas-bench-201 --dataset cifar10 --gpu 0
python evolution_launch.py --space nas-bench-201 --dataset cifar100 --gpu 0
python evolution_launch.py --space nas-bench-201 --dataset ImageNet16-120 --gpu 0
#### DARTS Space (NASNET)
Reinforcement Learning
python reinforce_launch.py --space darts --dataset cifar10 --gpu 0
python reinforce_launch.py --space darts --dataset imagenet-1k --gpu 0
Evolution
python evolution_launch.py --space darts --dataset cifar10 --gpu 0
python evolution_launch.py --space darts --dataset imagenet-1k --gpu 0
### 2. Evaluation
- For architectures searched on `nas-bench-201`, the accuracies are immediately available at the end of the search (from the console output).
- For architectures searched on `darts`, please use DARTS_evaluation to train the searched architecture from scratch and evaluate it. Genotypes of our searched architectures are listed in `genotypes.py`.
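For reference, DARTS-space architectures are conventionally recorded with the standard DARTS `Genotype` namedtuple. The sketch below shows that format with a hypothetical cell; the actual searched cells are the ones listed in `genotypes.py`.

```python
# Standard DARTS genotype format (hypothetical cell, not one of our searched results).
from collections import namedtuple

Genotype = namedtuple('Genotype', 'normal normal_concat reduce reduce_concat')

# Each entry is (operation name, index of the input node); *_concat lists the
# intermediate nodes whose outputs are concatenated to form the cell output.
example = Genotype(
    normal=[('sep_conv_3x3', 0), ('sep_conv_3x3', 1),
            ('skip_connect', 0), ('sep_conv_3x3', 1),
            ('sep_conv_3x3', 1), ('skip_connect', 0),
            ('skip_connect', 0), ('dil_conv_3x3', 2)],
    normal_concat=[2, 3, 4, 5],
    reduce=[('max_pool_3x3', 0), ('max_pool_3x3', 1),
            ('skip_connect', 2), ('max_pool_3x3', 1),
            ('max_pool_3x3', 0), ('skip_connect', 2),
            ('skip_connect', 2), ('max_pool_3x3', 1)],
    reduce_concat=[2, 3, 4, 5],
)
```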
## Citation
@inproceedings{chen2021tegnas,
title={Understanding and Accelerating Neural Architecture Search with Training-Free and Theory-Grounded Metrics},
author={Chen, Wuyang and Gong, Xinyu and Wei, Yunchao and Shi, Humphrey and Yan, Zhicheng and Yang, Yi and Wang, Zhangyang},
year={2021}
}
## Acknowledgement
- Code base from NAS-Bench-201.