NAS-Bench-Macro
This repository includes the benchmark and code for NAS-Bench-Macro from the paper "Prioritized Architecture Sampling with Monto-Carlo Tree Search", CVPR 2021.
NAS-Bench-Macro is a NAS benchmark on a macro search space. It consists of 6561 networks together with their test accuracies, parameter counts, and FLOPs on the CIFAR-10 dataset.
Each architecture in NAS-Bench-Macro is trained from scratch in isolation.
Benchmark
All the evaluated architectures are stored in the file nas-bench-macro_cifar10.json
with the following format:
{
  arch1: {
    test_acc: [float, float, float], // test accuracies of three independent training runs
    mean_acc: float,                 // mean of the three test accuracies
    std: float,                      // standard deviation of the test accuracies
    params: int,                     // number of parameters
    flops: int,                      // FLOPs
  },
  arch2: ......
}
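Once the file is downloaded, the benchmark can be queried with a few lines of standard-library Python. The snippet below is a minimal sketch that parses a small sample in the schema above (the real file holds all 6561 entries) and picks the architecture with the highest mean accuracy; the sample architecture strings and values here are illustrative, not taken from the benchmark.

```python
import json
from statistics import mean

# A minimal sample in the schema described above; in practice you would
# load the real file with json.load(open("nas-bench-macro_cifar10.json")).
sample = json.loads("""
{
  "01210122": {"test_acc": [92.1, 92.3, 92.0], "mean_acc": 92.13, "std": 0.12,
               "params": 1234567, "flops": 87654321},
  "00000000": {"test_acc": [88.0, 88.2, 87.9], "mean_acc": 88.03, "std": 0.12,
               "params": 100000, "flops": 5000000}
}
""")

# Find the architecture with the highest mean accuracy.
best_arch = max(sample, key=lambda a: sample[a]["mean_acc"])
print(best_arch)                                          # "01210122"
print(round(mean(sample[best_arch]["test_acc"]), 2))      # 92.13
```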
Search Space
The search space of NAS-Bench-Macro consists of 8 searchable layers; each layer selects one of 3 candidate blocks, denoted Identity, MB3_K3, and MB6_K5.
- Identity: identity connection (encoded as '0')
- MB3_K3: MobileNetV2 block with kernel size 3 and expansion ratio 3
- MB6_K5: MobileNetV2 block with kernel size 5 and expansion ratio 6
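With 3 candidates per layer over 8 layers, the space contains 3^8 = 6561 architectures, each encoded as an 8-digit string. A minimal decoder sketch follows; note that only the '0' = Identity mapping is stated above, so mapping '1' to MB3_K3 and '2' to MB6_K5 is an assumption based on the listed candidate order.

```python
# Assumed mapping: only '0' -> Identity is stated in this README;
# '1' -> MB3_K3 and '2' -> MB6_K5 follow the candidate order above.
BLOCKS = {"0": "Identity", "1": "MB3_K3", "2": "MB6_K5"}

def decode(arch: str) -> list:
    """Map an 8-digit architecture string to its per-layer block names."""
    assert len(arch) == 8 and set(arch) <= set(BLOCKS)
    return [BLOCKS[d] for d in arch]

print(decode("01210122"))
# 3 choices per layer over 8 layers gives the full benchmark size:
print(3 ** 8)  # 6561
```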
Network structure
Statistics
Visualization of the best architecture
Histograms
Reproduce the Results
Requirements
torch>=1.0.1
torchvision
Training scripts
cd train
python train_benchmark.py
The test result of each architecture will be stored in train/bench-cifar10/<arch>.txt
After all the architectures are trained, you can collect the results into a final benchmark file:
python collect_benchmark.py
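The collection step can be sketched as follows. This is not the repository's collect_benchmark.py; it is a hypothetical illustration that assumes each <arch>.txt file holds one test accuracy per line (the actual per-file format is not specified in this README), and it writes only the accuracy fields, not params or FLOPs.

```python
import json
import statistics
import tempfile
from pathlib import Path

def collect(result_dir: Path) -> dict:
    """Aggregate per-architecture result files into one benchmark dict.

    Assumes each <arch>.txt contains one test accuracy per line.
    """
    bench = {}
    for f in sorted(result_dir.glob("*.txt")):
        accs = [float(line) for line in f.read_text().split()]
        bench[f.stem] = {
            "test_acc": accs,
            "mean_acc": round(statistics.mean(accs), 2),
            "std": round(statistics.pstdev(accs), 2),
        }
    return bench

# Demo on a temporary directory with one fake result file.
with tempfile.TemporaryDirectory() as d:
    Path(d, "01210122.txt").write_text("92.1\n92.3\n92.0\n")
    bench = collect(Path(d))
    print(json.dumps(bench, indent=2))
```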
Citation
If you find NAS-Bench-Macro helpful for your research, please consider citing it:
@inproceedings{su2021prioritized,
  title={Prioritized Architecture Sampling with Monto-Carlo Tree Search},
  author={Su, Xiu and Huang, Tao and Li, Yanxi and You, Shan and Wang, Fei and Qian, Chen and Zhang, Changshui and Xu, Chang},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={10968--10977},
  year={2021}
}