NASOA

This repository accompanies our paper "Faster Task-oriented Online Fine-tuning" and contains the Efficient Training (ET-NAS) model zoo for fast training and fine-tuning.

Faster Task-oriented Online Fine-tuning

We explore how to adapt efficient neural networks to various downstream tasks online.

(Figure: overview)

Efficient Training (ET-NAS) Model-Zoo

We provide our state-of-the-art Efficient Training model zoo for fast training and fine-tuning, reducing computational cost.

(Figure: comparison)

Detailed performance of our ET-NAS models:

| Model Name | MParam | GMac | Top-1 | Inference Time (ms) | Training Step Time (ms) | Checkpoints |
| --- | --- | --- | --- | --- | --- | --- |
| ET-NAS-A | 2.6 | 0.23 | 62.06 | 5.30 | 14.74 | Google Drive |
| ET-NAS-B | 3.9 | 0.39 | 66.92 | 5.92 | 15.78 | Google Drive |
| ET-NAS-C | 7.1 | 0.58 | 71.29 | 8.94 | 26.28 | Google Drive |
| ET-NAS-D | 15.2 | 1.55 | 74.46 | 14.54 | 36.30 | Google Drive |
| ET-NAS-E | 21.4 | 2.61 | 76.87 | 25.34 | 61.95 | Google Drive |
| ET-NAS-F | 28.4 | 2.31 | 78.80 | 33.83 | 93.04 | Google Drive |
| ET-NAS-G | 49.3 | 5.68 | 80.41 | 53.08 | 133.97 | Google Drive |
| ET-NAS-H | 44.0 | 5.33 | 80.92 | 76.80 | 193.40 | Google Drive |
| ET-NAS-I | 72.4 | 13.13 | 81.38 | 94.60 | 265.13 | Google Drive |
| ET-NAS-J | 103.0 | 18.16 | 82.08 | 131.92 | 370.28 | Google Drive |
| ET-NAS-K | 87.3 | 27.51 | 82.42 | 185.75 | 505.00 | Google Drive |
| ET-NAS-L | 130.4 | 23.46 | 82.65 | 191.89 | 542.52 | Google Drive |
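When choosing a variant for a deployment latency budget, the table can be queried directly. A small illustrative helper (not part of the repo; the numbers are copied from the table above):

```python
# (model, top-1 accuracy %, inference time ms) copied from the table above
ET_NAS = [
    ("ET-NAS-A", 62.06, 5.30),   ("ET-NAS-B", 66.92, 5.92),
    ("ET-NAS-C", 71.29, 8.94),   ("ET-NAS-D", 74.46, 14.54),
    ("ET-NAS-E", 76.87, 25.34),  ("ET-NAS-F", 78.80, 33.83),
    ("ET-NAS-G", 80.41, 53.08),  ("ET-NAS-H", 80.92, 76.80),
    ("ET-NAS-I", 81.38, 94.60),  ("ET-NAS-J", 82.08, 131.92),
    ("ET-NAS-K", 82.42, 185.75), ("ET-NAS-L", 82.65, 191.89),
]

def best_under_budget(budget_ms):
    """Return the highest top-1 variant whose inference time fits the budget."""
    fits = [m for m in ET_NAS if m[2] <= budget_ms]
    return max(fits, key=lambda m: m[1]) if fits else None

print(best_under_budget(50.0))  # ET-NAS-F is the most accurate model under 50 ms
```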

How to use ET-NAS

Use ET-NAS in your own project

import torch

from etnas import ETNas, MODEL_MAPPINGS

model_name = "ET-NAS-A"

# construct an ET-NAS network
network = ETNas(MODEL_MAPPINGS[model_name])

# load pre-trained model weights
network.load_state_dict(torch.load("{}.pth".format(MODEL_MAPPINGS[model_name])))

You can download all pre-trained models directly from: Google Drive.
After unzipping the archive, the file structure should look like:

ET-NAS
├─ET-NAS-A
│      2-_32_2-11-112-1121112.pth
├─ET-NAS-B
│      031-_32_1-1-221-11121.pth
├─ET-NAS-C
│      011-_32_2-211-2-111122.pth
├─ET-NAS-D
│      031-_64_1-1-221-11121.pth
├─ET-NAS-E
│      10001-_64_4-111-11122-1111111111111112.pth
├─ET-NAS-F
│      011-_64_21-211-121-11111121.pth
├─ET-NAS-G
│      10001-_64_4-111111111-211112111112-11111.pth
├─ET-NAS-H
│      211-_64_41-211-121-11111121.pth
├─ET-NAS-I
│      02031-a02_64_111-2111-21111111111111111111111-211.pth
├─ET-NAS-J
│      211-_64_411-2111-21111111111111111111111-211.pth
├─ET-NAS-K
│      02031-a02_64_1121-111111111111111111111111111-21111111211111-1.pth
└─ET-NAS-L
       23311-a02c12_64_211-2111-21111111111111111111111-211.pth

where each checkpoint file is named with the corresponding architecture encoding and saved into a folder with the series name.
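Because each series folder holds exactly one checkpoint named by its architecture encoding, the path can be resolved without hard-coding the encodings. A small helper sketch (hypothetical, not part of the repo), assuming the unzipped layout shown above:

```python
from pathlib import Path

def find_checkpoint(model_zoo_dir, series):
    """Return the single .pth checkpoint inside model_zoo_dir/series/.

    Assumes the layout above: one <encoding>.pth file per series folder.
    """
    folder = Path(model_zoo_dir) / series
    candidates = sorted(folder.glob("*.pth"))
    if len(candidates) != 1:
        raise FileNotFoundError(
            f"expected exactly one .pth file in {folder}, found {len(candidates)}"
        )
    return candidates[0]
```

The returned path can then be passed to `torch.load` instead of formatting the filename by hand.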

Example: running the test on ImageNet

python test.py \
       --model-name ET-NAS-A \
       --data-dir imagenet \
       --model-zoo-dir model_zoo \
       --batch-size 256 \
       --num-workers 4 \
       --device cuda:0

where:

--model-name: the ET-NAS variant to evaluate
--data-dir: path to the dataset (ImageNet in this example)
--model-zoo-dir: path to the pre-trained models (model_zoo in this example)
--batch-size: batch size on each card
--num-workers: number of data-loading workers
--device: device to run the network on; set "cuda" for GPUs and "cpu" for CPU
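To evaluate the whole zoo in one go, the command can be wrapped in a loop. A sketch (the `echo` makes it a dry run printing each command; remove it to actually launch the evaluations):

```shell
for m in ET-NAS-A ET-NAS-B ET-NAS-C ET-NAS-D ET-NAS-E ET-NAS-F \
         ET-NAS-G ET-NAS-H ET-NAS-I ET-NAS-J ET-NAS-K ET-NAS-L; do
  # dry run: print the command for each variant instead of executing it
  echo python test.py \
       --model-name "$m" \
       --data-dir imagenet \
       --model-zoo-dir model_zoo \
       --batch-size 256 \
       --num-workers 4 \
       --device cuda:0
done
```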