This repo is an Ascend NPU implementation forked from SPACH.
To learn more about Ascend NPU, see the Ascend documentation.

This repository contains PyTorch evaluation code, training code, and pretrained models for the following projects:

SPACH: A Battle of Network Structures: An Empirical Study of CNN, Transformer, and MLP
sMLP: Sparse MLP for Image Recognition: Is Self-Attention Really Necessary?

Other unofficial implementations:

Main Results on ImageNet with Pretrained Models

name                 acc@1 (%)  #params  FLOPs   url
SPACH-Conv-MS-S      81.6       44M      7.2G    github
SPACH-Trans-MS-S     82.9       40M      7.6G    github
SPACH-MLP-MS-S       82.1       46M      8.2G    github
SPACH-Hybrid-MS-S    83.7       63M      11.2G   github
SPACH-Hybrid-MS-S+   83.9       63M      12.3G   github
sMLPNet-T            81.9       24M      5.0G    -
sMLPNet-S            83.1       49M      10.3G   github
sMLPNet-B            83.4       66M      14.0G   github
Shift-T / light      79.4       20M      3.0G    github
Shift-T              81.7       29M      4.5G    github
Shift-S / light      81.6       34M      5.7G    github
Shift-S              82.8       50M      8.8G    github

Usage

NPU

NPU stands for neural-network processing unit; this fork targets Ascend NPUs.

To enable NPU training, pass the parser argument:

--npu
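
For reference, a single-process run on one NPU might look like the sketch below. This is only an illustration: the flag names follow the DeiT-style main.py this repo builds on, and the model name is hypothetical, so check the argument definitions in main.py for the exact options.

# hedged sketch of a direct main.py call on one NPU
# --model value is hypothetical; use a model name registered in this repo
python3 main.py --npu \
    --model spach_ms_conv_s \
    --data-path /path/to/imagenet \
    --output_dir ./output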

Install

First, clone the repo and install requirements:

git clone https://github.com/Leoooo333/SPACH-1
cd SPACH-1
pip3 install torchvision==0.6.0
pip3 install einops==0.4.1
pip3 install --no-deps timm==0.4.5

# other recommended requirements (install the Ascend builds):
#   apex==0.1+ascend.20220315
#   torch==1.5.0+ascend.post5.20220315

# set up the NPU environment variables before running
source ./test/env_npu.sh
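
After installation, a quick check can confirm that the Ascend build of PyTorch sees the NPU devices. This is a hedged sketch; it assumes the Ascend-adapted torch build exposes a torch.npu namespace.

# optional sanity check (assumes torch.npu is provided by the Ascend build)
python3 -c "import torch; print(torch.npu.is_available())"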

Data preparation

Download and extract the ImageNet train and val images from http://image-net.org/. The directory structure is the standard layout expected by torchvision's datasets.ImageFolder: training images go in the train/ folder and validation images in the val/ folder, with one subfolder per class:

/path/to/imagenet/
  train/
    class1/
      img1.jpeg
    class2/
      img2.jpeg
  val/
    class1/
      img3.jpeg
    class2/
      img4.jpeg
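
As a quick sanity check that the layout matches what ImageFolder expects, you can count the class folders. This is a hedged example assuming a standard ImageNet-1k extraction at /path/to/imagenet:

# expect 1000 class folders in each split for ImageNet-1k
ls /path/to/imagenet/train | wc -l
ls /path/to/imagenet/val | wc -l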

Training & Evaluation

One can simply call the following scripts to run the training process. Distributed training is recommended even on a single node. The 1p and 8p scripts run on one and eight NPU devices, respectively. A filled-in example follows the script list below.

To train a model on NPU, run main.py (via the scripts below) with the desired model architecture and the path to the ImageNet dataset:

# training 1p accuracy
bash ./test/train_full_1p.sh --model=model_name \
--data_path=real_data_path

# training 1p performance
bash ./test/train_performance_1p.sh --model=model_name \
--data_path=real_data_path

# training 8p accuracy
bash ./test/train_full_8p.sh --model=model_name \
--data_path=real_data_path

# training 8p performance
bash ./test/train_performance_8p.sh --model=model_name \
--data_path=real_data_path

# evaluate 8p accuracy
bash ./test/train_eval_8p.sh --model=model_name \
--data_path=real_data_path \
--resume=checkpoint_path
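
For example, an 8-card accuracy run could look like the following. This is a hedged example: the model name is hypothetical and the data path is a placeholder, so substitute a model defined in this repo and your real ImageNet path.

# hedged example: 8p accuracy run (model name is hypothetical)
bash ./test/train_full_8p.sh --model=smlpnet_tiny --data_path=/data/imagenet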

Citation

@article{zhao2021battle,
  title={A Battle of Network Structures: An Empirical Study of CNN, Transformer, and MLP},
  author={Zhao, Yucheng and Wang, Guangting and Tang, Chuanxin and Luo, Chong and Zeng, Wenjun and Zha, Zheng-Jun},
  journal={arXiv preprint arXiv:2108.13002},
  year={2021}
}

@article{tang2021sparse,
  title={Sparse MLP for Image Recognition: Is Self-Attention Really Necessary?},
  author={Tang, Chuanxin and Zhao, Yucheng and Wang, Guangting and Luo, Chong and Xie, Wenxuan and Zeng, Wenjun},
  journal={arXiv preprint arXiv:2109.05422},
  year={2021}
}

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

Acknowledgement

Our code is built on top of DeiT. We test throughput following Swin Transformer.