AccPar

Partition the tensors in each layer of a deep neural network across multiple accelerators.

To compile:

make

To print the partitioning results for a network (AlexNet here):

./accpar ./networks/Alexnet.txt -1

If you find this code useful in your research, please cite:

@inproceedings{song2020accpar,
title={{AccPar}: Tensor Partitioning for Heterogeneous Deep Learning Accelerators},
author={Song, Linghao and Chen, Fan and Zhuo, Youwei and Qian, Xuehai and Li, Hai and Chen, Yiran},
booktitle={2020 IEEE International Symposium on High Performance Computer Architecture (HPCA)},
pages={342--355},
year={2020},
organization={IEEE}
}

@inproceedings{song2019hypar,
title={{HyPar}: Towards Hybrid Parallelism for Deep Learning Accelerator Array},
author={Song, Linghao and Mao, Jiachen and Zhuo, Youwei and Qian, Xuehai and Li, Hai and Chen, Yiran},
booktitle={2019 IEEE International Symposium on High Performance Computer Architecture (HPCA)},
pages={56--68},
year={2019},
organization={IEEE}
}