
<p align="center"> <img width="500px" src="https://user-images.githubusercontent.com/18649508/141055752-2f429618-2707-4feb-abe3-74767c2b1976.gif"/> </p> <hr/> <div align="center"> <a href="https://tutorials-8ro.pages.dev/"><img src="https://img.shields.io/badge/tutorials-passing-green" alt="tutorials"></a> <a href="http://bagua.readthedocs.io/?badge=latest"><img src="https://readthedocs.org/projects/bagua/badge/?version=latest" alt="Documentation Status"></a> <a href="https://pypi.org/project/bagua/"><img src="https://pepy.tech/badge/bagua/month" alt="Downloads"></a> <a href="https://hub.docker.com/r/baguasys/bagua"><img src="https://img.shields.io/docker/pulls/baguasys/bagua" alt="Docker Pulls"></a> <a href="https://hub.docker.com/r/baguasys/bagua"><img src="https://img.shields.io/docker/cloud/build/baguasys/bagua" alt="Docker Cloud Build Status"></a> <a href="https://github.com/BaguaSys/bagua/blob/master/LICENSE"><img src="https://img.shields.io/github/license/BaguaSys/bagua" alt="GitHub license"></a> </div> <br/>

WARNING: THIS PROJECT IS CURRENTLY NOT MAINTAINED, DUE TO COMPANY REORGANIZATION.

Bagua is a deep learning training acceleration framework for PyTorch, developed by the AI Platform at Kuaishou Technology and the DS3 Lab at ETH Zürich. Bagua currently supports advanced distributed training algorithms (centralized and decentralized, synchronous and asynchronous, with full- or low-precision communication), TCP communication acceleration (Bagua-Net), performance autotuning, a generic fused optimizer, and a load-balanced data loader.

Its effectiveness has been evaluated in various scenarios, including VGG and ResNet on ImageNet, BERT-Large, and many industrial applications at Kuaishou.
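As a quick illustration of the programming model, converting an existing PyTorch training script to Bagua takes only a few lines. The sketch below is based on the bagua.torch_api interface described in the Bagua tutorials; MyNet is a hypothetical stand-in for your own torch.nn.Module, and exact module paths should be checked against the version you install:

import torch
import bagua.torch_api as bagua
from bagua.torch_api.algorithms import gradient_allreduce

# One process is launched per GPU; Bagua's launcher sets rank information.
torch.cuda.set_device(bagua.get_local_rank())
bagua.init_process_group()

model = MyNet().cuda()  # MyNet: hypothetical stand-in for your own model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Wrap model and optimizer with a Bagua algorithm, e.g. gradient allreduce.
model = model.with_bagua(
    [optimizer], gradient_allreduce.GradientAllReduceAlgorithm()
)

The script is then started with Bagua's launcher, e.g. python -m bagua.distributed.launch --nproc_per_node=8 train.py; see the tutorials for the launcher options of the version you use.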

Links

Bagua Main Git Repo: https://github.com/BaguaSys/bagua
Bagua Tutorials: https://tutorials-8ro.pages.dev/
Bagua API Documentation: http://bagua.readthedocs.io/

Performance

<p align="center"> <img src="https://tutorials-8ro.pages.dev/benchmark/figures/e2e_vgg16_128.png" width="600"/> </p> <p align="center"> The performance of different systems and algorithms on VGG16 with 128 GPUs under different network bandwidth. </p> <br/> <br/> <p align="center"> <img src="https://tutorials-8ro.pages.dev/benchmark/figures/tradeoff_network_bert-large-bandwidth.png" width="250"/><img src="https://tutorials-8ro.pages.dev/benchmark/figures/tradeoff_network_bert-large-latency.png" width="250"/><img src="https://tutorials-8ro.pages.dev/benchmark/figures/tradeoff_network_legend.png" width="260"/> </p> <p align="center"> Epoch time of BERT-Large Finetune under different network conditions for different systems. </p>

For more comprehensive and up-to-date results, refer to the Bagua benchmark page.

Installation

Wheels (precompiled binary packages) are available for Linux (x86_64). The package name differs depending on your CUDA Toolkit version (shown by nvcc --version).

CUDA Toolkit version    Installation command
>= v10.2                pip install bagua-cuda102
>= v11.1                pip install bagua-cuda111
>= v11.3                pip install bagua-cuda113
>= v11.5                pip install bagua-cuda115
>= v11.6                pip install bagua-cuda116

Add --pre to the pip install command to install pre-release (development) versions. See the Bagua tutorials for a quick-start guide and more installation options.
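After installation, a quick sanity check can confirm that the wheel matches your environment. The snippet below assumes the package exposes a __version__ attribute, as most PyPI packages do:

# Sanity check: import the package and confirm CUDA is visible to PyTorch.
import torch
import bagua
print(bagua.__version__)          # assumed __version__ attribute
print(torch.cuda.is_available())  # Bagua wheels are CUDA builds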

Quick Start on AWS

Thanks to Amazon Machine Images (AMI), we provide an easy way to deploy and run Bagua on AWS EC2 clusters with a flexible number of machines and a wide range of GPU types. Users can find our pre-installed Bagua images on EC2 by the unique AMI IDs that we publish here. Note that an AMI is a regional resource, so please make sure your machines are in the same region as our AMI.

Bagua version    AMI ID                   Region
0.6.3            ami-0e719d0e3e42b397e    us-east-1
0.9.0            ami-0f01fd14e9a742624    us-east-1

To manage the EC2 cluster more efficiently, we use StarCluster as a toolkit to manipulate the cluster. In the StarCluster config file, a few settings need to be filled in by the user, including AWS credentials, cluster settings, etc. More information on StarCluster configuration can be found in this tutorial.

For example, suppose we create an EC2 cluster of 4 machines, each with 8 V100 GPUs (p3.16xlarge), based on the Bagua AMI we pre-installed in the us-east-1 region. The StarCluster config file would then contain:

# region of EC2 instances, here we choose us-east-1
AWS_REGION_NAME = us-east-1
AWS_REGION_HOST = ec2.us-east-1.amazonaws.com
# AMI ID of Bagua
NODE_IMAGE_ID = ami-0e719d0e3e42b397e
# number of instances
CLUSTER_SIZE = 4
# instance type
NODE_INSTANCE_TYPE = p3.16xlarge
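After filling in the AWS credentials and the remaining cluster template fields, the cluster can be brought up from the StarCluster command line (for example, starcluster start mycluster, where mycluster is an arbitrary cluster name); see the StarCluster tutorial linked above for details.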

With the above setup, we created two identical clusters to benchmark a synthetic image classification task on Bagua and Horovod, respectively. Here is a screen recording of this experiment.

<p align="center"> <a href="https://youtu.be/G8o5HVYZJvs"><img src="https://user-images.githubusercontent.com/18649508/136463585-ba911d20-9088-48b7-ab32-fc3e465c31b8.png" width="600"/></a> </p>

Cite Bagua

% System Overview
@misc{gan2021bagua,
  title={BAGUA: Scaling up Distributed Learning with System Relaxations}, 
  author={Shaoduo Gan and Xiangru Lian and Rui Wang and Jianbin Chang and Chengjun Liu and Hongmei Shi and Shengzhuo Zhang and Xianghong Li and Tengxu Sun and Jiawei Jiang and Binhang Yuan and Sen Yang and Ji Liu and Ce Zhang},
  year={2021},
  eprint={2107.01499},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}

% Theory on System Relaxation Techniques
@book{liu2020distributed,
  title={Distributed Learning Systems with First-Order Methods: An Introduction},
  author={Liu, J. and Zhang, C.},
  isbn={9781680837018},
  series={Foundations and trends in databases},
  url={https://books.google.com/books?id=vzQmzgEACAAJ},
  year={2020},
  publisher={now publishers}
}

Contributors

<a href="https://github.com/BaguaSys/bagua/graphs/contributors"> <img src="https://contrib.rocks/image?repo=BaguaSys/bagua" /> </a>