
Energon-AI

A service framework for large-scale model inference, Energon-AI has the following characteristics:

Models trained with Colossal-AI can be transferred to Energon-AI easily. For single-device models, manual coding work is required to introduce tensor parallelism and pipeline parallelism.

Installation

Install from source

$ git clone git@github.com:hpcaitech/EnergonAI.git
$ pip install -r requirements.txt
$ pip install .

Use docker

$ docker pull hpcaitech/energon-ai:latest

Build an online OPT service in 5 minutes

  1. Download the OPT model: To launch the distributed inference service quickly, you can download the checkpoint of OPT-125M here. Details on loading other model sizes are available here.

  2. Launch an HTTP service: To launch a service, we need to provide Python scripts that describe the model type and related configurations, and then start an HTTP service. An OPT example is at EnergonAI/examples/opt.
    The entry point of the service is the bash script server.sh. The configuration of the service is in opt_config.py, which defines the model type, the checkpoint file path, the parallel strategy, and the HTTP settings. You can adapt it for your own case. For example, set the model class to opt_125M and set the correct checkpoint path as follows. Set the tensor parallel degree equal to the number of GPUs.

        model_class = opt_125M
        checkpoint = 'your_file_path'
        tp_init_size = <number of GPUs>
    

    Now, we can launch a service:

        bash server.sh
    

    Then open http://[ip]:[port]/docs in your browser and try it out!
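    Besides the interactive docs page, you can also query the service programmatically. The sketch below is a minimal Python client; the host, port, endpoint path, and payload fields ("prompt", "max_tokens") are assumptions for illustration, not the actual schema — check the /docs page generated by your running service for the real endpoint and request format.

        # Minimal client sketch using only the standard library.
        # NOTE: the endpoint path and payload fields below are assumptions;
        # consult http://[ip]:[port]/docs for the actual API schema.
        import json
        from urllib import request

        SERVER = "http://127.0.0.1:8020"  # replace with your ip:port from the config

        def generate(prompt: str, max_tokens: int = 64) -> dict:
            """Send a generation request and return the decoded JSON response."""
            payload = json.dumps({"prompt": prompt, "max_tokens": max_tokens}).encode()
            req = request.Request(
                f"{SERVER}/generation",          # hypothetical endpoint name
                data=payload,
                headers={"Content-Type": "application/json"},
            )
            with request.urlopen(req) as resp:
                return json.loads(resp.read())

    A call such as generate("Hello, my name is") would then return the model's completion once the service is up.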

Publication

You can find technical details in our blog and manuscript:

Build an online OPT service using Colossal-AI in 5 minutes

EnergonAI: An Inference System for 10-100 Billion Parameter Transformer Models

@misc{du2022energonai, 
      title={EnergonAI: An Inference System for 10-100 Billion Parameter Transformer Models}, 
      author={Jiangsu Du and Ziming Liu and Jiarui Fang and Shenggui Li and Yongbin Li and Yutong Lu and Yang You},
      year={2022},
      eprint={2209.02341},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}

Contributing

If you are interested in making your own contribution to the project, please refer to Contributing for guidance.

Thanks so much!