
<p align="center"> <img src="https://user-images.githubusercontent.com/38581401/240117836-f06199ba-c80d-413a-9cb4-5adc76316bda.png" width=90% alt="MOSEC" /> </p> <p align="center"> <a href="https://discord.gg/Jq5vxuH69W"> <img alt="discord invitation link" src="https://dcbadge.vercel.app/api/server/Jq5vxuH69W?style=flat"> </a> <a href="https://pypi.org/project/mosec/"> <img src="https://badge.fury.io/py/mosec.svg" alt="PyPI version" height="20"> </a> <a href="https://anaconda.org/conda-forge/mosec"> <img src="https://anaconda.org/conda-forge/mosec/badges/version.svg" alt="conda-forge"> </a> <a href="https://pypi.org/project/mosec"> <img src="https://img.shields.io/pypi/pyversions/mosec" alt="Python Version" /> </a> <a href="https://pepy.tech/project/mosec"> <img src="https://static.pepy.tech/badge/mosec/month" alt="PyPi monthly Downloads" height="20"> </a> <a href="https://tldrlegal.com/license/apache-license-2.0-(apache-2.0)"> <img src="https://img.shields.io/github/license/mosecorg/mosec" alt="License" height="20"> </a> <a href="https://github.com/mosecorg/mosec/actions/workflows/check.yml?query=workflow%3A%22lint+and+test%22+branch%3Amain"> <img src="https://github.com/mosecorg/mosec/actions/workflows/check.yml/badge.svg?branch=main" alt="Check status" height="20"> </a> </p> <p align="center"> <i>Model Serving made Efficient in the Cloud.</i> </p>

Introduction

<p align="center"> <img src="https://user-images.githubusercontent.com/38581401/234162688-efd74e46-4063-4624-ac32-b197e4d8e56b.png" width=70% alt="MOSEC" /> </p>

Mosec is a high-performance and flexible model serving framework for building ML model-enabled backends and microservices. It bridges the gap between the machine learning models you just trained and an efficient online service API.

Installation

Mosec requires Python 3.7 or above. Install the latest PyPI package for Linux x86_64 or macOS x86_64/ARM64 with:

pip install -U mosec
# or install with conda
conda install conda-forge::mosec

To build from the source code, install Rust and run the following command:

make package

You will get a mosec wheel file in the dist folder.
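
You can then install the built wheel locally, e.g. (the exact filename varies with the version and platform):

pip install dist/mosec-*.whl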

Usage

We demonstrate how Mosec can help you easily host a pre-trained stable diffusion model as a service. You need to install diffusers and transformers as prerequisites:

pip install --upgrade diffusers[torch] transformers

Write the server

<details> <summary>Click me for the server code with explanations.</summary>

Firstly, we import the libraries and set up a basic logger to better observe what happens.

from io import BytesIO
from typing import List

import torch  # type: ignore
from diffusers import StableDiffusionPipeline  # type: ignore

from mosec import Server, Worker, get_logger
from mosec.mixin import MsgpackMixin

logger = get_logger()

Then, we build an API for clients to query a text prompt and obtain an image based on the stable-diffusion-v1-5 model in just 3 steps.

  1. Define your service as a class that inherits mosec.Worker. Here we also inherit MsgpackMixin to employ the msgpack serialization format<sup>(a)</sup>.

  2. Inside the __init__ method, initialize your model and put it onto the corresponding device. Optionally, you can assign self.example with some data to warm up<sup>(b)</sup> the model. Note that the data should be compatible with your handler's input format, which we detail next.

  3. Override the forward method to write your service handler<sup>(c)</sup>, with the signature forward(self, data: Any | List[Any]) -> Any | List[Any]. Receiving/returning a single item or a list depends on whether dynamic batching<sup>(d)</sup> is configured.

class StableDiffusion(MsgpackMixin, Worker):
    def __init__(self):
        self.pipe = StableDiffusionPipeline.from_pretrained(
            "sd-legacy/stable-diffusion-v1-5", torch_dtype=torch.float16
        )
        self.pipe.enable_model_cpu_offload()
        self.example = ["useless example prompt"] * 4  # warmup (batch_size=4)

    def forward(self, data: List[str]) -> List[memoryview]:
        logger.debug("generate images for %s", data)
        res = self.pipe(data)
        logger.debug("NSFW: %s", res[1])
        images = []
        for img in res[0]:
            dummy_file = BytesIO()
            img.save(dummy_file, format="JPEG")
            images.append(dummy_file.getbuffer())
        return images

[!NOTE]

(a) In this example we return an image in binary format, which JSON does not support (unless encoded with base64, which makes the payload larger). Hence, msgpack suits our needs better. If we do not inherit MsgpackMixin, JSON will be used by default. In other words, the protocol of the service request/response can be msgpack, JSON, or any other format (check our mixins).

(b) Warm-up usually helps to allocate GPU memory in advance. If a warm-up example is specified, the service will only become ready after the example has been forwarded through the handler. However, if no example is given, the first request's latency is expected to be longer. The example should be set as a single item or a list depending on what forward expects to receive. Moreover, if you want to warm up with multiple different examples, you may set multi_examples (demo here).

(c) This example shows a single-stage service, where the StableDiffusion worker directly takes in the client's prompt request and responds with the image. Thus forward can be considered a complete service handler. However, we can also design a multi-stage service with workers doing different jobs (e.g., downloading images, model inference, post-processing) in a pipeline. In this case, the whole pipeline is considered the service handler, with the first worker taking in the request and the last worker sending out the response. The data flow between workers is done by inter-process communication; a minimal sketch of such a pipeline is shown after these notes.

(d) Since dynamic batching is enabled in this example, the forward method will ideally receive a list of strings, e.g., ['a cute cat playing with a red ball', 'a man sitting in front of a computer', ...], aggregated from different clients for batch inference, improving the system throughput.
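
For illustration, here is a minimal two-stage sketch of such a pipeline. The Preprocess and Inference workers are hypothetical placeholders, not part of the stable diffusion example above, and JSON serialization is used since no mixin is inherited:

from typing import List

from mosec import Server, Worker

class Preprocess(Worker):
    def forward(self, data: dict) -> str:
        # stage 1: take in the JSON request and extract the prompt
        return data["prompt"]

class Inference(Worker):
    def forward(self, data: List[str]) -> List[str]:
        # stage 2: batched inference (placeholder logic for illustration)
        return [prompt.upper() for prompt in data]

if __name__ == "__main__":
    server = Server()
    server.append_worker(Preprocess, num=2)  # first worker takes in the request
    server.append_worker(Inference, num=1, max_batch_size=4)  # last worker responds
    server.run()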

Finally, we append the worker to the server to construct a single-stage workflow (multiple stages can be pipelined to further boost the throughput; see this example), and specify the number of processes we want it to run in parallel (num=1) and the maximum batch size (max_batch_size=4, the maximum number of requests dynamic batching will accumulate before the timeout; the timeout is defined by max_wait_time=10 in milliseconds, meaning the longest time Mosec waits before sending the batch to the Worker).

if __name__ == "__main__":
    server = Server()
    # 1) `num` specifies the number of processes that will be spawned to run in parallel.
    # 2) By configuring the `max_batch_size` with the value > 1, the input data in your
    # `forward` function will be a list (batch); otherwise, it's a single item.
    server.append_worker(StableDiffusion, num=1, max_batch_size=4, max_wait_time=10)
    server.run()
</details>

Run the server

<details> <summary>Click me to see how to run and query the server.</summary>

The above snippets are merged into our example file, which you can run directly at the project root level. First, have a look at the command line arguments (explanations here):

python examples/stable_diffusion/server.py --help

Then let's start the server with debug logs:

python examples/stable_diffusion/server.py --log-level debug --timeout 30000

Open http://127.0.0.1:8000/openapi/swagger/ in your browser to get the OpenAPI doc.

And in another terminal, test it:

python examples/stable_diffusion/client.py --prompt "a cute cat playing with a red ball" --output cat.jpg --port 8000

You will get an image named "cat.jpg" in the current directory.
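
If you prefer to query the service programmatically, a minimal client sketch looks like this (assuming httpx and msgpack are installed, and that the server above is listening on the default /inference route with msgpack bodies, matching the MsgpackMixin worker):

import httpx  # type: ignore
import msgpack  # type: ignore

prompt = "a cute cat playing with a red ball"
# both the request and the response bodies are msgpack-encoded,
# since the worker inherits MsgpackMixin
resp = httpx.post(
    "http://127.0.0.1:8000/inference",
    content=msgpack.packb(prompt),
    timeout=30.0,  # image generation can take a while
)
resp.raise_for_status()
with open("cat.jpg", "wb") as f:
    f.write(msgpack.unpackb(resp.content))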

You can check the metrics:

curl http://127.0.0.1:8000/metrics

That's it! You have just hosted your stable-diffusion model as a service! 😉

</details>

Examples

More ready-to-use examples can be found in the Example section.

Configuration

Deployment

Performance tuning

Adopters

Here are some of the companies and individual users that are using Mosec:

Citation

If you find this software useful for your research, please consider citing:

@software{yang2021mosec,
  title = {{MOSEC: Model Serving made Efficient in the Cloud}},
  author = {Yang, Keming and Liu, Zichen and Cheng, Philip},
  url = {https://github.com/mosecorg/mosec},
  year = {2021}
}

Contributing

We welcome any kind of contribution. Please give us feedback by raising issues or discussing on Discord. You can also contribute your code directly and open a pull request!

To start developing, you can use envd to create an isolated and clean Python & Rust environment. Check the envd docs or build.envd for more information.
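
As a sketch, assuming envd is installed from PyPI, you can bring up the environment defined in build.envd from the repository root:

pip install envd
# build and attach to the isolated development environment
envd up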