
<div align="center">

# Modelz LLM

</div>

<p align="center">
  <a href="https://discord.gg/KqswhpVgdU"><img alt="discord invitation link" src="https://dcbadge.vercel.app/api/server/KqswhpVgdU?style=flat"></a>
  <a href="https://twitter.com/TensorChord"><img src="https://img.shields.io/twitter/follow/tensorchord?style=social" alt="twitter follow badge" /></a>
</p>

Modelz LLM is an inference server that makes it easy to run open source large language models (LLMs), such as FastChat, LLaMA, and ChatGLM, in local or cloud environments through an OpenAI-compatible API.

## Features

## Quick Start

### Install

```shell
pip install modelz-llm
# or install from source
pip install git+https://github.com/tensorchord/modelz-llm.git[gpu]
```

### Run the self-hosted API server

First, start the self-hosted API server:

```shell
modelz-llm -m bigscience/bloomz-560m --device cpu
```
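Once the server is up, any OpenAI-compatible client can talk to it. As a rough sketch, this is the shape of a chat-completions request body — the field names come from the standard OpenAI chat-completions schema, not from modelz-llm itself, and passing `model="any"` assumes the server uses the model loaded via `-m` regardless of this field (as the SDK examples below suggest):

```python
import json

# Build an OpenAI-style chat-completions request body by hand.
# "model" is assumed to be ignored, since the server already loaded
# a model via the -m flag.
payload = {
    "model": "any",
    "messages": [{"role": "user", "content": "Hello world"}],
}

# Serialize to JSON, as an HTTP client would before POSTing it.
body = json.dumps(payload)
```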

Currently, we support the following models:

| Model Name | Huggingface Model | Docker Image | Recommended GPU |
| --- | --- | --- | --- |
| FastChat T5 | `lmsys/fastchat-t5-3b-v1.0` | `modelzai/llm-fastchat-t5-3b` | Nvidia L4 (24GB) |
| Vicuna 7B Delta V1.1 | `lmsys/vicuna-7b-delta-v1.1` | `modelzai/llm-vicuna-7b` | Nvidia A100 (40GB) |
| LLaMA 7B | `decapoda-research/llama-7b-hf` | `modelzai/llm-llama-7b` | Nvidia A100 (40GB) |
| ChatGLM 6B INT4 | `THUDM/chatglm-6b-int4` | `modelzai/llm-chatglm-6b-int4` | Nvidia T4 (16GB) |
| ChatGLM 6B | `THUDM/chatglm-6b` | `modelzai/llm-chatglm-6b` | Nvidia L4 (24GB) |
| Bloomz 560M | `bigscience/bloomz-560m` | `modelzai/llm-bloomz-560m` | CPU |
| Bloomz 1.7B | `bigscience/bloomz-1b7` | | CPU |
| Bloomz 3B | `bigscience/bloomz-3b` | | Nvidia L4 (24GB) |
| Bloomz 7.1B | `bigscience/bloomz-7b1` | | Nvidia A100 (40GB) |

### Use OpenAI python SDK

Then you can use the OpenAI Python SDK to interact with the model:

```python
import openai

openai.api_base = "http://localhost:8000"
openai.api_key = "any"

# create a chat completion
chat_completion = openai.ChatCompletion.create(
    model="any",
    messages=[{"role": "user", "content": "Hello world"}],
)
```
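The returned object follows the OpenAI chat-completion response schema, so the generated text lives under `choices[0].message.content`. A minimal sketch of extracting it, using a hand-written dict in place of a live response (the field layout is the standard OpenAI schema, assumed here rather than taken from modelz-llm):

```python
# Illustrative dict in the standard OpenAI chat-completion response
# shape; a live call returns an object with the same fields.
response = {
    "choices": [
        {"message": {"role": "assistant", "content": "Hello! How can I help?"}}
    ]
}

# Pull out the assistant's reply text.
reply = response["choices"][0]["message"]["content"]
print(reply)
```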

### Integrate with Langchain

You could also integrate modelz-llm with langchain:

```python
import openai

openai.api_base = "http://localhost:8000"
openai.api_key = "any"

from langchain.llms import OpenAI

llm = OpenAI()
llm.generate(prompts=["Could you please recommend some movies?"])
```

### Deploy on Modelz

You could also deploy modelz-llm directly on Modelz.

## Supported APIs

Modelz LLM supports the standard OpenAI-compatible APIs for interacting with open source large language models.
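Because the server is OpenAI-compatible, the endpoint paths mirror the OpenAI REST API. The sketch below composes those URLs against the local server; the specific paths (and whether modelz-llm serves them with or without a `/v1` prefix) are assumptions based on the OpenAI spec, worth verifying against your running server:

```python
BASE = "http://localhost:8000"

# Standard OpenAI-style endpoint paths (assumed, per the OpenAI spec).
endpoints = {
    "completions": f"{BASE}/completions",
    "chat_completions": f"{BASE}/chat/completions",
    "embeddings": f"{BASE}/embeddings",
}
```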

## Acknowledgements