<h1 align="center"> šŸš… LiteLLM </h1> <p align="center"> <p align="center"> <a href="https://render.com/deploy?repo=https://github.com/BerriAI/litellm" target="_blank" rel="nofollow"><img src="https://render.com/images/deploy-to-render-button.svg" alt="Deploy to Render"></a> <a href="https://railway.app/template/HLP0Ub?referralCode=jch2ME"> <img src="https://railway.app/button.svg" alt="Deploy on Railway"> </a> </p> <p align="center">Call all LLM APIs using the OpenAI format [Bedrock, Huggingface, VertexAI, TogetherAI, Azure, OpenAI, Groq etc.] <br> </p> <h4 align="center"><a href="https://docs.litellm.ai/docs/simple_proxy" target="_blank">LiteLLM Proxy Server (LLM Gateway)</a> | <a href="https://docs.litellm.ai/docs/hosted" target="_blank"> Hosted Proxy (Preview)</a> | <a href="https://docs.litellm.ai/docs/enterprise" target="_blank">Enterprise Tier</a></h4> <h4 align="center"> <a href="https://pypi.org/project/litellm/" target="_blank"> <img src="https://img.shields.io/pypi/v/litellm.svg" alt="PyPI Version"> </a> <a href="https://dl.circleci.com/status-badge/redirect/gh/BerriAI/litellm/tree/main" target="_blank"> <img src="https://dl.circleci.com/status-badge/img/gh/BerriAI/litellm/tree/main.svg?style=svg" alt="CircleCI"> </a> <a href="https://www.ycombinator.com/companies/berriai"> <img src="https://img.shields.io/badge/Y%20Combinator-W23-orange?style=flat-square" alt="Y Combinator W23"> </a> <a href="https://wa.link/huol9n"> <img src="https://img.shields.io/static/v1?label=Chat%20on&message=WhatsApp&color=success&logo=WhatsApp&style=flat-square" alt="Whatsapp"> </a> <a href="https://discord.gg/wuPM9dRgDw"> <img src="https://img.shields.io/static/v1?label=Chat%20on&message=Discord&color=blue&logo=Discord&style=flat-square" alt="Discord"> </a> </h4>

LiteLLM manages:

- Translating inputs to the provider's completion, embedding, and image_generation endpoints
- Consistent output: text responses are always available at ['choices'][0]['message']['content']
- Retry/fallback logic across multiple deployments (e.g. Azure/OpenAI) - Router
- Setting budgets & rate limits per project, API key, and model - LiteLLM Proxy Server (LLM Gateway)

Jump to LiteLLM Proxy (LLM Gateway) Docs <br> Jump to Supported LLM Providers

šŸšØ Stable Release: Use docker images with the -stable tag. These have undergone 12-hour load tests before being published.

Support for more providers is ongoing. Missing a provider or LLM platform? Raise a feature request.

Usage (Docs)

[!IMPORTANT] LiteLLM v1.0.0 now requires openai>=1.0.0. Migration guide here
LiteLLM v1.40.14+ now requires pydantic>=2.0.0. No changes required.

<a target="_blank" href="https://colab.research.google.com/github/BerriAI/litellm/blob/main/cookbook/liteLLM_Getting_Started.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a>
pip install litellm
from litellm import completion
import os

## set ENV variables
os.environ["OPENAI_API_KEY"] = "your-openai-key"
os.environ["COHERE_API_KEY"] = "your-cohere-key"

messages = [{"content": "Hello, how are you?", "role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)

# cohere call
response = completion(model="command-nightly", messages=messages)
print(response)

Call any model supported by a provider with model=<provider_name>/<model_name>. There may be provider-specific details, so refer to the provider docs for more information.
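
A minimal sketch of the prefix convention (the model IDs below are illustrative; check each provider's docs for current names):

from litellm import completion

messages = [{"content": "Hello, how are you?", "role": "user"}]

# the provider prefix routes the call: "huggingface/..." hits Hugging Face,
# "together_ai/..." hits TogetherAI, and so on
response = completion(model="huggingface/WizardLM/WizardCoder-Python-34B-V1.0", messages=messages)
response = completion(model="together_ai/togethercomputer/llama-2-70b-chat", messages=messages)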

Async (Docs)

from litellm import acompletion
import asyncio

async def test_get_response():
    user_message = "Hello, how are you?"
    messages = [{"content": user_message, "role": "user"}]
    response = await acompletion(model="gpt-3.5-turbo", messages=messages)
    return response

response = asyncio.run(test_get_response())
print(response)
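
Because acompletion returns an awaitable, several requests can be fanned out concurrently. A minimal sketch using asyncio.gather (the prompts are illustrative):

from litellm import acompletion
import asyncio

async def main():
    # fire both requests concurrently and wait for all responses
    tasks = [
        acompletion(model="gpt-3.5-turbo", messages=[{"content": q, "role": "user"}])
        for q in ["Hello, how are you?", "Summarize asyncio in one sentence."]
    ]
    responses = await asyncio.gather(*tasks)
    for r in responses:
        print(r.choices[0].message.content)

asyncio.run(main())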

Streaming (Docs)

LiteLLM supports streaming the model response back; pass stream=True to get a streaming iterator in the response.
Streaming is supported for all models (Bedrock, Huggingface, TogetherAI, Azure, OpenAI, etc.)

from litellm import completion

messages = [{"content": "Hello, how are you?", "role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages, stream=True)
for part in response:
    print(part.choices[0].delta.content or "")

# claude 2 call
response = completion(model="claude-2", messages=messages, stream=True)
for part in response:
    print(part.choices[0].delta.content or "")
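
To reassemble the complete reply from a stream, concatenate the deltas as they arrive; a minimal sketch:

# collect streamed deltas into the full reply text
response = completion(model="gpt-3.5-turbo", messages=messages, stream=True)
full_reply = "".join(part.choices[0].delta.content or "" for part in response)
print(full_reply)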

Logging Observability (Docs)

LiteLLM exposes pre-defined callbacks to send data to Lunary, Langfuse, DynamoDB, S3 buckets, Helicone, Promptlayer, Traceloop, Athina, and Slack.

import os
import litellm
from litellm import completion

## set env variables for logging tools
os.environ["LUNARY_PUBLIC_KEY"] = "your-lunary-public-key"
os.environ["HELICONE_API_KEY"] = "your-helicone-auth-key"
os.environ["LANGFUSE_PUBLIC_KEY"] = ""
os.environ["LANGFUSE_SECRET_KEY"] = ""
os.environ["ATHINA_API_KEY"] = "your-athina-api-key"

os.environ["OPENAI_API_KEY"] = "your-openai-key"

# set callbacks
litellm.success_callback = ["lunary", "langfuse", "athina", "helicone"] # log input/output to lunary, langfuse, athina, helicone

# openai call
response = completion(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "Hi šŸ‘‹ - i'm openai"}])
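
Failed calls can be logged the same way via litellm.failure_callback; a minimal sketch (pick any supported logger):

# also log exceptions / failed calls
litellm.failure_callback = ["lunary"]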

LiteLLM Proxy Server (LLM Gateway) - (Docs)

Track spend + Load Balance across multiple projects

Hosted Proxy (Preview)

The proxy provides:

  1. Hooks for auth
  2. Hooks for logging
  3. Cost tracking
  4. Rate Limiting

šŸ“– Proxy Endpoints - Swagger Docs

Quick Start Proxy - CLI

pip install 'litellm[proxy]'

Step 1: Start litellm proxy

$ litellm --model huggingface/bigcode/starcoder

#INFO: Proxy running on http://0.0.0.0:4000

Step 2: Make ChatCompletions Request to Proxy

[!IMPORTANT] šŸ’” Use LiteLLM Proxy with Langchain (Python, JS), OpenAI SDK (Python, JS), Anthropic SDK, Mistral SDK, LlamaIndex, Instructor, or cURL

import openai # openai v1.0.0+
client = openai.OpenAI(api_key="anything", base_url="http://0.0.0.0:4000") # set proxy to base_url
# request sent to model set on litellm proxy, `litellm --model`
response = client.chat.completions.create(model="gpt-3.5-turbo", messages = [
    {
        "role": "user",
        "content": "this is a test request, write a short poem"
    }
])

print(response)

Proxy Key Management (Docs)

Connect the proxy with a Postgres DB to create proxy keys

# Get the code
git clone https://github.com/BerriAI/litellm

# Go to folder
cd litellm

# Add the master key - you can change this after setup
echo 'LITELLM_MASTER_KEY="sk-1234"' > .env

# Add the litellm salt key - you cannot change this after adding a model
# It is used to encrypt / decrypt your LLM API Key credentials
# We recommend a random hash, e.g. from https://1password.com/password-generator/
echo 'LITELLM_SALT_KEY="sk-1234"' >> .env

source .env

# Start
docker-compose up

The UI is available at /ui on your proxy server.

Set budgets and rate limits across multiple projects with POST /key/generate

Request

curl 'http://0.0.0.0:4000/key/generate' \
--header 'Authorization: Bearer sk-1234' \
--header 'Content-Type: application/json' \
--data-raw '{"models": ["gpt-3.5-turbo", "gpt-4", "claude-2"], "duration": "20m","metadata": {"user": "ishaan@berri.ai", "team": "core-infra"}}'

Expected Response

{
    "key": "sk-kdEXbIqZRwEeEiHwdg7sFA", # Bearer token
    "expires": "2023-11-19T01:38:25.838000+00:00" # datetime object
}
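
The generated key can then be used as the bearer token for requests to the proxy. A minimal sketch, reusing the example key from the response above:

import openai

# authenticate against the proxy with the generated key instead of a provider key
client = openai.OpenAI(api_key="sk-kdEXbIqZRwEeEiHwdg7sFA", base_url="http://0.0.0.0:4000")
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "this is a test request"}],
)
print(response)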

Supported Providers (Docs)

| Provider | Completion | Streaming | Async Completion | Async Streaming | Async Embedding | Async Image Generation |
|---|---|---|---|---|---|---|
| openai | āœ… | āœ… | āœ… | āœ… | āœ… | āœ… |
| azure | āœ… | āœ… | āœ… | āœ… | āœ… | āœ… |
| aws - sagemaker | āœ… | āœ… | āœ… | āœ… | āœ… | |
| aws - bedrock | āœ… | āœ… | āœ… | āœ… | āœ… | |
| google - vertex_ai | āœ… | āœ… | āœ… | āœ… | āœ… | āœ… |
| google - palm | āœ… | āœ… | āœ… | āœ… | | |
| google AI Studio - gemini | āœ… | āœ… | āœ… | āœ… | | |
| mistral ai api | āœ… | āœ… | āœ… | āœ… | āœ… | |
| cloudflare AI Workers | āœ… | āœ… | āœ… | āœ… | | |
| cohere | āœ… | āœ… | āœ… | āœ… | āœ… | |
| anthropic | āœ… | āœ… | āœ… | āœ… | | |
| empower | āœ… | āœ… | āœ… | āœ… | | |
| huggingface | āœ… | āœ… | āœ… | āœ… | āœ… | |
| replicate | āœ… | āœ… | āœ… | āœ… | | |
| together_ai | āœ… | āœ… | āœ… | āœ… | | |
| openrouter | āœ… | āœ… | āœ… | āœ… | | |
| ai21 | āœ… | āœ… | āœ… | āœ… | | |
| baseten | āœ… | āœ… | āœ… | āœ… | | |
| vllm | āœ… | āœ… | āœ… | āœ… | | |
| nlp_cloud | āœ… | āœ… | āœ… | āœ… | | |
| aleph alpha | āœ… | āœ… | āœ… | āœ… | | |
| petals | āœ… | āœ… | āœ… | āœ… | | |
| ollama | āœ… | āœ… | āœ… | āœ… | āœ… | |
| deepinfra | āœ… | āœ… | āœ… | āœ… | | |
| perplexity-ai | āœ… | āœ… | āœ… | āœ… | | |
| Groq AI | āœ… | āœ… | āœ… | āœ… | | |
| Deepseek | āœ… | āœ… | āœ… | āœ… | | |
| anyscale | āœ… | āœ… | āœ… | āœ… | | |
| IBM - watsonx.ai | āœ… | āœ… | āœ… | āœ… | āœ… | |
| voyage ai | | | | | āœ… | |
| xinference [Xorbits Inference] | | | | | āœ… | |
| FriendliAI | āœ… | āœ… | āœ… | āœ… | | |
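
Embeddings (one of the columns above) follow the same unified calling convention; a minimal sketch, assuming a provider with embedding support:

from litellm import embedding

# embedding calls use the same provider/model routing as completion
response = embedding(model="text-embedding-ada-002", input=["good morning from litellm"])
print(response)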

Read the Docs

Contributing

To contribute: Clone the repo locally -> Make a change -> Submit a PR with the change.

Here's how to modify the repo locally: Step 1: Clone the repo

git clone https://github.com/BerriAI/litellm.git

Step 2: Navigate into the project, and install dependencies:

cd litellm
poetry install -E extra_proxy -E proxy

Step 3: Test your change:

cd litellm/tests # pwd: Documents/litellm/litellm/tests
poetry run flake8
poetry run pytest .

Step 4: Submit a PR with your changes! šŸš€

Enterprise

For companies that need better security, user management, and professional support.

Talk to founders

This covers:

- āœ… Features under the LiteLLM Commercial License
- āœ… Feature Prioritization
- āœ… Custom Integrations
- āœ… Professional Support - Dedicated discord + slack
- āœ… Custom SLAs
- āœ… Secure access with Single Sign-On

Support / talk with founders

Why did we build this

Contributors

<!-- ALL-CONTRIBUTORS-LIST:START - Do not remove or modify this section --> <!-- prettier-ignore-start --> <!-- markdownlint-disable --> <!-- markdownlint-restore --> <!-- prettier-ignore-end --> <!-- ALL-CONTRIBUTORS-LIST:END --> <a href="https://github.com/BerriAI/litellm/graphs/contributors"> <img src="https://contrib.rocks/image?repo=BerriAI/litellm" /> </a>