<div align="center"> <img src="docs/logo/glide_no_bgd.png" width="300px" alt="Glide GH Header" /> <h1>Glide: Cloud-Native LLM Gateway for Seamless LLMOps</h1> <a href="https://codecov.io/github/EinStack/glide"><img src="https://codecov.io/github/EinStack/glide/graph/badge.svg?token=F7JT39RHX9" alt="CodeCov" /></a> <a href="https://discord.gg/pt53Ej7rrc"><img src="https://img.shields.io/discord/1181281407813828710" alt="Discord" /></a> <a href="https://docs.einstack.ai/glide/"><img src="https://img.shields.io/badge/build-view-violet%20?style=flat&logo=books&label=docs&link=https%3A%2F%2Fglide.einstack.ai%2F" alt="Glide Docs" /></a> <a href="https://github.com/EinStack/glide/blob/main/LICENSE"><img src="https://img.shields.io/github/license/EinStack/glide.svg?style=flat-square&color=%233f90c8" alt="License" /></a> <a href="https://artifacthub.io/packages/helm/einstack/glide"><img src="https://img.shields.io/endpoint?url=https://artifacthub.io/badge/repository/einstack" alt="ArtifactHub" /></a> <a href="https://app.fossa.com/projects/git%2Bgithub.com%2FEinStack%2Fglide?ref=badge_shield"><img src="https://app.fossa.com/api/projects/git%2Bgithub.com%2FEinStack%2Fglide.svg?type=shield" alt="FOSSA Status" /></a> </div>Glide is your go-to cloud-native LLM gateway, delivering high-performance LLMOps in a lightweight, all-in-one package.
Glide takes the complexity of managing and communicating with external providers out of your applications, so you can focus on tackling your core challenges.
> [!IMPORTANT]
> Glide is under active development right now 🛠️
>
> Give us a star ⭐ to support the project and watch 👀 our repositories so you don't miss any updates. We appreciate your interest 🙏
Glide sits between your application and model providers to seamlessly handle various LLMOps tasks like model failover, caching, key management, etc.
<img src="docs/images/marketecture.svg" />Check out our documentation!
## Features
- Unified REST API across providers. Avoid vendor lock-in and changes in your applications when you swap model providers.
- High availability and resiliency when working with external model providers. Automatic fallbacks on provider failures, rate limits, transient errors. Smart retries to reduce communication latency.
- Support for popular LLM providers.
- High performance. Performance is our priority: we aim to keep Glide "invisible" latency-wise while providing rich functionality.
- Production-ready observability via OpenTelemetry: metrics on model health that enable whitebox monitoring (coming soon).
- Straightforward maintenance and configuration: centralized API key control, management, and rotation.
### Large Language Models

| Provider | Supported Capabilities |
|---|---|
| <img src="docs/images/openai.svg" width="18" /> OpenAI | ✅ Chat <br/> ✅ Streaming Chat |
| <img src="docs/images/anthropic.svg" width="18" /> Anthropic | ✅ Chat <br/> 🏗️ Streaming Chat (coming soon) |
| <img src="docs/images/azure.svg" width="18" /> Azure OpenAI | ✅ Chat <br/> ✅ Streaming Chat |
| <img src="docs/images/aws-icon.png" width="18" /> AWS Bedrock (Titan) | ✅ Chat |
| <img src="docs/images/cohere.png" width="18" /> Cohere | ✅ Chat <br/> ✅ Streaming Chat |
| <img src="docs/images/bard.svg" width="18" /> Google Gemini | 🏗️ Chat (coming soon) |
| <img src="docs/images/octo.png" width="18" /> OctoML | ✅ Chat |
| <img src="docs/images/ollama.png" width="18" /> Ollama | ✅ Chat |
## Get Started

### Installation
> [!NOTE]
> Windows users should follow the instructions in the demo README, which explain how to run these steps without the `make` command, since Windows doesn't ship with it by default.

The easiest way to deploy Glide is via our demo repository and docker compose.
1. Clone the demo repository:

```bash
git clone https://github.com/EinStack/glide-demo.git
```
2. Init Configs

The demo repository comes with a basic config. Additionally, you need to init your secrets by running:

```bash
make init # from the demo root
```

This will create a `secrets` directory with one `.OPENAI_API_KEY` file that you need to put your key into.
3. Start Glide

After that, use docker compose to start your demo environment:

```bash
make up
```
4. Sample API Request to the `/chat` endpoint

See the API Reference for more details.
```json
{
    "message": {
        "role": "user",
        "content": "Where was it played?"
    },
    "message_history": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who won the world series in 2020?"},
        {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."}
    ]
}
```
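For example, you could send this body with curl. This is a minimal sketch: the router ID `my-router` and the `/v1/language/{router}/chat` path are assumptions about the demo setup, so verify the exact route in the Swagger docs mentioned below.

```bash
# Hedged example: "my-router" is a hypothetical router ID; verify the exact
# chat route in the Swagger docs at http://127.0.0.1:9099/v1/swagger.
curl -X POST http://127.0.0.1:9099/v1/language/my-router/chat \
  -H "Content-Type: application/json" \
  -d '{
    "message": {"role": "user", "content": "Where was it played?"},
    "message_history": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Who won the world series in 2020?"},
      {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."}
    ]
  }'
```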
### API Docs

Finally, Glide comes with OpenAPI documentation, accessible at http://127.0.0.1:9099/v1/swagger.
That's it 🎉

Use our [documentation](https://docs.einstack.ai/glide/) to learn more about Glide's capabilities and configs.

Other ways to install Glide are available:
### Homebrew (macOS)

```bash
brew tap einstack/tap
brew install einstack/tap/glide
```
### Snapcraft (Linux)

```bash
snap install glide
```

To upgrade an already installed package, run:

```bash
snap refresh glide
```
Detailed instructions on Snapcraft installation for different Linux distros:
- Arch
- CentOS
- Debian
- elementaryOS
- Fedora
- KDE Neon
- Kubuntu
- Manjaro
- Pop! OS
- openSUSE
- RHEL
- Ubuntu
- Raspberry Pi
### Docker Images

Glide provides official images on GHCR & DockerHub:
- Alpine 3.19: `docker pull ghcr.io/einstack/glide:latest-alpine`
- Ubuntu 22.04 LTS: `docker pull ghcr.io/einstack/glide:latest-ubuntu`
- Google Distroless (non-root): `docker pull ghcr.io/einstack/glide:latest-distroless`
- RedHat UBI 8.9 Micro: `docker pull ghcr.io/einstack/glide:latest-redhat`
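To try one of these images locally, you can mount a Glide config into the container. The sketch below is an assumption rather than a documented default: the config mount path and the port 9099 (as used in the demo above) may differ, so check the image documentation for the exact values.

```bash
# A minimal, hedged sketch of running the Alpine image locally.
# Assumptions: the container picks up config.yaml from the mounted path
# and the API listens on port 9099; consult the image docs for exact paths.
docker run -d --name glide \
  -p 9099:9099 \
  -v "$(pwd)/config.yaml:/config.yaml" \
  ghcr.io/einstack/glide:latest-alpine
```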
### Helm Chart

Add the EinStack repository:

```bash
helm repo add einstack https://einstack.github.io/charts
helm repo update
```
Before installing the Helm chart, create a Kubernetes secret with your API keys:

```bash
kubectl create secret generic api-keys --from-literal=OPENAI_API_KEY=sk-abcdXYZ
```
Then, create a custom `values.yaml` file to override the secret name:

```yaml
# save as custom.values.yaml, for example
glide:
  apiKeySecret: "api-keys"
```
Finally, you should be able to install Glide's chart via:

```bash
helm upgrade glide-gateway einstack/glide --values custom.values.yaml --install
```
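To confirm the release came up, you can check its status with standard Helm and kubectl commands. The release name matches the install command above; actual pod names depend on the chart's conventions, so treat the `grep` filter as a rough guess.

```bash
# Inspect the Helm release installed above and look for its pods.
helm status glide-gateway
kubectl get pods | grep glide
```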
## SDKs

To let you work with Glide's API with ease, we are going to provide SDKs that fit your tech stack:
- Python (coming soon)
- NodeJS (coming soon)
## Core Concepts

### Routers
Routers are a core piece of Glide's functionality. Think of a router as a group of models with some predefined logic. For example, the resilience router lets you define a set of backup models that take over should the primary model fail. Another example would be leveraging the least-latency router to make latency-sensitive LLM calls in the most efficient manner.

Detailed info on routers can be found in our [documentation](https://docs.einstack.ai/glide/).
#### Available Routers

| Router | Description |
|---|---|
| Priority | When the target model fails, the request is sent to the secondary model. The entire service instance keeps track of the number of failures for a specific model, reducing latency upon model failure. |
| Least Latency | Selects the model with the lowest average latency over time. If the least-latency model becomes unhealthy, it picks the next best one, and so on. |
| Round Robin | Splits traffic equally among the specified models. Great for A/B testing. |
| Weighted Round Robin | Splits traffic based on weights. For example, 70% of traffic goes to Model A and 30% to Model B. |
## Community

- Join [Discord](https://discord.gg/pt53Ej7rrc) for real-time discussions
- Open an issue or start a discussion if there is a feature or an enhancement you'd like to see in Glide
## Contribute
## Maintainers
- Roman Hlushko, Software Engineer, Distributed Systems & MLOps
- Max Krueger, Data & ML Engineer
Thanks to everyone who has already put in the effort to make Glide better and more feature-rich:
<a href="https://github.com/EinStack/glide/graphs/contributors"> <img src="https://contributors-img.web.app/image?repo=modelgateway/glide" /> </a>License
Apache 2.0