
<img src="https://github.com/user-attachments/assets/2fedfe0f-6df7-4441-98b2-87a1fd95ee1c" width="300" title="Llama Stack Logo" alt="Llama Stack Logo"/>

Llama Stack


Get Started | Documentation

This repository contains the Llama Stack API specifications as well as API Providers and Llama Stack Distributions.

The Llama Stack defines and standardizes the building blocks needed to bring generative AI applications to market. These blocks span the entire development lifecycle: from model training and fine-tuning, through product evaluation, to building and running AI agents in production. Beyond definition, we are building providers for the Llama Stack APIs: we are developing open-source versions and partnering with providers, ensuring developers can assemble AI solutions using consistent, interlocking pieces across platforms. The ultimate goal is to accelerate innovation in the AI space.

The Stack APIs are rapidly improving, but still very much a work in progress. We invite feedback as well as direct contributions.

APIs

The Llama Stack consists of the following set of APIs:

- Inference
- Safety
- Memory
- Agents
- Telemetry

Each API is itself a collection of REST endpoints.
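
For a concrete feel, here is a minimal sketch of hitting one such endpoint over plain HTTP. The port, route, and payload shape below are illustrative assumptions, not the authoritative contract; see the API specifications for the exact schema:

    import requests

    # Hypothetical request to a locally running Llama Stack server.
    # The port (5000), route, and request body are assumptions for illustration.
    response = requests.post(
        "http://localhost:5000/inference/chat_completion",
        json={
            "model": "Llama3.1-8B-Instruct",
            "messages": [{"role": "user", "content": "Hello!"}],
        },
    )
    print(response.json())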

API Providers

A Provider is what makes the API real -- it provides the actual implementation backing the API.

As an example, for Inference, the implementation could be backed by open-source libraries like [ torch | vLLM | TensorRT ].

A provider can also be just a pointer to a remote REST service -- for example, cloud providers or dedicated inference providers could serve these APIs.
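
The toy sketch below makes this concrete: one provider is backed by local code, the other is just a thin pointer to a remote REST service, and both expose the same API surface. The class names, registry, and URL are invented for illustration; real providers are wired up through a distribution's configuration, not like this:

    import requests

    class LocalInference:
        """Backed by local code (e.g., a torch- or vLLM-based engine)."""
        def chat_completion(self, messages):
            return {"completion": "generated locally"}

    class RemoteInference:
        """A thin pointer to a remote REST service serving the same API."""
        def __init__(self, base_url):
            self.base_url = base_url

        def chat_completion(self, messages):
            resp = requests.post(
                f"{self.base_url}/inference/chat_completion",
                json={"messages": messages},
            )
            return resp.json()

    # Either provider satisfies the same Inference API.
    providers = {
        "meta-reference": LocalInference(),
        "remote::example": RemoteInference("https://inference.example.com"),
    }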

Llama Stack Distribution

A Distribution is where APIs and Providers are assembled together to provide a consistent whole to the end application developer. You can mix-and-match providers -- some could be backed by local code and some could be remote. As a hobbyist, you can serve a small model locally but choose a cloud provider for a large model. Either way, the higher-level APIs your app works with don't need to change at all. You can even imagine moving across the server / mobile-device boundary, always using the same uniform set of APIs for developing Generative AI applications.
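
In practice that means application code like the following stays identical across distributions; only the server address changes. This sketch uses the llama-stack-client Python SDK described later, and both URLs are placeholder assumptions:

    from llama_stack_client import LlamaStackClient

    # A small local distribution for development, a hosted one for production;
    # both addresses are placeholder assumptions.
    LOCAL_URL = "http://localhost:5000"
    HOSTED_URL = "https://stack.example.com"

    # Swapping distributions is just a matter of changing the base_url --
    # the APIs the client talks to stay the same.
    client = LlamaStackClient(base_url=LOCAL_URL)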

Supported Llama Stack Implementations

API Providers

| **API Provider Builder** | **Environments** | **Agents** | **Inference** | **Memory** | **Safety** | **Telemetry** |
| :--: | :--: | :--: | :--: | :--: | :--: | :--: |
| Meta Reference | Single Node | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Fireworks | Hosted | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | |
| AWS Bedrock | Hosted | | :heavy_check_mark: | | :heavy_check_mark: | |
| Together | Hosted | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: | |
| Ollama | Single Node | | :heavy_check_mark: | | | |
| TGI | Hosted and Single Node | | :heavy_check_mark: | | | |
| Chroma | Single Node | | | :heavy_check_mark: | | |
| PG Vector | Single Node | | | :heavy_check_mark: | | |
| PyTorch ExecuTorch | On-device iOS | :heavy_check_mark: | :heavy_check_mark: | | | |

Distributions

| **Distribution** | **Llama Stack Docker** | **Start This Distribution** | **Inference** | **Agents** | **Memory** | **Safety** | **Telemetry** |
| :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: |
| Meta Reference | llamastack/distribution-meta-reference-gpu | Guide | meta-reference | meta-reference | meta-reference; remote::pgvector; remote::chromadb | meta-reference | meta-reference |
| Meta Reference Quantized | llamastack/distribution-meta-reference-quantized-gpu | Guide | meta-reference-quantized | meta-reference | meta-reference; remote::pgvector; remote::chromadb | meta-reference | meta-reference |
| Ollama | llamastack/distribution-ollama | Guide | remote::ollama | meta-reference | remote::pgvector; remote::chromadb | meta-reference | meta-reference |
| TGI | llamastack/distribution-tgi | Guide | remote::tgi | meta-reference | meta-reference; remote::pgvector; remote::chromadb | meta-reference | meta-reference |
| Together | llamastack/distribution-together | Guide | remote::together | meta-reference | remote::weaviate | meta-reference | meta-reference |
| Fireworks | llamastack/distribution-fireworks | Guide | remote::fireworks | meta-reference | remote::weaviate | meta-reference | meta-reference |

Installation

You have two ways to install Llama Stack:

  1. Install as a package: You can install it directly from PyPI by running the following command:

    pip install llama-stack
    
  2. Install from source: If you prefer to install from the source code, follow these steps:

     # Clone the repository
     mkdir -p ~/local
     cd ~/local
     git clone git@github.com:meta-llama/llama-stack.git

     # Create and activate an isolated conda environment
     conda create -n stack python=3.10
     conda activate stack

     # Install the package in editable mode
     cd llama-stack
     $CONDA_PREFIX/bin/pip install -e .
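
Either way, a quick import confirms the package is on your path (a minimal sanity check; the module name is llama_stack):

    # Verify the installation and print where the package was installed
    import llama_stack
    print(llama_stack.__file__)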

Documentation

Please check out our Documentation page for more details.

Llama Stack Client SDK

| **Language** | **Client SDK** | **Package** |
| :--: | :--: | :--: |
| Python | llama-stack-client-python | PyPI version |
| Swift | llama-stack-client-swift | Swift Package Index |
| Node | llama-stack-client-node | NPM version |
| Kotlin | llama-stack-client-kotlin | Maven version |

Check out our client SDKs for connecting to a Llama Stack server in your preferred language. You can choose from Python, Swift, Node, and Kotlin to quickly build your applications.
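
As a minimal sketch of what this looks like with the Python SDK (this assumes a Llama Stack server is already running locally on port 5000; the model name is a placeholder):

    from llama_stack_client import LlamaStackClient
    from llama_stack_client.types import UserMessage

    # Connect to a running Llama Stack server (address is an assumption).
    client = LlamaStackClient(base_url="http://localhost:5000")

    # Ask the Inference API for a chat completion; the model name is a
    # placeholder -- use whichever model your distribution serves.
    response = client.inference.chat_completion(
        messages=[UserMessage(role="user", content="Hello, Llama Stack!")],
        model="Llama3.1-8B-Instruct",
    )
    print(response)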

You can find more example scripts that use the client SDKs to talk to a Llama Stack server in our llama-stack-apps repo.