[Y Combinator](https://www.ycombinator.com/companies/laminar-ai) · [X (Twitter)](https://x.com/lmnrai) · [Discord](https://discord.gg/nNFUUDAKub)

# Laminar
Laminar is an all-in-one open-source platform for engineering AI products. Trace, evaluate, label, and analyze LLM data.
- Tracing
  - OpenTelemetry-based automatic tracing of common AI frameworks and SDKs (LangChain, OpenAI, Anthropic, ...) with just 2 lines of code, powered by the amazing OpenLLMetry.
  - Traces inputs/outputs, latency, cost, and token counts.
  - Function tracing with the `observe` decorator/wrapper.
  - Image tracing.
  - Audio tracing coming soon.
- Evaluations
  - Local offline evaluations. Run them from code, the terminal, or as part of CI/CD.
  - Online evaluations. Trigger hosted LLM-as-a-judge or Python script evaluators for each trace.
- Labels
  - Simple UI for fast data labeling.
- Datasets
  - Export production trace data to datasets.
  - Run evals on hosted golden datasets.
  - Index a dataset and retrieve semantically similar dynamic few-shot examples to improve your prompts (coming very soon).
- Built for scale
  - Written in Rust 🦀
  - Traces are sent via gRPC, keeping overhead low.
  - Modern open-source stack: RabbitMQ for the message queue, Postgres for application data, ClickHouse for analytics, and Qdrant for semantic and hybrid search.
  - Fast and beautiful dashboards for traces, evaluations, and labels.

<img width="1506" alt="traces-2" src="https://github.com/user-attachments/assets/14d6eec9-cd0e-4c3e-b601-3d64c4c0c875">
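The local offline evaluations mentioned above can be run straight from code. Below is a minimal sketch in Python; the `evaluate` entrypoint and its argument names are assumptions based on the style of the SDK docs, so check docs.lmnr.ai for the exact signature:

```python
# Sketch of a local offline evaluation, runnable from code or CI.
# The `evaluate` entrypoint and its argument names are assumptions;
# see docs.lmnr.ai for the exact signature.

def write_poem(data: dict) -> str:
    """Executor: the function under test. A real one would call an LLM."""
    return f"a poem about {data['topic']}"

def exact_match(output: str, target: str) -> int:
    """Evaluator: score each output against its target (1 = match, 0 = miss)."""
    return 1 if output == target else 0

if __name__ == "__main__":
    # Requires `pip install lmnr` and a project API key to report results.
    from lmnr import evaluate  # assumed entrypoint

    evaluate(
        data=[{"data": {"topic": "laminar flow"},
               "target": "a poem about laminar flow"}],
        executor=write_poem,
        evaluators={"exact_match": exact_match},
    )
```

The same evaluators can also be registered as online evaluators to score every incoming trace.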
## Documentation

Check out the full documentation at [docs.lmnr.ai](https://docs.lmnr.ai).
## Getting started

The fastest and easiest way to get started is with our managed platform: [lmnr.ai](https://www.lmnr.ai).
### Self-hosting with Docker Compose

For a quick start, clone the repo and start the services with Docker Compose:

```sh
git clone https://github.com/lmnr-ai/lmnr
cd lmnr
docker compose up -d
```

This spins up a lightweight version of the stack with Postgres, app-server, and frontend, which is good for a quickstart or light usage. You can access the UI at http://localhost:3000 in your browser.

For a production environment, we recommend using our managed platform or running:

```sh
docker compose -f docker-compose-full.yml up -d
```

`docker-compose-full.yml` is heavier, but it enables all the features:
- app-server – the core Rust backend
- rabbitmq – message queue for reliable trace processing
- qdrant – vector database
- semantic-search-service – gRPC service for embedding text and storing/retrieving it from Qdrant
- frontend – Next.js frontend and backend
- python-executor – gRPC service with a lightweight Python sandbox that can run arbitrary code
- postgres – Postgres database for all the application data
- clickhouse – columnar OLAP database for more efficient trace and label analytics
## Contributing

To run and build Laminar locally, or to learn more about the Docker Compose files, follow the guide in Contributing.
## TypeScript quickstart

First, create a project and generate a project API key. Then install the SDK:

```sh
npm add @lmnr-ai/lmnr
```

This installs the Laminar TypeScript SDK and all instrumentation packages (OpenAI, Anthropic, LangChain, ...).

To start tracing LLM calls, just add:

```typescript
import { Laminar } from '@lmnr-ai/lmnr';

Laminar.initialize({ projectApiKey: process.env.LMNR_PROJECT_API_KEY });
```

To trace the inputs and outputs of functions, use the `observe` wrapper:

```typescript
import { OpenAI } from 'openai';
import { observe } from '@lmnr-ai/lmnr';

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const poemWriter = observe({ name: 'poemWriter' }, async (topic: string) => {
  const response = await client.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [{ role: 'user', content: `write a poem about ${topic}` }],
  });
  return response.choices[0].message.content;
});

await poemWriter('laminar flow');
```
## Python quickstart

First, create a project and generate a project API key. Then install the SDK:

```sh
pip install --upgrade 'lmnr[all]'
```

This installs the Laminar Python SDK and all instrumentation packages. See the full list of instruments in the docs.

To start tracing LLM calls, just add:

```python
from lmnr import Laminar

Laminar.initialize(project_api_key="<LMNR_PROJECT_API_KEY>")
```

To trace the inputs and outputs of functions, use the `@observe()` decorator:

```python
import os

from openai import OpenAI
from lmnr import Laminar, observe

Laminar.initialize(project_api_key="<LMNR_PROJECT_API_KEY>")

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

@observe()  # annotate all functions you want to trace
def poem_writer(topic):
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "user", "content": f"write a poem about {topic}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(poem_writer(topic="laminar flow"))
```
Running the code above will produce the following trace.

<img width="996" alt="Screenshot 2024-10-29 at 7 52 40 PM" src="https://github.com/user-attachments/assets/df141a62-b241-4e43-844f-52d94fe4ad67">

## Client libraries

To learn more about instrumenting your code, check out our client libraries:

- [TypeScript SDK on npm](https://www.npmjs.com/package/@lmnr-ai/lmnr)
- [Python SDK on PyPI](https://pypi.org/project/lmnr/)