
<p align="center"> <a href="https://www.traceloop.com/openllmetry#gh-light-mode-only"> <img width="600" src="https://raw.githubusercontent.com/traceloop/openllmetry/main/img/logo-light.png"> </a> <a href="https://www.traceloop.com/openllmetry#gh-dark-mode-only"> <img width="600" src="https://raw.githubusercontent.com/traceloop/openllmetry/main/img/logo-dark.png"> </a> </p> <h1 align="center">For Go</h1> <p align="center">Open-source observability for your LLM application</p> <h4 align="center"> <a href="https://traceloop.com/docs/openllmetry/getting-started-go"><strong>Get started »</strong></a> <br /> <br /> <a href="https://traceloop.com/slack">Slack</a> | <a href="https://traceloop.com/docs/openllmetry/introduction">Docs</a> | <a href="https://www.traceloop.com">Website</a> </h4> <h4 align="center"> <a href="https://github.com/traceloop/go-openllmetry/blob/main/LICENSE"> <img src="https://img.shields.io/badge/license-Apache%202.0-blue.svg" alt="OpenLLMetry is released under the Apache-2.0 License"> </a> <a href="https://www.ycombinator.com/companies/traceloop"><img src="https://img.shields.io/website?color=%23f26522&down_message=Y%20Combinator&label=Backed&logo=ycombinator&style=flat-square&up_message=Y%20Combinator&url=https%3A%2F%2Fwww.ycombinator.com"></a> <a href="https://github.com/traceloop/go-openllmetry/blob/main/CONTRIBUTING.md"> <img src="https://img.shields.io/badge/PRs-Welcome-brightgreen" alt="PRs welcome!" /> </a> <a href="https://github.com/traceloop/go-openllmetry/issues"> <img src="https://img.shields.io/github/commit-activity/m/traceloop/go-openllmetry" alt="git commit activity" /> </a> <a href="https://traceloop.com/slack"> <img src="https://img.shields.io/badge/chat-on%20Slack-blueviolet" alt="Slack community channel" /> </a> <a href="https://twitter.com/traceloopdev"> <img src="https://img.shields.io/badge/follow-%40traceloopdev-1DA1F2?logo=twitter&style=social" alt="Traceloop Twitter" /> </a> </h4>

OpenLLMetry is a set of extensions built on top of OpenTelemetry that gives you complete observability over your LLM application. Because it uses OpenTelemetry under the hood, it can be connected to your existing observability solutions - Datadog, Honeycomb, and others.

It's built and maintained by Traceloop under the Apache 2.0 license.

The repo contains standard OpenTelemetry instrumentations for LLM providers and Vector DBs, as well as a Traceloop SDK that makes it easy to get started with OpenLLMetry, while still outputting standard OpenTelemetry data that can be connected to your observability stack. If you already have OpenTelemetry instrumented, you can just add any of our instrumentations directly.
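Because everything is standard OpenTelemetry, the receiving side needs nothing OpenLLMetry-specific. As a rough sketch (plain go.opentelemetry.io packages, not part of this repo, with a placeholder collector address), this is what a typical OTLP pipeline that could receive these spans looks like:

package main

import (
	"context"
	"log"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func main() {
	ctx := context.Background()

	// Export spans over OTLP/HTTP to a collector you already run
	// ("localhost:4318" is a placeholder address).
	exporter, err := otlptracehttp.New(ctx,
		otlptracehttp.WithEndpoint("localhost:4318"),
		otlptracehttp.WithInsecure(),
	)
	if err != nil {
		log.Fatal(err)
	}

	// Batch and export; OpenLLMetry spans ride the same pipeline
	// as any other OpenTelemetry span.
	tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exporter))
	defer func() { _ = tp.Shutdown(ctx) }()
	otel.SetTracerProvider(tp)
}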

🚀 Getting Started

The easiest way to get started is to use our SDK. For a complete guide, go to our docs.

Install the SDK:

go get github.com/traceloop/go-openllmetry/traceloop-sdk

Then, initialize the SDK in your code:

package main

import (
	"context"
	"os"

	sdk "github.com/traceloop/go-openllmetry/traceloop-sdk"
)

func main() {
	ctx := context.Background()

	traceloop := sdk.NewClient(ctx, sdk.Config{
		APIKey: os.Getenv("TRACELOOP_API_KEY"),
	})
	defer func() { traceloop.Shutdown(ctx) }()
}

That's it. You're now tracing your code with OpenLLMetry!

Next, decide where you want to export your traces.

โซ Supported (and tested) destinations

See our docs for the full list of supported destinations and instructions on connecting to each one.
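By default, the SDK sends traces to the Traceloop platform, but since the output is plain OpenTelemetry you can redirect it to any OTLP-compatible destination. A minimal sketch, assuming the Config struct exposes a BaseURL field for the endpoint (an assumption here; verify the field name against the SDK docs for your version):

// Hypothetical: point the SDK at your own OTLP-compatible collector.
// BaseURL is an assumed Config field; the endpoint below is a placeholder.
traceloop := sdk.NewClient(ctx, sdk.Config{
	BaseURL: "otel-collector.example.com:4318",
	APIKey:  os.Getenv("TRACELOOP_API_KEY"),
})
defer func() { traceloop.Shutdown(ctx) }()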

🪗 What do we instrument?

OpenLLMetry for Go is in an early-alpha, exploratory stage, and we're still figuring out what to instrument. Unlike other languages, Go doesn't have many official LLM client libraries (yet?), so for now you'll have to log prompts and completions manually:

package main

import (
	"context"
	"fmt"
	"os"

	"github.com/sashabaranov/go-openai"
	sdk "github.com/traceloop/go-openllmetry/traceloop-sdk"
)

func main() {
	ctx := context.Background()

	// Initialize Traceloop
	traceloop := sdk.NewClient(ctx, sdk.Config{
		APIKey: os.Getenv("TRACELOOP_API_KEY"),
	})
	defer func() { traceloop.Shutdown(ctx) }()

	// Build the OpenAI request as you normally would
	request := openai.ChatCompletionRequest{
		Model: openai.GPT3Dot5Turbo,
		Messages: []openai.ChatCompletionMessage{
			{
				Role:    openai.ChatMessageRoleUser,
				Content: "Tell me a joke about OpenTelemetry!",
			},
		},
	}

	// Collect the prompt messages so they can be logged
	var promptMsgs []sdk.Message
	for i, message := range request.Messages {
		promptMsgs = append(promptMsgs, sdk.Message{
			Index:   i,
			Content: message.Content,
			Role:    message.Role,
		})
	}

	// Log the request
	llmSpan, err := traceloop.LogPrompt(
		ctx,
		sdk.Prompt{
			Vendor:   "openai",
			Mode:     "chat",
			Model:    request.Model,
			Messages: promptMsgs,
		},
		sdk.TraceloopAttributes{
			WorkflowName: "example-workflow",
			EntityName:   "example-entity",
		},
	)
	if err != nil {
		fmt.Printf("LogPrompt error: %v\n", err)
		return
	}

	// Call OpenAI like you normally would
	client := openai.NewClient(os.Getenv("OPENAI_API_KEY"))
	resp, err := client.CreateChatCompletion(ctx, request)
	if err != nil {
		fmt.Printf("ChatCompletion error: %v\n", err)
		return
	}

	// Collect the completion choices for logging
	var completionMsgs []sdk.Message
	for _, choice := range resp.Choices {
		completionMsgs = append(completionMsgs, sdk.Message{
			Index:   choice.Index,
			Content: choice.Message.Content,
			Role:    choice.Message.Role,
		})
	}

	// Log the response
	llmSpan.LogCompletion(ctx, sdk.Completion{
		Model:    resp.Model,
		Messages: completionMsgs,
	}, sdk.Usage{
		TotalTokens:      resp.Usage.TotalTokens,
		CompletionTokens: resp.Usage.CompletionTokens,
		PromptTokens:     resp.Usage.PromptTokens,
	})
}
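If you make many model calls, it can help to fold the pattern above into a single helper. Below is a sketch built only from the LogPrompt/LogCompletion calls shown above; the tracedChatCompletion name and the *sdk.Traceloop client type are assumptions for illustration, not part of the SDK:

// tracedChatCompletion is a hypothetical convenience wrapper (not part of the
// SDK) that logs the prompt, performs the OpenAI call, and logs the completion.
func tracedChatCompletion(
	ctx context.Context,
	traceloop *sdk.Traceloop, // assumed name of the client type returned by sdk.NewClient
	client *openai.Client,
	request openai.ChatCompletionRequest,
	workflow, entity string,
) (openai.ChatCompletionResponse, error) {
	// Collect the prompt messages for logging
	var promptMsgs []sdk.Message
	for i, message := range request.Messages {
		promptMsgs = append(promptMsgs, sdk.Message{
			Index:   i,
			Content: message.Content,
			Role:    message.Role,
		})
	}

	// Log the request
	llmSpan, err := traceloop.LogPrompt(
		ctx,
		sdk.Prompt{
			Vendor:   "openai",
			Mode:     "chat",
			Model:    request.Model,
			Messages: promptMsgs,
		},
		sdk.TraceloopAttributes{
			WorkflowName: workflow,
			EntityName:   entity,
		},
	)
	if err != nil {
		return openai.ChatCompletionResponse{}, err
	}

	// Perform the actual OpenAI call
	resp, err := client.CreateChatCompletion(ctx, request)
	if err != nil {
		return resp, err
	}

	// Collect the completion choices for logging
	var completionMsgs []sdk.Message
	for _, choice := range resp.Choices {
		completionMsgs = append(completionMsgs, sdk.Message{
			Index:   choice.Index,
			Content: choice.Message.Content,
			Role:    choice.Message.Role,
		})
	}

	// Log the response
	llmSpan.LogCompletion(ctx, sdk.Completion{
		Model:    resp.Model,
		Messages: completionMsgs,
	}, sdk.Usage{
		TotalTokens:      resp.Usage.TotalTokens,
		CompletionTokens: resp.Usage.CompletionTokens,
		PromptTokens:     resp.Usage.PromptTokens,
	})

	return resp, nil
}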

🌱 Contributing

Whether it's big or small, we love contributions ❤️ Check out our guide to see how to get started.

Not sure where to get started? Ask in our Slack and we'll help you find a good first issue.

💚 Community & Support

- Slack, for live discussion with the community and the Traceloop team
- GitHub Issues, for bugs and errors you encounter using OpenLLMetry
- Twitter (@traceloopdev), for updates

๐Ÿ™ Special Thanks

To @patrickdebois, who suggested the great name we're now using for this repo!