<div align="center"> <a href="https://agentops.ai?ref=gh"> <img src="docs/images/external/logo/github-banner.png" alt="Logo"> </a> </div> <div align="center"> <em>Observability and DevTool platform for AI Agents</em> </div> <br /> <div align="center"> <a href="https://pepy.tech/project/agentops"> <img src="https://static.pepy.tech/badge/agentops/month" alt="Downloads"> </a> <a href="https://github.com/agentops-ai/agentops/issues"> <img src="https://img.shields.io/github/commit-activity/m/agentops-ai/agentops" alt="git commit activity"> </a> <img src="https://img.shields.io/pypi/v/agentops?&color=3670A0" alt="PyPI - Version"> <a href="https://opensource.org/licenses/MIT"> <img src="https://img.shields.io/badge/License-MIT-yellow.svg?&color=3670A0" alt="License: MIT"> </a> </div> <p align="center"> <a href="https://twitter.com/agentopsai/"> <img src="https://img.shields.io/twitter/follow/agentopsai?style=social" alt="Twitter" style="height: 20px;"> </a> <a href="https://discord.gg/FagdcwwXRR"> <img src="https://img.shields.io/badge/discord-7289da.svg?style=flat-square&logo=discord" alt="Discord" style="height: 20px;"> </a> <a href="https://app.agentops.ai/?ref=gh"> <img src="https://img.shields.io/badge/Dashboard-blue.svg?style=flat-square" alt="Dashboard" style="height: 20px;"> </a> <a href="https://docs.agentops.ai/introduction"> <img src="https://img.shields.io/badge/Documentation-orange.svg?style=flat-square" alt="Documentation" style="height: 20px;"> </a> <a href="https://entelligence.ai/AgentOps-AI&agentops"> <img src="https://img.shields.io/badge/Chat%20with%20Docs-green.svg?style=flat-square" alt="Chat with Docs" style="height: 20px;"> </a> </p> <div style="justify-content: center"> <img src="docs/images/external/app_screenshots/dashboard-banner.png" alt="Dashboard Banner"> </div> <br/>

AgentOps helps developers build, evaluate, and monitor AI agents. From prototype to production.

|   |   |
| --- | --- |
| 📊 **Replay Analytics and Debugging** | Step-by-step agent execution graphs |
| 💸 **LLM Cost Management** | Track spend with LLM foundation model providers |
| 🧪 **Agent Benchmarking** | Test your agents against 1,000+ evals |
| 🔐 **Compliance and Security** | Detect common prompt injection and data exfiltration exploits |
| 🤝 **Framework Integrations** | Native integrations with CrewAI, AutoGen, & LangChain |

## Quick Start ⌨️

```shell
pip install agentops
```

### Session replays in 2 lines of code

Initialize the AgentOps client and automatically get analytics on all your LLM calls.

Get an API key

```python
import agentops

# Beginning of your program (i.e. main.py, __init__.py)
agentops.init(<INSERT YOUR API KEY HERE>)

...

# End of program
agentops.end_session('Success')
```

All your sessions can be viewed on the AgentOps dashboard <br/>

<details> <summary>Agent Debugging</summary> <a href="https://app.agentops.ai?ref=gh"> <img src="docs/images/external/app_screenshots/session-drilldown-metadata.png" style="width: 90%;" alt="Agent Metadata"/> </a> <a href="https://app.agentops.ai?ref=gh"> <img src="docs/images/external/app_screenshots/chat-viewer.png" style="width: 90%;" alt="Chat Viewer"/> </a> <a href="https://app.agentops.ai?ref=gh"> <img src="docs/images/external/app_screenshots/session-drilldown-graphs.png" style="width: 90%;" alt="Event Graphs"/> </a> </details> <details> <summary>Session Replays</summary> <a href="https://app.agentops.ai?ref=gh"> <img src="docs/images/external/app_screenshots/session-replay.png" style="width: 90%;" alt="Session Replays"/> </a> </details> <details open> <summary>Summary Analytics</summary> <a href="https://app.agentops.ai?ref=gh"> <img src="docs/images/external/app_screenshots/overview.png" style="width: 90%;" alt="Summary Analytics"/> </a> <a href="https://app.agentops.ai?ref=gh"> <img src="docs/images/external/app_screenshots/overview-charts.png" style="width: 90%;" alt="Summary Analytics Charts"/> </a> </details>

## First-class Developer Experience

Add powerful observability to your agents, tools, and functions with as little code as possible: one line at a time. <br/> Refer to our documentation

```python
# Automatically associate all Events with the agent that originated them
from agentops import track_agent

@track_agent(name='SomeCustomName')
class MyAgent:
    ...
```

```python
# Automatically create ToolEvents for tools that agents will use
from agentops import record_tool

@record_tool('SampleToolName')
def sample_tool(...):
    ...
```

```python
# Automatically create ActionEvents for other functions
from agentops import record_action

@record_action('sample function being recorded')
def sample_function(...):
    ...
```

```python
# Manually record any other Events
from agentops import record, ActionEvent

record(ActionEvent("received_user_input"))
```

## Integrations 🦾

### CrewAI 🛶

Build CrewAI agents with observability in just two lines of code. Simply set an `AGENTOPS_API_KEY` in your environment, and your crews will get automatic monitoring on the AgentOps dashboard.

```shell
pip install 'crewai[agentops]'
```
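Beyond installation, the only configuration CrewAI needs is the API key in the environment. A minimal sketch of that setup (the key value and script name below are placeholders):

```shell
# Export the AgentOps API key (placeholder value) before running your crew
export AGENTOPS_API_KEY="<your-agentops-api-key>"

# Run your CrewAI entry point as usual (hypothetical script name);
# the crew's LLM calls then appear on the AgentOps dashboard
python my_crew.py
```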

### AutoGen 🤖

With only two lines of code, add full observability and monitoring to AutoGen agents. Set an `AGENTOPS_API_KEY` in your environment and call `agentops.init()`.
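A minimal sketch of those two lines, assuming `agentops` is installed and `AGENTOPS_API_KEY` is exported (the AutoGen agent code itself is elided):

```python
import agentops

# With no key argument, init() picks up AGENTOPS_API_KEY from the environment
agentops.init()

# ... construct and run your AutoGen agents as usual; their LLM calls
# are recorded automatically once init() has been called ...

agentops.end_session("Success")
```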

### Langchain 🦜🔗

AgentOps works seamlessly with applications built using Langchain. To use the handler, install Langchain as an optional dependency:

<details> <summary>Installation</summary>

```shell
pip install agentops[langchain]
```

To use the handler, import and set

```python
import os
from langchain.chat_models import ChatOpenAI
from langchain.agents import initialize_agent, AgentType
from agentops.partners.langchain_callback_handler import LangchainCallbackHandler

AGENTOPS_API_KEY = os.environ['AGENTOPS_API_KEY']
OPENAI_API_KEY = os.environ['OPENAI_API_KEY']

handler = LangchainCallbackHandler(api_key=AGENTOPS_API_KEY, tags=['Langchain Example'])

llm = ChatOpenAI(openai_api_key=OPENAI_API_KEY,
                 callbacks=[handler],
                 model='gpt-3.5-turbo')

agent = initialize_agent(tools,  # `tools` is your list of agent tools, defined elsewhere
                         llm,
                         agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
                         verbose=True,
                         callbacks=[handler],  # You must pass in a callback handler to record your agent
                         handle_parsing_errors=True)
```

Check out the Langchain Examples Notebook for more details including Async handlers.

</details>

### Cohere ⌨️

First-class support for Cohere (>=5.4.0). This is a living integration; should you need any added functionality, message us on Discord!

<details> <summary>Installation</summary>

```shell
pip install cohere
```

```python
import cohere
import agentops

# Beginning of program's code (i.e. main.py, __init__.py)
agentops.init(<INSERT YOUR API KEY HERE>)
co = cohere.Client()

chat = co.chat(
    message="Is it pronounced ceaux-hear or co-hehray?"
)

print(chat)

agentops.end_session('Success')
```

Streaming

```python
import cohere
import agentops

# Beginning of program's code (i.e. main.py, __init__.py)
agentops.init(<INSERT YOUR API KEY HERE>)

co = cohere.Client()

stream = co.chat_stream(
    message="Write me a haiku about the synergies between Cohere and AgentOps"
)

for event in stream:
    if event.event_type == "text-generation":
        print(event.text, end='')

agentops.end_session('Success')
```
</details>

### Anthropic ﹨

Track agents built with the Anthropic Python SDK (>=0.32.0).

<details> <summary>Installation</summary>

```shell
pip install anthropic
```

```python
import os
import anthropic
import agentops

# Beginning of program's code (i.e. main.py, __init__.py)
agentops.init(<INSERT YOUR API KEY HERE>)

client = anthropic.Anthropic(
    # This is the default and can be omitted
    api_key=os.environ.get("ANTHROPIC_API_KEY"),
)

message = client.messages.create(
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": "Tell me a cool fact about AgentOps",
        }
    ],
    model="claude-3-opus-20240229",
)
print(message.content)

agentops.end_session('Success')
```

Streaming

```python
import os
import anthropic
import agentops

# Beginning of program's code (i.e. main.py, __init__.py)
agentops.init(<INSERT YOUR API KEY HERE>)

client = anthropic.Anthropic(
    # This is the default and can be omitted
    api_key=os.environ.get("ANTHROPIC_API_KEY"),
)

stream = client.messages.create(
    max_tokens=1024,
    model="claude-3-opus-20240229",
    messages=[
        {
            "role": "user",
            "content": "Tell me something cool about streaming agents",
        }
    ],
    stream=True,
)

response = ""
for event in stream:
    if event.type == "content_block_delta":
        response += event.delta.text
    elif event.type == "message_stop":
        print("\n")
        print(response)
        print("\n")
```

Async

```python
import asyncio
import os
from anthropic import AsyncAnthropic

client = AsyncAnthropic(
    # This is the default and can be omitted
    api_key=os.environ.get("ANTHROPIC_API_KEY"),
)


async def main() -> None:
    message = await client.messages.create(
        max_tokens=1024,
        messages=[
            {
                "role": "user",
                "content": "Tell me something interesting about async agents",
            }
        ],
        model="claude-3-opus-20240229",
    )
    print(message.content)


asyncio.run(main())
```
</details>

### Mistral 〽️

Track agents built with the Mistral Python SDK.

<details> <summary>Installation</summary>

```shell
pip install mistralai
```

Sync

```python
import os
from mistralai import Mistral
import agentops

# Beginning of program's code (i.e. main.py, __init__.py)
agentops.init(<INSERT YOUR API KEY HERE>)

client = Mistral(
    # This is the default and can be omitted
    api_key=os.environ.get("MISTRAL_API_KEY"),
)

message = client.chat.complete(
    messages=[
        {
            "role": "user",
            "content": "Tell me a cool fact about AgentOps",
        }
    ],
    model="open-mistral-nemo",
)
print(message.choices[0].message.content)

agentops.end_session('Success')
```

Streaming

```python
import os
from mistralai import Mistral
import agentops

# Beginning of program's code (i.e. main.py, __init__.py)
agentops.init(<INSERT YOUR API KEY HERE>)

client = Mistral(
    # This is the default and can be omitted
    api_key=os.environ.get("MISTRAL_API_KEY"),
)

message = client.chat.stream(
    messages=[
        {
            "role": "user",
            "content": "Tell me something cool about streaming agents",
        }
    ],
    model="open-mistral-nemo",
)

response = ""
for event in message:
    if event.data.choices[0].finish_reason == "stop":
        print("\n")
        print(response)
        print("\n")
    else:
        response += event.data.choices[0].delta.content

agentops.end_session('Success')
```

Async

```python
import asyncio
import os
from mistralai import Mistral

client = Mistral(
    # This is the default and can be omitted
    api_key=os.environ.get("MISTRAL_API_KEY"),
)


async def main() -> None:
    message = await client.chat.complete_async(
        messages=[
            {
                "role": "user",
                "content": "Tell me something interesting about async agents",
            }
        ],
        model="open-mistral-nemo",
    )
    print(message.choices[0].message.content)


asyncio.run(main())
```

Async Streaming

```python
import asyncio
import os
from mistralai import Mistral

client = Mistral(
    # This is the default and can be omitted
    api_key=os.environ.get("MISTRAL_API_KEY"),
)


async def main() -> None:
    message = await client.chat.stream_async(
        messages=[
            {
                "role": "user",
                "content": "Tell me something interesting about async streaming agents",
            }
        ],
        model="open-mistral-nemo",
    )

    response = ""
    async for event in message:
        if event.data.choices[0].finish_reason == "stop":
            print("\n")
            print(response)
            print("\n")
        else:
            response += event.data.choices[0].delta.content


asyncio.run(main())
```
</details>

### LiteLLM 🚅

AgentOps provides support for LiteLLM (>=1.3.1), allowing you to call 100+ LLMs with the same input/output format.

<details> <summary>Installation</summary>

```shell
pip install litellm
```

```python
# Do not use LiteLLM like this
# from litellm import completion
# ...
# response = completion(model="claude-3", messages=messages)

# Use LiteLLM like this
import litellm
...
response = litellm.completion(model="claude-3", messages=messages)
# or
response = await litellm.acompletion(model="claude-3", messages=messages)
```
</details>

### LlamaIndex 🦙

AgentOps works seamlessly with applications built using LlamaIndex, a framework for building context-augmented generative AI applications with LLMs.

<details> <summary>Installation</summary>

```shell
pip install llama-index-instrumentation-agentops
```

To use the handler, import and set

```python
from llama_index.core import set_global_handler

# NOTE: Feel free to set your AgentOps environment variables (e.g., 'AGENTOPS_API_KEY')
# as outlined in the AgentOps documentation, or pass the equivalent keyword arguments
# anticipated by AgentOps' AOClient as **eval_params in set_global_handler.

set_global_handler("agentops")
```

Check out the LlamaIndex docs for more details.

</details>

## Time travel debugging 🔮

<div style="justify-content: center"> <img src="docs/images/external/app_screenshots/time_travel_banner.png" alt="Time Travel Banner"> </div> <br />

Try it out!

## Agent Arena 🥊

(coming soon!)

## Evaluations Roadmap 🧭

| Platform | Dashboard | Evals |
| --- | --- | --- |
| ✅ Python SDK | ✅ Multi-session and Cross-session metrics | ✅ Custom eval metrics |
| 🚧 Evaluation builder API | ✅ Custom event tag tracking | 🔜 Agent scorecards |
| ✅ Javascript/Typescript SDK | ✅ Session replays | 🔜 Evaluation playground + leaderboard |

## Debugging Roadmap 🧭

| Performance testing | Environments | LLM Testing | Reasoning and execution testing |
| --- | --- | --- | --- |
| ✅ Event latency analysis | 🔜 Non-stationary environment testing | 🔜 LLM non-deterministic function detection | 🚧 Infinite loops and recursive thought detection |
| ✅ Agent workflow execution pricing | 🔜 Multi-modal environments | 🚧 Token limit overflow flags | 🔜 Faulty reasoning detection |
| 🚧 Success validators (external) | 🔜 Execution containers | 🔜 Context limit overflow flags | 🔜 Generative code validators |
| 🔜 Agent controllers/skill tests | ✅ Honeypot and prompt injection detection (PromptArmor) | 🔜 API bill tracking | 🔜 Error breakpoint analysis |
| 🔜 Information context constraint testing | 🔜 Anti-agent roadblocks (i.e. Captchas) | 🔜 CI/CD integration checks | |
| 🔜 Regression testing | 🔜 Multi-agent framework visualization | | |

## Why AgentOps? 🤔

Without the right tools, AI agents are slow, expensive, and unreliable. Our mission is to bring your agent from prototype to production. Here's why AgentOps stands out:

AgentOps is designed to make agent observability, testing, and monitoring easy.

## Star History

Check out our growth in the community:

<img src="https://api.star-history.com/svg?repos=AgentOps-AI/agentops&type=Date" style="max-width: 500px" width="50%" alt="Logo">

## Popular projects using AgentOps

| Repository | Stars |
| :-- | --: |
| <img class="avatar mr-2" src="https://avatars.githubusercontent.com/u/2707039?s=40&v=4" width="20" height="20" alt=""> geekan / MetaGPT | 42787 |
| <img class="avatar mr-2" src="https://avatars.githubusercontent.com/u/130722866?s=40&v=4" width="20" height="20" alt=""> run-llama / llama_index | 34446 |
| <img class="avatar mr-2" src="https://avatars.githubusercontent.com/u/170677839?s=40&v=4" width="20" height="20" alt=""> crewAIInc / crewAI | 18287 |
| <img class="avatar mr-2" src="https://avatars.githubusercontent.com/u/134388954?s=40&v=4" width="20" height="20" alt=""> camel-ai / camel | 5166 |
| <img class="avatar mr-2" src="https://avatars.githubusercontent.com/u/152537519?s=40&v=4" width="20" height="20" alt=""> superagent-ai / superagent | 5050 |
| <img class="avatar mr-2" src="https://avatars.githubusercontent.com/u/30197649?s=40&v=4" width="20" height="20" alt=""> iyaja / llama-fs | 4713 |
| <img class="avatar mr-2" src="https://avatars.githubusercontent.com/u/162546372?s=40&v=4" width="20" height="20" alt=""> BasedHardware / Omi | 2723 |
| <img class="avatar mr-2" src="https://avatars.githubusercontent.com/u/454862?s=40&v=4" width="20" height="20" alt=""> MervinPraison / PraisonAI | 2007 |
| <img class="avatar mr-2" src="https://avatars.githubusercontent.com/u/140554352?s=40&v=4" width="20" height="20" alt=""> AgentOps-AI / Jaiqu | 272 |
| <img class="avatar mr-2" src="https://avatars.githubusercontent.com/u/3074263?s=40&v=4" width="20" height="20" alt=""> strnad / CrewAI-Studio | 134 |
| <img class="avatar mr-2" src="https://avatars.githubusercontent.com/u/18406448?s=40&v=4" width="20" height="20" alt=""> alejandro-ao / exa-crewai | 55 |
| <img class="avatar mr-2" src="https://avatars.githubusercontent.com/u/64493665?s=40&v=4" width="20" height="20" alt=""> tonykipkemboi / youtube_yapper_trapper | 47 |
| <img class="avatar mr-2" src="https://avatars.githubusercontent.com/u/17598928?s=40&v=4" width="20" height="20" alt=""> sethcoast / cover-letter-builder | 27 |
| <img class="avatar mr-2" src="https://avatars.githubusercontent.com/u/109994880?s=40&v=4" width="20" height="20" alt=""> bhancockio / chatgpt4o-analysis | 19 |
| <img class="avatar mr-2" src="https://avatars.githubusercontent.com/u/14105911?s=40&v=4" width="20" height="20" alt=""> breakstring / Agentic_Story_Book_Workflow | 14 |
| <img class="avatar mr-2" src="https://avatars.githubusercontent.com/u/124134656?s=40&v=4" width="20" height="20" alt=""> MULTI-ON / multion-python | 13 |

Generated using github-dependents-info, by Nicolas Vuillamy