<p align="center"> <img src="https://raw.githubusercontent.com/reworkd/bananalyzer/main/.github/assets/banner.png" height="300" alt="Monkey Looking at banana" /> </p> <p align="center"> <em>🍌 Open source AI Agent evaluations for web tasks 🍌</em> </p> <p align="center"> <img alt="Python" src="https://img.shields.io/badge/python-3670A0?style=for-the-badge&logo=python&logoColor=ffdd54" /> </p> <p align="center"> <a href="https://reworkd.ai/">🔗 Main site</a> <span>&nbsp;&nbsp;•&nbsp;&nbsp;</span> <a href="https://twitter.com/reworkdai">🐦 Twitter</a> <span>&nbsp;&nbsp;•&nbsp;&nbsp;</span> <a href="https://discord.gg/gcmNyAAFfV">📢 Discord</a> </p>

Banana-lyzer

Introduction

Banana-lyzer is an open-source AI agent evaluation framework and dataset for web tasks, built on Playwright (and it has a banana theme because why not). We've created our own evals repo because real websites change over time; evaluating agents reliably requires reproducible page snapshots and verifiable, structured outputs.

Demo video: https://github.com/reworkd/bananalyzer/assets/50181239/4587615c-a5b4-472d-bca9-334594130af1

How does it work?

⚠️ Note that this repo is a work in progress. ⚠️

Banana-lyzer is a CLI tool that runs a set of evaluations against a set of example websites. The examples are defined in examples.json using a schema similar to Mind2Web and WebArena. Each example stores metadata such as the agent's goal and the expected agent output, along with MHTML snapshots of the URLs so that pages do not change over time. Note that all examples today expect structured JSON output containing data extracted directly from the page.
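For illustration, an entry in examples.json might look roughly like the sketch below. The field names and the eval type here are assumptions inferred from the description above and the AgentRunner code later on this page (example.url, example.evals[0].expected); see schemas.py for the authoritative schema.

{
  "url": "https://example.com/products/123",
  "goal": "Extract the product's name and price from the page",
  "evals": [
    {
      "type": "json_match",
      "expected": {
        "name": "Example Product",
        "price": "$10.00"
      }
    }
  ]
}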

The CLI tool sequentially runs examples against a user-defined agent by dynamically constructing a pytest test suite and executing it. As a user, you simply create a file that implements the AgentRunner interface and defines an instance of your AgentRunner in a variable called "agent". AgentRunner's run method receives the example and a Playwright browser context to use.

In the future we will support more complex evaluation methods and examples that require multiple steps to complete. The plan is to translate existing datasets like Mind2Web and WebArena into this format.

Test intents

We have defined a set of page types and test intents an agent can be evaluated on. These types are defined in the ExampleType enum in schemas.py.

Separately, there are specific tags that can be used to further filter test intents.
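As a rough sketch only (the members below are illustrative placeholders, not the actual values; consult the ExampleType enum in schemas.py for the real list), the structure looks something like this:

from enum import Enum


class ExampleType(str, Enum):
    # Illustrative members only; the authoritative list lives in schemas.py
    LISTING = "listing"
    DETAIL = "detail"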

Getting Started

Local testing installation
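Assuming the package is published to PyPI under the name bananalyzer (check the repo's README for the canonical instructions), install it with pip:

pip install bananalyzer

Then create a file that defines your agent runner, for example: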

import asyncio
from playwright.async_api import BrowserContext
from bananalyzer.data.schemas import Example
from bananalyzer.runner.agent_runner import AgentResult, AgentRunner


class NullAgentRunner(AgentRunner):
    """
    A test agent that navigates to the example page and returns its
    expected output so that the evals pass.
    """

    async def run(
        self,
        context: BrowserContext,
        example: Example,
    ) -> AgentResult:
        page = await context.new_page()
        # example.url holds the real URL; example.get_static_url() returns
        # the URL of the local MHTML snapshot
        await page.goto(example.get_static_url())
        await asyncio.sleep(0.5)
        # Return the expected output directly so that the tests pass
        return example.evals[0].expected


# The CLI looks for an AgentRunner instance in a variable called "agent"
agent = NullAgentRunner()
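You can then point the CLI at that file. The invocation below assumes the entry point is named bananalyzer, that my_agent.py is your runner file, and that the CLI takes its path as an argument; run it with --help to confirm the actual arguments:

bananalyzer ./my_agent.py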

Arguments

Contributing

Running the server

The project has a basic FastAPI server to expose example data. You can run it with the following commands:

cd server
poetry run uvicorn server:app --reload

Then navigate to http://127.0.0.1:8000/api/docs in your browser to see the API docs.

Adding examples

All current examples have been added manually by running the fetch.ipynb notebook at the root of this project. The notebook loads a site with Playwright and uses the Chrome DevTools Protocol to save the page as an MHTML file.
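Below is a minimal sketch of that capture step as a standalone script rather than the notebook (the URL and output path are placeholders): Playwright opens a CDP session and issues the Page.captureSnapshot command, which serializes the page and its subresources as MHTML.

import asyncio
from playwright.async_api import async_playwright


async def save_mhtml(url: str, path: str) -> None:
    async with async_playwright() as p:
        browser = await p.chromium.launch()
        page = await browser.new_page()
        await page.goto(url)
        # Page.captureSnapshot is the CDP command that serializes the
        # page and its subresources into a single MHTML document
        cdp = await page.context.new_cdp_session(page)
        snapshot = await cdp.send("Page.captureSnapshot", {"format": "mhtml"})
        with open(path, "w") as f:
            f.write(snapshot["data"])
        await browser.close()


# Placeholder URL and output path
asyncio.run(save_mhtml("https://example.com", "example.mhtml"))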

Roadmap

- Launch
- Features
- Dataset updates

Citations

@misc{reworkd2023bananalyzer,
  title        = {Bananalyzer},
  author       = {Asim Shrestha and Adam Watkins and Rohan Pandey and Srijan Subedi and Sunshine},
  year         = {2023},
  howpublished = {GitHub},
  url          = {https://github.com/reworkd/bananalyzer}
}