πŸͺ„πŸ“™πŸ”¨ Spellbook Forge

License: MIT

✨ Make your LLM prompts executable and version controlled. ✨

Quick Start

In your Express server:

yarn add spellbook-forge

import express from "express";
import { spellbookForge } from "spellbook-forge";

const app = express()
  .use(spellbookForge({
    gitHost: 'https://github.com'
  }));

and then:

GET http://localhost:3000/your/repository/prompt?execute

<-- HTTP 200
{
  "prompt-content": "Complete this phrase in coders’ language: Hello …",
  "model": "gpt3.5",
  "result": "Hello, World!"
}
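The endpoint path mirrors the repository path. A tiny illustrative helper (not part of spellbook-forge) makes the URL shape explicit:

```javascript
// Hypothetical helper showing the URL shape; not part of spellbook-forge.
// Appending ?execute runs the prompt instead of just returning its content.
function promptUrl(host, repoPath, promptName, execute = false) {
  const url = `${host}/${repoPath}/${promptName}`;
  return execute ? `${url}?execute` : url;
}

console.log(promptUrl("http://localhost:3000", "your/repository", "prompt", true));
// → http://localhost:3000/your/repository/prompt?execute
```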

See live examples to try it out!

πŸ€” What is this?

This is an ExpressJS middleware that lets you create an API interface for your LLM prompts. It automatically generates a server for the prompts stored in a git repository.

πŸ’‘ Note: It's an early version. Expect bugs, breaking changes and poor performance.

πŸš€ Try it

This prompt repository: https://github.com/rafalzawadzki/spellbook-prompts/hello-world

Can be executed like this: https://book.spell.so/rafalzawadzki/spellbook-prompts/hello-world?execute

➑️ Spellbook server: https://book.spell.so

The server uses spellbook-forge and is currently hooked up to GitHub as a git host. You can use any public repository containing prompts, as long as they adhere to the accepted format.

For example, using the repository rafalzawadzki/spellbook-prompts, you can form endpoints such as:

https://book.spell.so/rafalzawadzki/spellbook-prompts/hello-world

πŸ“– Documentation

πŸ’‘ Full documentation coming soon!

OpenAI key

If you want to use the execute query on your own spellbook-forge instance, you need to provide an OpenAI API key in a .env file or as an environment variable:

OPENAI_API_KEY=your-key
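A minimal sketch for failing fast when the key is missing (the variable name comes from the line above; the guard function itself is illustrative, not part of spellbook-forge):

```javascript
// Illustrative guard: read the key from the environment and fail fast if absent.
function requireOpenAIKey(env = process.env) {
  const key = env.OPENAI_API_KEY;
  if (!key) {
    throw new Error("OPENAI_API_KEY is not set; the ?execute query will not work");
  }
  return key;
}
```

Calling such a guard once at startup surfaces a missing key immediately, instead of on the first ?execute request.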

Main dependencies

  1. πŸ¦œπŸ”— LangChain.js
  2. simple-git

Prompt format

Prompt files must adhere to a specific format (JSON/YAML). See examples here.

Example

β”œβ”€β”€ prompt1
β”‚   β”œβ”€β”€ prompt.json
β”‚   └── readme.md
└── collection
    └── prompt2
        β”œβ”€β”€ prompt.yaml
        └── readme.md

The above file structure will result in the following API endpoints being generated:

{host}/prompt1

{host}/collection/prompt2
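The mapping from file path to endpoint can be sketched as follows (an illustrative helper, not the actual implementation):

```javascript
// Illustrative: a prompt file at <dir>/prompt.json (or .yaml) maps to the
// endpoint /<dir>, so nesting prompts in folders yields nested endpoint paths.
function endpointForFile(filePath) {
  return "/" + filePath.replace(/\/prompt\.(json|yaml)$/, "");
}

endpointForFile("prompt1/prompt.json");            // "/prompt1"
endpointForFile("collection/prompt2/prompt.yaml"); // "/collection/prompt2"
```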

Files

  1. prompt.json: the main file with the prompt content and configuration.
  2. readme.md: additional information about prompt usage, examples, etc.

API

CRUD

Execution

// request body
{
  "variables": {
    "name": "World"
  }
}
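Assuming the request-body shape above, a client-side call might look like this sketch (the helper name and response handling are assumptions, not part of the library):

```javascript
// Illustrative: build the execution request for a prompt endpoint. The
// ?execute query and the JSON body shape follow the examples in this README.
function buildExecuteRequest(host, promptPath, variables) {
  return {
    url: `${host}/${promptPath}?execute`,
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ variables }),
    },
  };
}

// Usage with fetch:
// const req = buildExecuteRequest("http://localhost:3000", "collection/prompt2", { name: "World" });
// const result = await fetch(req.url, req.options).then((res) => res.json());
```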

Using with LangChain

You can fetch the prompt content and execute it using LangChain:

import { PromptTemplate } from "langchain/prompts";

export const run = async () => {
  const template = await fetch(
    "https://book.spell.so/rafalzawadzki/spellbook-prompts/hello-world"
  ).then((res) => res.text());
  const prompt = new PromptTemplate({ template, inputVariables: ["product"] });
  // do something with the prompt ...
};

This approach makes the most sense for chaining; for simple prompts, it's best to just use Spellbook directly!

In the future, I may contribute an extension to LangChain's prompt/load function to support loading prompts from Spellbook, e.g.:

import { loadPrompt } from "langchain/prompts/load";
const prompt = await loadPrompt("{spellbook-host}/hello-world/prompt");

β˜‘οΈ Todo