Elixir LangChain

Elixir LangChain enables Elixir applications to integrate AI services and self-hosted models.

Currently supported AI services:

  * OpenAI ChatGPT
  * Anthropic Claude
  * Ollama
  * Bumblebee self-hosted models

LangChain is short for Language Chain. An LLM, or Large Language Model, is the "Language" part. This library makes it easier for Elixir applications to "chain" or connect different processes, integrations, libraries, services, or functionality together with an LLM.

LangChain is a framework for developing applications powered by language models. It enables applications that are:

  * Data-aware: connect a language model to other sources of data
  * Agentic: allow a language model to interact with its environment

The main value props of LangChain are:

  1. Components: abstractions for working with language models, along with a collection of implementations for each abstraction. Components are modular and easy to use, whether you are using the rest of the LangChain framework or not.
  2. Off-the-shelf chains: a structured assembly of components for accomplishing specific higher-level tasks

Off-the-shelf chains make it easy to get started. For more complex applications and nuanced use-cases, components make it easy to customize existing chains or build new ones.

What is this?

Large Language Models (LLMs) are emerging as a transformative technology, enabling developers to build applications that they previously could not. But using these LLMs in isolation is often not enough to create a truly powerful app - the real power comes when you can combine them with other sources of computation or knowledge.

This library is aimed at assisting in the development of those types of applications.

Documentation

The online documentation can be found on HexDocs.

Demo

Check out the demo project that you can download and review.

Relationship with JavaScript and Python LangChain

This library is written in Elixir and intended to be used with Elixir applications. The original libraries are LangChain JS/TS and LangChain Python.

The JavaScript and Python projects aim to integrate with each other as seamlessly as possible. The intended integration is so strong that all objects (prompts, LLMs, chains, etc.) are designed so they can be serialized and shared between the two languages.

This Elixir version does not aim for parity with the JavaScript and Python libraries. Instead, it was heavily inspired by, and modeled on, how the JavaScript library actually works and interacts with an LLM.

Installation

The package can be installed by adding langchain to your list of dependencies in mix.exs:

def deps do
  [
    {:langchain, "0.2.0"}
  ]
end

To use the release candidate instead, which includes many additional features and some breaking changes:

def deps do
  [
    {:langchain, "0.3.0-rc.0"}
  ]
end

Configuration

Currently, the library is written to use the Req library for making API calls.

You can configure an organization ID and an API key for OpenAI's API, but this library also works with other OpenAI-compatible APIs, other services, and even local models running on Bumblebee.

config/runtime.exs:

config :langchain, openai_key: System.fetch_env!("OPENAI_API_KEY")
config :langchain, openai_org_id: System.fetch_env!("OPENAI_ORG_ID")
# OR
config :langchain, openai_key: "YOUR SECRET KEY"
config :langchain, openai_org_id: "YOUR_OPENAI_ORG_ID"

config :langchain, :anthropic_key, System.fetch_env!("ANTHROPIC_API_KEY")

It's possible to use a function or a tuple to resolve the secret:

config :langchain, openai_key: {MyApp.Secrets, :openai_api_key, []}
config :langchain, openai_org_id: {MyApp.Secrets, :openai_org_id, []}
# OR
config :langchain, openai_key: fn -> System.fetch_env!("OPENAI_API_KEY") end
config :langchain, openai_org_id: fn -> System.fetch_env!("OPENAI_ORG_ID") end

The API keys should be treated as secrets and not checked into your repository.

For fly.io, adding the secrets looks like this:

fly secrets set OPENAI_API_KEY=MyOpenAIApiKey
fly secrets set ANTHROPIC_API_KEY=MyAnthropicApiKey


Usage

The central module in this library is LangChain.Chains.LLMChain. Most other pieces are either inputs to this, or structures used by it. For understanding how to use the library, start there.
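As a minimal sketch of that flow (the model name here is an illustrative assumption, and an OpenAI API key is presumed to be configured as shown above):

```elixir
alias LangChain.Chains.LLMChain
alias LangChain.ChatModels.ChatOpenAI
alias LangChain.Message

# Build a chain around a chat model, add a user message, and run it.
{:ok, updated_chain} =
  %{llm: ChatOpenAI.new!(%{model: "gpt-4o"})}
  |> LLMChain.new!()
  |> LLMChain.add_message(Message.new_user!("Name an Elixir web framework."))
  |> LLMChain.run()

# the assistant's reply is available as the chain's last_message
updated_chain.last_message.content
```

Everything else in the library, such as messages, chat models, and functions, is either an input to an `LLMChain` or a structure it produces.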

Exposing a custom Elixir function to ChatGPT

A really powerful feature of LangChain is making it easy to integrate an LLM into your application and expose features, data, and functionality from your application to the LLM.

<img src="https://github.com/brainlid/langchain/blob/main/images/langchain_functions_overview_sm_v1.png?raw=true" style="text-align: center;" width=50% height=50% alt="Diagram showing LLM integration to application logic and data through a LangChain.Function">

A LangChain.Function bridges the gap between the LLM and our application code. We choose what to expose and, using context, we can ensure any actions are limited to what the user has permission to do and access.

For an interactive example, refer to the project Livebook notebook "LangChain: Executing Custom Elixir Functions".

The following is an example of a function that receives parameter arguments.

alias LangChain.Function
alias LangChain.Message
alias LangChain.Chains.LLMChain
alias LangChain.ChatModels.ChatOpenAI
alias LangChain.Utils.ChainResult

# map of data we want to be passed as `context` to the function when
# executed.
custom_context = %{
  "user_id" => 123,
  "hairbrush" => "drawer",
  "dog" => "backyard",
  "sandwich" => "kitchen"
}

# a custom Elixir function made available to the LLM
custom_fn =
  Function.new!(%{
    name: "custom",
    description: "Returns the location of the requested element or item.",
    parameters_schema: %{
      type: "object",
      properties: %{
        thing: %{
          type: "string",
          description: "The thing whose location is being requested."
        }
      },
      required: ["thing"]
    },
    function: fn %{"thing" => thing} = _arguments, context ->
      # our context is a pretend item/location map
      {:ok, context[thing]}
    end
  })

# create and run the chain
{:ok, updated_chain} =
  LLMChain.new!(%{
    llm: ChatOpenAI.new!(),
    custom_context: custom_context,
    verbose: true
  })
  |> LLMChain.add_tools(custom_fn)
  |> LLMChain.add_message(Message.new_user!("Where is the hairbrush located?"))
  |> LLMChain.run(mode: :while_needs_response)

# print the LLM's answer
IO.puts(ChainResult.to_string!(updated_chain))
# => "The hairbrush is located in the drawer."

Alternative OpenAI compatible APIs

There are several services or self-hosted applications that provide an OpenAI compatible API for ChatGPT-like behavior. To use a service like that, the endpoint of the ChatOpenAI struct can be pointed to an API compatible endpoint for chats.

For example, if a locally running service provided that feature, the following code could connect to the service:

{:ok, updated_chain} =
  LLMChain.new!(%{
    llm: ChatOpenAI.new!(%{endpoint: "http://localhost:1234/v1/chat/completions"})
  })
  |> LLMChain.add_message(Message.new_user!("Hello!"))
  |> LLMChain.run()

Bumblebee Chat Support

Bumblebee hosted chat models are supported. There is built-in support for Llama 2, Mistral, and Zephyr models.

Currently, function calling is NOT supported with these models.

ChatBumblebee.new!(%{
  serving: @serving_name,
  template_format: @template_format,
  receive_timeout: @receive_timeout,
  stream: true
})

The serving is the name given to the Nx.Serving process that is hosting the model.
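As a sketch of how that serving might be wired up (the application module, `build_serving/0` helper, and `MyApp.ChatServing` name are all hypothetical placeholders for your own setup):

```elixir
alias LangChain.ChatModels.ChatBumblebee

# In your application's supervision tree, start an Nx.Serving under a
# name that ChatBumblebee can reference. `build_serving/0` stands in for
# your own function that loads the model and builds the Bumblebee serving.
children = [
  {Nx.Serving, serving: build_serving(), name: MyApp.ChatServing}
]

# Later, create the chat model by referencing the serving's name:
chat_model = ChatBumblebee.new!(%{serving: MyApp.ChatServing, stream: true})
```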

See the LangChain.ChatModels.ChatBumblebee documentation for more details.

Testing

To run all the tests, including the ones that perform live calls against external APIs, use commands like the following:

mix test --include live_call
mix test --include live_open_ai
mix test --include live_ollama_ai
mix test --include live_anthropic
mix test test/tools/calculator_test.exs --include live_call

NOTE: This will use the configured API credentials which creates billable events.

Otherwise, running the following will only run local tests making no external API calls:

mix test

Running a specific test directly will execute it even when it is tagged live_call, potentially creating a billable event.

When doing local development on the LangChain library itself, rename the .envrc_template to .envrc and populate it with your private API values. It is only used when live tests are explicitly requested.
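The .envrc file might look like the following sketch, using the environment variable names from the Configuration section (the values shown are placeholders):

```shell
# .envrc — loaded by direnv or sourced manually; never commit real values
export OPENAI_API_KEY="your-openai-key"
export OPENAI_ORG_ID="your-openai-org-id"
export ANTHROPIC_API_KEY="your-anthropic-key"
```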

Use a tool like Direnv or Dotenv to load the API values into the ENV when using the library locally.