# 💎🔗 Langchain.rb for Rails
The fastest way to sprinkle AI ✨ on top of your Rails app. Add OpenAI-powered question-and-answering in minutes.
Available for paid consulting engagements! Email me.
## Dependencies
- Ruby 3.0+
- Postgres 11+
## Installation
Install the gem and add to the application's Gemfile by executing:

```bash
bundle add langchainrb_rails
```

If bundler is not being used to manage dependencies, install the gem by executing:

```bash
gem install langchainrb_rails
```
## Configuration w/ Pgvector (requires Postgres 11+)
- Run the Rails generator to add vectorsearch to your ActiveRecord model:

  ```bash
  rails generate langchainrb_rails:pgvector --model=Product --llm=openai
  ```

  This adds the required dependencies to your Gemfile, creates the `config/initializers/langchainrb_rails.rb` initializer file and database migrations, and adds the necessary code to the ActiveRecord model to enable vectorsearch. (A sketch of the generated initializer appears after these steps.)
- Bundle and migrate:

  ```bash
  bundle install && rails db:migrate
  ```
- Set the `OPENAI_API_KEY` environment variable to your OpenAI API key: https://platform.openai.com/account/api-keys

  ```ruby
  ENV["OPENAI_API_KEY"]=
  ```
- Generate embeddings for your model:

  ```ruby
  Product.embed!
  ```

  This can take a while depending on the number of database records.
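For reference, the generated initializer configures the vectorsearch provider. A rough sketch of what it typically contains (the exact generated contents may differ between gem versions):

```ruby
# config/initializers/langchainrb_rails.rb (sketch; your generated file may differ)
LangchainrbRails.configure do |config|
  config.vectorsearch = Langchain::Vectorsearch::Pgvector.new(
    llm: Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"])
  )
end
```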
## Usage

### Question and Answering

```ruby
Product.ask("list the brands of shoes that are in stock")
```
Returns a `String` with a natural language answer. The answer is assembled using the following steps:

- An embedding is generated for the passed-in question using the selected LLM.
- A cosine-similarity search finds the records that most closely match the question's embedding.
- A prompt is built from the question, with the matching records (their `#as_vector` representations) added as context.
- The prompt is passed to the LLM to generate the answer.
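Because the return value is a plain `String`, it can be rendered directly. A minimal, hypothetical controller action (the controller and parameter names are illustrative, not generated by the gem):

```ruby
# app/controllers/products_controller.rb (hypothetical example)
class ProductsController < ApplicationController
  def ask
    # Product.ask returns a String, so it can be rendered as-is
    answer = Product.ask(params.require(:question))
    render json: { answer: answer }
  end
end
```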
### Similarity Search

```ruby
Product.similarity_search("t-shirt")
```

Returns an ActiveRecord relation of the records that most closely match the query, using vector search.
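Since the result is a regular ActiveRecord relation, standard query methods can be chained onto it (illustrative; the output depends on your data):

```ruby
# Take the five closest matches and return just their names
Product.similarity_search("t-shirt").limit(5).pluck(:name)
# => ["Basic Tee", "Graphic T-Shirt", ...]  (example output)
```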
## Customization

### Changing the vector representation of a record

By default, embeddings are generated by calling the following method on your model instance:

```ruby
to_json(except: :embedding)
```

You can override this by defining an `#as_vector` method in your model:

```ruby
def as_vector
  { name: name, description: description, category: category.name, ... }.to_json
end
```

Re-generate embeddings after modifying this method:

```ruby
Product.embed!
```
## Rails Generators

### Pgvector Generator

```bash
rails generate langchainrb_rails:pgvector --model=Product --llm=openai
```

### Pinecone Generator - adds vectorsearch to your ActiveRecord model

```bash
rails generate langchainrb_rails:pinecone --model=Product --llm=openai
```

### Qdrant Generator - adds vectorsearch to your ActiveRecord model

```bash
rails generate langchainrb_rails:qdrant --model=Product --llm=openai
```
Available `--llm` options: `cohere`, `google_palm`, `hugging_face`, `llama_cpp`, `ollama`, `openai`, and `replicate`. The selected LLM will be used to generate embeddings and completions.
The `--model` option specifies which ActiveRecord model the vectorsearch capabilities will be added to.
The Pinecone Generator does the following:

- Creates the `config/initializers/langchainrb_rails.rb` initializer file
- Adds necessary code to the ActiveRecord model to enable vectorsearch
- Adds the `pinecone` gem to the Gemfile
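The Pinecone-backed initializer is configured analogously to the Pgvector one. A rough sketch, assuming langchainrb's `Langchain::Vectorsearch::Pinecone` class (the constructor parameters and ENV var names shown here are illustrative and may differ in your version):

```ruby
# config/initializers/langchainrb_rails.rb (sketch; generated contents may differ)
LangchainrbRails.configure do |config|
  config.vectorsearch = Langchain::Vectorsearch::Pinecone.new(
    api_key: ENV["PINECONE_API_KEY"],           # illustrative ENV var name
    environment: ENV["PINECONE_ENVIRONMENT"],   # illustrative ENV var name
    index_name: "products",
    llm: Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"])
  )
end
```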
### Prompt Generator - adds prompt templating capabilities to your ActiveRecord model

```bash
rails generate langchainrb_rails:prompt
```

This generator adds the following files to your Rails project:

- An ActiveRecord `Prompt` model at `app/models/prompt.rb`
- A Rails migration to create the `prompts` table

You can then use the `Prompt` model to create and manage prompts for your application.
Example usage:

```ruby
prompt = Prompt.create!(template: "Tell me a {adjective} joke about {subject}.")
prompt.render(adjective: "funny", subject: "elephants")
# => "Tell me a funny joke about elephants."
```
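A rendered prompt is just a `String`, so it can also be fed into the question-answering API shown earlier (an illustrative combination, not a required pattern):

```ruby
# Render a stored template and use the result as the question for vectorsearch Q&A
question = prompt.render(adjective: "funny", subject: "elephants")
Product.ask(question)
```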
### Assistant Generator - adds Langchain::Assistant capabilities to your Rails app

This generator adds Langchain::Assistant-related ActiveRecord models, migrations, controllers, views and routes to your Rails app, so you can start creating assistants and chatting with them immediately.

```bash
rails generate langchainrb_rails:assistant --llm=openai
```
Available `--llm` options: `anthropic`, `cohere`, `google_palm`, `google_gemini`, `google_vertex_ai`, `hugging_face`, `llama_cpp`, `mistral_ai`, `ollama`, `openai`, and `replicate`. The selected LLM will be used to generate completions.
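The generated models and controllers wrap the assistant workflow, and their exact interface may vary by version. As a minimal sketch, here is how the underlying `Langchain::Assistant` API from the langchainrb gem is typically used (an assumption about the wrapped behavior, not the generated code itself):

```ruby
# A minimal sketch using langchainrb's Langchain::Assistant directly;
# the Rails models/controllers created by the generator wrap similar behavior.
llm = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"])

assistant = Langchain::Assistant.new(
  llm: llm,
  instructions: "You are a helpful product assistant."
)

assistant.add_message(content: "What can you help me with?")
assistant.run

assistant.messages.last.content # => the assistant's latest reply
```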
To remove the generated files, run:

```bash
rails destroy langchainrb_rails:assistant
```