šŸ„¤ RAGLite

RAGLite is a Python toolkit for Retrieval-Augmented Generation (RAG) with PostgreSQL or SQLite.

Features

Configurable
Fast and permissive
Unhobbled
Extensible

Installing

First, install spaCy's multilingual sentence model:

# Install spaCy's xx_sent_ud_sm:
pip install https://github.com/explosion/spacy-models/releases/download/xx_sent_ud_sm-3.7.0/xx_sent_ud_sm-3.7.0-py3-none-any.whl
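
If you want to verify the install, you can load the model and segment a sentence or two (an optional sanity check using standard spaCy calls, not required by RAGLite):

# Verify that the sentence model loads (optional):
import spacy

nlp = spacy.load("xx_sent_ud_sm")
doc = nlp("RAGLite is a Python toolkit. It supports PostgreSQL and SQLite.")
print([sent.text for sent in doc.sents])  # Expect two sentences.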

Next, optionally install an accelerated llama-cpp-python precompiled binary (recommended) with:

# Configure which llama-cpp-python precompiled binary to install (āš ļø On macOS only v0.3.2 is supported right now):
LLAMA_CPP_PYTHON_VERSION=0.3.2
PYTHON_VERSION=310
ACCELERATOR=metal|cu121|cu122|cu123|cu124
PLATFORM=macosx_11_0_arm64|linux_x86_64|win_amd64

# Install llama-cpp-python:
pip install "https://github.com/abetlen/llama-cpp-python/releases/download/v$LLAMA_CPP_PYTHON_VERSION-$ACCELERATOR/llama_cpp_python-$LLAMA_CPP_PYTHON_VERSION-cp$PYTHON_VERSION-cp$PYTHON_VERSION-$PLATFORM.whl"

Finally, install RAGLite with:

pip install raglite

To add support for a customizable ChatGPT-like frontend, use the chainlit extra:

pip install raglite[chainlit]

To add support for filetypes other than PDF, use the pandoc extra:

pip install raglite[pandoc]

To add support for evaluation, use the ragas extra:

pip install raglite[ragas]

Using

Overview

  1. Configuring RAGLite
  2. Inserting documents
  3. Retrieval-Augmented Generation (RAG)
  4. Computing and using an optimal query adapter
  5. Evaluation of retrieval and generation
  6. Running a Model Context Protocol (MCP) server
  7. Serving a customizable ChatGPT-like frontend

1. Configuring RAGLite

[!TIP] šŸ§  RAGLite extends LiteLLM with support for llama.cpp models using llama-cpp-python. To select a llama.cpp model (e.g., from bartowski's collection), use a model identifier of the form "llama-cpp-python/<hugging_face_repo_id>/<filename>@<n_ctx>", where n_ctx is an optional parameter that specifies the context size of the model.

[!TIP] šŸ’¾ You can create a PostgreSQL database in a few clicks at neon.tech.

First, configure RAGLite with your preferred PostgreSQL or SQLite database and any LLM supported by LiteLLM:

from raglite import RAGLiteConfig

# Example 'remote' config with a PostgreSQL database and an OpenAI LLM:
my_config = RAGLiteConfig(
    db_url="postgresql://my_username:my_password@my_host:5432/my_database"
    llm="gpt-4o-mini",  # Or any LLM supported by LiteLLM.
    embedder="text-embedding-3-large",  # Or any embedder supported by LiteLLM.
)

# Example 'local' config with a SQLite database and a llama.cpp LLM:
my_config = RAGLiteConfig(
    db_url="sqlite:///raglite.db",
    llm="llama-cpp-python/bartowski/Meta-Llama-3.1-8B-Instruct-GGUF/*Q4_K_M.gguf@8192",
    embedder="llama-cpp-python/lm-kit/bge-m3-gguf/*F16.gguf@1024",  # A context size of 1024 tokens is the sweet spot for bge-m3.
)

You can also configure any reranker supported by rerankers:

from rerankers import Reranker

# Example remote API-based reranker:
my_config = RAGLiteConfig(
    db_url="postgresql://my_username:my_password@my_host:5432/my_database"
    reranker=Reranker("cohere", lang="en", api_key=COHERE_API_KEY)
)

# Example local cross-encoder reranker per language (this is the default):
my_config = RAGLiteConfig(
    db_url="sqlite:///raglite.db",
    reranker=(
        ("en", Reranker("ms-marco-MiniLM-L-12-v2", model_type="flashrank")),  # English
        ("other", Reranker("ms-marco-MultiBERT-L-12", model_type="flashrank")),  # Other languages
    )
)

2. Inserting documents

[!TIP] āœļø To insert documents other than PDF, install the pandoc extra with pip install raglite[pandoc].

Next, insert some documents into the database. RAGLite will take care of the conversion to Markdown, optimal level 4 semantic chunking, and multi-vector embedding with late chunking:

# Insert documents:
from pathlib import Path
from raglite import insert_document

insert_document(Path("On the Measure of Intelligence.pdf"), config=my_config)
insert_document(Path("Special Relativity.pdf"), config=my_config)

3. Retrieval-Augmented Generation (RAG)

3.1 Adaptive RAG

Now you can run an adaptive RAG pipeline that consists of adding the user prompt to the message history and streaming the LLM response:

from raglite import rag

# Create a user message:
messages = []  # Or start with an existing message history.
messages.append({
    "role": "user",
    "content": "How is intelligence measured?"
})

# Adaptively decide whether to retrieve and then stream the response:
chunk_spans = []
stream = rag(messages, on_retrieval=lambda x: chunk_spans.extend(x), config=my_config)
for update in stream:
    print(update, end="")

# Access the documents referenced in the RAG context:
documents = [chunk_span.document for chunk_span in chunk_spans]

The LLM will adaptively decide whether to retrieve information based on the complexity of the user prompt. If retrieval is necessary, the LLM generates the search query and RAGLite applies hybrid search and reranking to retrieve the most relevant chunk spans (each of which is a list of consecutive chunks). The retrieval results are sent to the on_retrieval callback and are appended to the message history as a tool output. Finally, the assistant response is streamed and appended to the message history.
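
Because both the retrieval results and the assistant response are appended to the message history, you can continue the conversation by reusing the same messages list. A minimal sketch of a follow-up turn (the question text is just an example):

# Ask a follow-up question in the same conversation:
messages.append({"role": "user", "content": "Why is skill-acquisition efficiency relevant?"})
stream = rag(messages, on_retrieval=lambda x: chunk_spans.extend(x), config=my_config)
for update in stream:
    print(update, end="")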

3.2 Programmable RAG

If you need manual control over the RAG pipeline, you can run a basic but powerful pipeline yourself: retrieve the most relevant chunk spans with hybrid search and reranking, convert the user prompt to a RAG instruction and append it to the message history, and finally generate the RAG response:

from raglite import create_rag_instruction, rag, retrieve_rag_context

# Retrieve relevant chunk spans with hybrid search and reranking:
user_prompt = "How is intelligence measured?"
chunk_spans = retrieve_rag_context(query=user_prompt, num_chunks=5, config=my_config)

# Append a RAG instruction based on the user prompt and context to the message history:
messages = []  # Or start with an existing message history.
messages.append(create_rag_instruction(user_prompt=user_prompt, context=chunk_spans))

# Stream the RAG response and append it to the message history:
stream = rag(messages, config=my_config)
for update in stream:
    print(update, end="")

# Access the documents referenced in the RAG context:
documents = [chunk_span.document for chunk_span in chunk_spans]

[!TIP] šŸ„‡ Reranking can significantly improve the output quality of a RAG application. To add reranking to your application: first search for a larger set of 20 relevant chunks, then rerank them with a rerankers reranker, and finally keep the top 5 chunks.

RAGLite also offers more advanced control over the individual steps of a full RAG pipeline:

  1. Searching for relevant chunks with keyword, vector, or hybrid search
  2. Retrieving the chunks from the database
  3. Reranking the chunks and selecting the top 5 results
  4. Extending the chunks with their neighbors and grouping them into chunk spans
  5. Converting the user prompt to a RAG instruction and appending it to the message history
  6. Streaming an LLM response to the message history
  7. Accessing the cited documents from the chunk spans

A full RAG pipeline is straightforward to implement with RAGLite:

# Search for chunks:
from raglite import hybrid_search, keyword_search, vector_search

user_prompt = "How is intelligence measured?"
chunk_ids_vector, _ = vector_search(user_prompt, num_results=20, config=my_config)
chunk_ids_keyword, _ = keyword_search(user_prompt, num_results=20, config=my_config)
chunk_ids_hybrid, _ = hybrid_search(user_prompt, num_results=20, config=my_config)

# Retrieve chunks:
from raglite import retrieve_chunks

chunks_hybrid = retrieve_chunks(chunk_ids_hybrid, config=my_config)

# Rerank chunks and keep the top 5 (optional, but recommended):
from raglite import rerank_chunks

chunks_reranked = rerank_chunks(user_prompt, chunks_hybrid, config=my_config)
chunks_reranked = chunks_reranked[:5]

# Extend chunks with their neighbors and group them into chunk spans:
from raglite import retrieve_chunk_spans

chunk_spans = retrieve_chunk_spans(chunks_reranked, config=my_config)

# Append a RAG instruction based on the user prompt and context to the message history:
from raglite import create_rag_instruction

messages = []  # Or start with an existing message history.
messages.append(create_rag_instruction(user_prompt=user_prompt, context=chunk_spans))

# Stream the RAG response and append it to the message history:
from raglite import rag

stream = rag(messages, config=my_config)
for update in stream:
    print(update, end="")

# Access the documents referenced in the RAG context:
documents = [chunk_span.document for chunk_span in chunk_spans]

4. Computing and using an optimal query adapter

RAGLite can compute and apply an optimal closed-form query adapter to the prompt embedding to improve the output quality of RAG. To benefit from this, first generate a set of evals with insert_evals and then compute and store the optimal query adapter with update_query_adapter:

# Improve RAG with an optimal query adapter:
from raglite import insert_evals, update_query_adapter

insert_evals(num_evals=100, config=my_config)
update_query_adapter(config=my_config)  # From here, every vector search will use the query adapter.
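
From that point on, vector search transparently applies the query adapter to the prompt embedding. For example, calling vector_search as before benefits from it automatically:

# Vector search now uses the query adapter automatically:
from raglite import vector_search

chunk_ids, _ = vector_search("How is intelligence measured?", num_results=10, config=my_config)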

5. Evaluation of retrieval and generation

If you installed the ragas extra, you can use RAGLite to answer the evals and then evaluate the quality of both the retrieval and generation steps of RAG using Ragas:

# Evaluate retrieval and generation:
from raglite import answer_evals, evaluate, insert_evals

insert_evals(num_evals=100, config=my_config)
answered_evals_df = answer_evals(num_evals=10, config=my_config)
evaluation_df = evaluate(answered_evals_df, config=my_config)
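
The _df suffixes suggest pandas DataFrames; if so, you can summarize the Ragas metrics directly (a minimal, optional sketch under that assumption):

# Summarize the evaluation metrics (assumes a pandas DataFrame):
print(evaluation_df.mean(numeric_only=True))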

6. Running a Model Context Protocol (MCP) server

RAGLite comes with an MCP server implemented with FastMCP that exposes a search_knowledge_base tool. To use the server:

  1. Install Claude desktop
  2. Install uv so that Claude desktop can start the server
  3. Configure Claude desktop to use uv to start the MCP server with:
raglite \
    --db-url sqlite:///raglite.db \
    --llm llama-cpp-python/bartowski/Llama-3.2-3B-Instruct-GGUF/*Q4_K_M.gguf@4096 \
    --embedder llama-cpp-python/lm-kit/bge-m3-gguf/*F16.gguf@1024 \
    mcp install

To use an API-based LLM, make sure to include your credentials in a .env file or supply them inline:

OPENAI_API_KEY=sk-... raglite --llm gpt-4o-mini --embedder text-embedding-3-large mcp install

Now, when you start Claude desktop you should see a šŸ”Ø icon at the bottom right of your prompt indicating that Claude has successfully connected to the MCP server.

When relevant, Claude will suggest using the search_knowledge_base tool that the MCP server provides. You can also explicitly ask Claude to search the knowledge base if you want to be certain that it does.

<div align="center"><video src="https://github.com/user-attachments/assets/3a597a17-874e-475f-a6dd-cd3ccf360fb9" /></div>

7. Serving a customizable ChatGPT-like frontend

If you installed the chainlit extra, you can serve a customizable ChatGPT-like frontend with:

raglite chainlit

The application is also deployable to web, Slack, and Teams.

You can specify the database URL, LLM, and embedder directly in the Chainlit frontend, or with the CLI as follows:

raglite \
    --db-url sqlite:///raglite.db \
    --llm llama-cpp-python/bartowski/Llama-3.2-3B-Instruct-GGUF/*Q4_K_M.gguf@4096 \
    --embedder llama-cpp-python/lm-kit/bge-m3-gguf/*F16.gguf@1024 \
    chainlit

To use an API-based LLM, make sure to include your credentials in a .env file or supply them inline:

OPENAI_API_KEY=sk-... raglite --llm gpt-4o-mini --embedder text-embedding-3-large chainlit
<div align="center"><video src="https://github.com/user-attachments/assets/a303ed4a-54cd-45ea-a2b5-86e086053aed" /></div>

Contributing

<details> <summary>Prerequisites</summary> <details> <summary>1. Set up Git to use SSH</summary>
  1. Generate an SSH key and add the SSH key to your GitHub account.
  2. Configure SSH to automatically load your SSH keys:
    cat << EOF >> ~/.ssh/config
    
    Host *
      AddKeysToAgent yes
      IgnoreUnknown UseKeychain
      UseKeychain yes
      ForwardAgent yes
    EOF
    
</details> <details> <summary>2. Install Docker</summary>
  1. Install Docker Desktop.
</details> <details> <summary>3. Install VS Code or PyCharm</summary>
  1. Install VS Code and VS Code's Dev Containers extension. Alternatively, install PyCharm.
  2. Optional: install a Nerd Font such as FiraCode Nerd Font and configure VS Code or PyCharm to use it.
</details> </details> <details open> <summary>Development environments</summary>

The following development environments are supported:

  1. ā­ļø GitHub Codespaces: click on Code and select Create codespace to start a Dev Container with GitHub Codespaces.
  2. ā­ļø Dev Container (with container volume): click on Open in Dev Containers to clone this repository in a container volume and create a Dev Container with VS Code.
  3. Dev Container: clone this repository, open it with VS Code, and run <kbd>Ctrl/āŒ˜</kbd> + <kbd>ā‡§</kbd> + <kbd>P</kbd> ā†’ Dev Containers: Reopen in Container.
  4. PyCharm: clone this repository, open it with PyCharm, and configure Docker Compose as a remote interpreter with the dev service.
  5. Terminal: clone this repository, open it with your terminal, and run docker compose up --detach dev to start a Dev Container in the background, and then run docker compose exec dev zsh to open a shell prompt in the Dev Container.
</details> <details> <summary>Developing</summary> </details>

Star History

<a href="https://star-history.com/#superlinear-ai/raglite&Timeline"> <picture> <source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=superlinear-ai/raglite&type=Timeline&theme=dark" /> <source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=superlinear-ai/raglite&type=Timeline" /> <img alt="Star History Chart" src="https://api.star-history.com/svg?repos=superlinear-ai/raglite&type=Timeline" /> </picture> </a>

Footnotes

  1. We use PyNNDescent until sqlite-vec is more mature.