🦜️🔗 Chat LangChain.js

This repo is an implementation of a locally hosted chatbot specifically focused on question answering over the LangChain documentation. Built with LangChain.js and Next.js.

Deployed version: chatjs.langchain.com

Looking for the Python version? Click here

✅ Local development

  1. Install dependencies by running yarn install.
  2. Set the required environment variables listed inside backend/.env.example for the backend, and frontend/.env.example for the frontend.

Ingest

  1. Build the backend via yarn build --filter=backend (from root).
  2. Run the ingestion script by navigating into ./backend and running yarn ingest.

Frontend

  1. Navigate into ./frontend and run yarn dev to start the frontend.
  2. Open localhost:3000 in your browser.

📚 Technical description

There are two components: ingestion and question-answering.

Ingestion has the following steps (a rough sketch in code follows the list):

  1. Pull HTML from the documentation site as well as the GitHub codebase.
  2. Load the HTML with LangChain's RecursiveUrlLoader and SitemapLoader.
  3. Split documents with LangChain's RecursiveCharacterTextSplitter.
  4. Create a vectorstore of embeddings, using LangChain's Weaviate vectorstore wrapper (with OpenAI's embeddings).
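For reference, here is a minimal sketch of what that pipeline can look like in LangChain.js, assuming the @langchain/openai, @langchain/weaviate, and weaviate-ts-client packages are installed; the URL, chunk sizes, index name, and environment variable names are illustrative, not the repo's actual configuration.

```typescript
import weaviate from "weaviate-ts-client";
import { RecursiveUrlLoader } from "langchain/document_loaders/web/recursive_url";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
import { OpenAIEmbeddings } from "@langchain/openai";
import { WeaviateStore } from "@langchain/weaviate";

async function ingest() {
  // 1. Pull HTML from the documentation site (illustrative URL and depth).
  const loader = new RecursiveUrlLoader("https://js.langchain.com/docs/", {
    maxDepth: 3,
  });
  const rawDocs = await loader.load();

  // 2. Split the loaded pages into overlapping chunks.
  const splitter = new RecursiveCharacterTextSplitter({
    chunkSize: 1000,
    chunkOverlap: 200,
  });
  const docs = await splitter.splitDocuments(rawDocs);

  // 3. Embed the chunks with OpenAI and store them in Weaviate.
  //    WEAVIATE_HOST and WEAVIATE_API_KEY are hypothetical variable names.
  const client = (weaviate as any).client({
    scheme: "https",
    host: process.env.WEAVIATE_HOST!,
    apiKey: new (weaviate as any).ApiKey(process.env.WEAVIATE_API_KEY!),
  });
  await WeaviateStore.fromDocuments(docs, new OpenAIEmbeddings(), {
    client,
    indexName: "LangChainDocs", // illustrative index name
    textKey: "text",
  });
}

ingest().catch(console.error);
```

The actual ingestion script run via yarn ingest also pulls the GitHub codebase and uses SitemapLoader alongside RecursiveUrlLoader; the sketch above only covers the documentation site.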

Question-answering has the following steps (a sketch in code follows the list):

  1. Given the chat history and new user input, use GPT-3.5 to rephrase the input as a standalone question.
  2. Given that standalone question, look up relevant documents from the vectorstore.
  3. Pass the standalone question and relevant documents to the model to generate and stream the final answer.
  4. Generate a trace URL for the current chat session, as well as an endpoint for collecting feedback.
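Below is a minimal sketch of that flow, assuming @langchain/openai and @langchain/core; the prompts, model name, and the retriever (for example, one obtained from the Weaviate store created at ingestion time) are illustrative stand-ins for the repo's actual chain, and the trace/feedback step is omitted.

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";
import type { BaseRetriever } from "@langchain/core/retrievers";

async function answer(retriever: BaseRetriever, chatHistory: string, question: string) {
  const model = new ChatOpenAI({ modelName: "gpt-3.5-turbo", temperature: 0 });

  // 1. Rephrase the follow-up question into a standalone question.
  const condense = ChatPromptTemplate.fromTemplate(
    "Given the chat history:\n{chatHistory}\n\nRephrase this follow-up as a standalone question: {question}"
  )
    .pipe(model)
    .pipe(new StringOutputParser());
  const standalone = await condense.invoke({ chatHistory, question });

  // 2. Look up relevant documents in the vector store.
  const docs = await retriever.invoke(standalone);
  const context = docs.map((d) => d.pageContent).join("\n\n");

  // 3. Generate and stream the final answer from the standalone question plus retrieved context.
  const respond = ChatPromptTemplate.fromTemplate(
    "Answer the question using only this context:\n{context}\n\nQuestion: {question}"
  )
    .pipe(model)
    .pipe(new StringOutputParser());
  const stream = await respond.stream({ context, question: standalone });
  for await (const chunk of stream) {
    process.stdout.write(chunk);
  }
}
```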

Documentation

Looking to use or modify this Use Case Accelerant for your own needs? We've added a few docs to aid with this: