# GPT-4 & LangChain - Create a ChatGPT Chatbot for Your PDF Files
Use the new GPT-4 API to build a ChatGPT chatbot for multiple large PDF files.
Tech stack used includes LangChain, Pinecone, TypeScript, OpenAI, and Next.js. LangChain is a framework that makes it easier to build scalable AI/LLM apps and chatbots. Pinecone is a vector store for storing embeddings and your PDF text so that similar documents can later be retrieved.
Join the Discord if you have questions.
The visual guide for this repo and tutorial is in the `visual guide` folder.
If you run into errors, please review the troubleshooting section further down this page.
Prelude: Please make sure you have Node.js installed on your system and that the version is 18 or greater.
## Development
- Clone the repo or download the ZIP

  `git clone [github https url]`
- Install packages
First run npm install yarn -g
to install yarn globally (if you haven't already).
Then run:
yarn install
After installation, you should now see a node_modules
folder.
- Set up your `.env` file: copy `.env.example` into `.env`. Your `.env` file should look like this:
OPENAI_API_KEY=
PINECONE_API_KEY=
PINECONE_ENVIRONMENT=
PINECONE_INDEX_NAME=
- Visit OpenAI to retrieve API keys and insert them into your `.env` file.

- Visit Pinecone to create and retrieve your API keys, and also retrieve your environment and index name from the dashboard.
- In the `config` folder, replace `PINECONE_NAME_SPACE` with a namespace where you'd like to store your embeddings on Pinecone when you run `npm run ingest`. This namespace will later be used for queries and retrieval (see the config sketch after this list).
- In `utils/makechain.ts`, change the `QA_PROMPT` for your own use case, and change `modelName` in `new OpenAI` to `gpt-4` if you have access to the `gpt-4` API (see the makechain sketch after this list). Please verify outside this repo that you have access to the `gpt-4` API, otherwise the application will not work.
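For reference, here is a minimal sketch of what the namespace constant in the `config` folder might look like. The file name `config/pinecone.ts` and the default value are assumptions; check your own clone for the actual contents:

```typescript
// config/pinecone.ts (sketch -- your actual file may differ)
// Namespace under which your embeddings are stored in the Pinecone index.
// Keep it lowercase; the same value is used later for queries and retrieval.
const PINECONE_NAME_SPACE = 'pdf-test'; // replace with your own namespace

export { PINECONE_NAME_SPACE };
```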
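And a rough sketch of the `QA_PROMPT` and `modelName` changes in `utils/makechain.ts`, assuming a LangChain version whose `OpenAI` wrapper accepts a `modelName` option (import paths differ between LangChain releases, so adapt this to the file in your clone):

```typescript
// utils/makechain.ts (sketch -- adapt to the actual file in this repo)
import { OpenAI } from 'langchain/llms/openai';

// Customize this prompt for your own use case.
const QA_PROMPT = `You are a helpful AI assistant. Use the following pieces of
context to answer the question at the end. If you don't know the answer, just
say you don't know; do not make one up.

{context}

Question: {question}
Helpful answer:`;

const model = new OpenAI({
  temperature: 0,
  modelName: 'gpt-4', // only use 'gpt-4' if your OpenAI account has API access to it
});
```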
## Convert your PDF files to embeddings
This repo can load multiple PDF files
- Inside the `docs` folder, add your PDF files or folders that contain PDF files.

- Run the script `yarn run ingest` to 'ingest' and embed your docs (a sketch of what this step does follows this list). If you run into errors, troubleshoot below.

- Check the Pinecone dashboard to verify your namespace and vectors have been added.
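For context, the ingest step roughly does the following: load the PDFs, split them into chunks, embed each chunk with OpenAI, and upsert the vectors into your Pinecone namespace. Below is a simplified sketch, assuming the LangChain and Pinecone client versions this repo was written against; import paths and the Pinecone client API differ across versions, so treat this as illustrative rather than a drop-in replacement for the real script:

```typescript
// simplified ingest sketch -- the real scripts/ingest script may differ
import { PineconeClient } from '@pinecone-database/pinecone';
import { DirectoryLoader } from 'langchain/document_loaders/fs/directory';
import { PDFLoader } from 'langchain/document_loaders/fs/pdf';
import { RecursiveCharacterTextSplitter } from 'langchain/text_splitter';
import { OpenAIEmbeddings } from 'langchain/embeddings/openai';
import { PineconeStore } from 'langchain/vectorstores/pinecone';

export const run = async () => {
  // 1. Load every PDF in the docs folder (and its subfolders).
  const loader = new DirectoryLoader('docs', {
    '.pdf': (path) => new PDFLoader(path),
  });
  const rawDocs = await loader.load();

  // 2. Split documents into overlapping chunks for better retrieval.
  const splitter = new RecursiveCharacterTextSplitter({
    chunkSize: 1000,
    chunkOverlap: 200,
  });
  const docs = await splitter.splitDocuments(rawDocs);

  // 3. Connect to Pinecone using the values from your .env file.
  const pinecone = new PineconeClient();
  await pinecone.init({
    apiKey: process.env.PINECONE_API_KEY ?? '',
    environment: process.env.PINECONE_ENVIRONMENT ?? '',
  });
  const pineconeIndex = pinecone.Index(process.env.PINECONE_INDEX_NAME ?? '');

  // 4. Embed the chunks with OpenAI (1536-dimensional vectors) and upsert
  //    them into the namespace configured in the config folder.
  await PineconeStore.fromDocuments(docs, new OpenAIEmbeddings(), {
    pineconeIndex,
    namespace: 'your-namespace', // use your PINECONE_NAME_SPACE value
    textKey: 'text',
  });
};
```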
## Run the app
Once you've verified that the embeddings and content have been successfully added to your Pinecone index, you can run the app with `npm run dev` to launch the local dev environment, and then type a question in the chat interface.
## Troubleshooting
In general, keep an eye on the issues and discussions sections of this repo for solutions.
### General errors
- Make sure you're running the latest Node version. Run `node -v`.
- Try a different PDF or convert your PDF to text first. It's possible your PDF is corrupted, scanned, or requires OCR to convert to text.
- `console.log` the `env` variables and make sure they are exposed.

- Make sure you're using the same versions of LangChain and Pinecone as this repo.
- Check that you've created a `.env` file that contains your valid (and working) API keys, environment, and index name.

- If you change `modelName` in `OpenAI`, make sure you have access to the API for the appropriate model.

- Make sure you have enough OpenAI credits and a valid card on your billing account.
- Check that you don't have multiple OpenAI keys in your global environment. If you do, the local `.env` file from the project will be overridden by the system's `env` variable.

- Try hard-coding your API keys into the `process.env` variables if there are still issues (see the sketch below).
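As a debugging aid only, here is a minimal sketch of logging and temporarily hard-coding the environment variables. Where exactly you'd put this depends on where the variables are read in your clone, and any hard-coded keys should be removed before committing:

```typescript
// Temporary debugging only -- never commit real keys.
console.log('OPENAI_API_KEY set:', Boolean(process.env.OPENAI_API_KEY));
console.log('PINECONE_ENVIRONMENT:', process.env.PINECONE_ENVIRONMENT);
console.log('PINECONE_INDEX_NAME:', process.env.PINECONE_INDEX_NAME);

// If the values above are undefined, try hard-coding them as a last resort:
process.env.OPENAI_API_KEY = 'sk-...'; // placeholder -- use your own key
process.env.PINECONE_API_KEY = '...';
```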
### Pinecone errors
- Make sure your Pinecone dashboard `environment` and `index` match the ones in the `pinecone.ts` and `.env` files.

- Check that you've set the vector dimensions to `1536`.

- Make sure your Pinecone namespace is in lowercase.
- Pinecone indexes of users on the Starter (free) plan are deleted after 7 days of inactivity. To prevent this, send an API request to Pinecone to reset the counter before day 7.
- Retry from scratch with a new Pinecone project, index, and cloned repo.
## Credit
The frontend of this repo is inspired by langchain-chat-nextjs.