LocalGPT: Secure, Local Conversations with Your Documents 🌐

<p align="center"> <a href="https://trendshift.io/repositories/2947" target="_blank"><img src="https://trendshift.io/api/badge/repositories/2947" alt="PromtEngineer%2FlocalGPT | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a> </p>


🚨🚨 You can run localGPT on a pre-configured Virtual Machine. Make sure to use the code: PromptEngineering to get 50% off. I will get a small commission!

LocalGPT is an open-source initiative that allows you to converse with your documents without compromising your privacy. With everything running locally, you can be assured that no data ever leaves your computer. Dive into the world of secure, local document interactions with LocalGPT.

Features 🌟

Dive Deeper with Our Videos 🎥

Technical Details 🛠️

By selecting the right local models and leveraging the power of LangChain, you can run the entire RAG pipeline locally, without any data leaving your environment, and with reasonable performance.

This project was inspired by the original privateGPT.

Built Using 🧩

Environment Setup 🌍

  1. 📥 Clone the repo using git:

git clone https://github.com/PromtEngineer/localGPT.git

  2. 🐍 Install conda for virtual environment management. Create and activate a new virtual environment:

conda create -n localGPT python=3.10.0
conda activate localGPT

  3. 🛠️ Install the dependencies using pip. To set up your environment to run the code, first install all requirements:

Installing LLAMA-CPP:

LocalGPT uses llama-cpp-python for GGML models (you will need llama-cpp-python <=0.1.76) and GGUF models (llama-cpp-python >=0.1.83).

To run the quantized Llama3 model, ensure you have llama-cpp-python version 0.2.62 or higher installed.
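If you are not sure which version is currently installed, a quick check such as the sketch below can help (this assumes llama-cpp-python was installed with pip into the active environment):

# Minimal sketch: check the installed llama-cpp-python version.
from importlib.metadata import PackageNotFoundError, version

try:
    installed = version("llama-cpp-python")
    # GGUF models need >= 0.1.83; the quantized Llama3 path needs >= 0.2.62.
    print(f"llama-cpp-python {installed} is installed.")
except PackageNotFoundError:
    print("llama-cpp-python is not installed in this environment.")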

If you want to use BLAS or Metal with llama-cpp, you can set the appropriate flags:

For NVIDIA GPU support, use cuBLAS:

# Example: cuBLAS
CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python --no-cache-dir

For Apple Metal (M1/M2) support, use:

# Example: METAL
CMAKE_ARGS="-DLLAMA_METAL=on"  FORCE_CMAKE=1 pip install llama-cpp-python --no-cache-dir

For more details, please refer to llama-cpp

Docker 🐳

Installing the required packages for GPU inference on NVIDIA GPUs, such as gcc 11 and CUDA 11, may cause conflicts with other packages on your system. As an alternative to Conda, you can use Docker with the provided Dockerfile, which includes CUDA; your system only needs Docker, BuildKit, an NVIDIA GPU driver, and the NVIDIA Container Toolkit. Build the image with docker build -t localgpt . (this requires BuildKit; note that Docker BuildKit currently does not support GPU access during docker build, only during docker run). Run the container with docker run -it --mount src="$HOME/.cache",target=/root/.cache,type=bind --gpus=all localgpt. For running the code on Intel® Gaudi® HPUs, use the provided Dockerfile_hpu instead.

Test dataset

For testing, this repository comes with the Constitution of the USA as an example file.

Ingesting your OWN Data.

Put your files in the SOURCE_DOCUMENTS folder. You can put multiple folders within the SOURCE_DOCUMENTS folder and the code will recursively read your files.

Supported file formats:

LocalGPT currently supports the file formats listed below and uses LangChain to load them. The DOCUMENT_MAP dictionary in constants.py maps each file extension to the corresponding loader. To add support for another file format, simply add the extension and the matching LangChain loader to this dictionary (see the sketch after the dictionary below).

DOCUMENT_MAP = {
    ".txt": TextLoader,
    ".md": TextLoader,
    ".py": TextLoader,
    ".pdf": PDFMinerLoader,
    ".csv": CSVLoader,
    ".xls": UnstructuredExcelLoader,
    ".xlsx": UnstructuredExcelLoader,
    ".docx": Docx2txtLoader,
    ".doc": Docx2txtLoader,
}
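For example, to add support for HTML files you could map the .html extension to one of LangChain's HTML loaders. The snippet below is a sketch only; the exact import path depends on the LangChain version pinned in requirements.txt, and UnstructuredHTMLLoader additionally requires the unstructured package:

# Sketch: extending DOCUMENT_MAP in constants.py with an HTML loader.
from langchain.document_loaders import UnstructuredHTMLLoader

DOCUMENT_MAP[".html"] = UnstructuredHTMLLoader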

Ingest

Run the following command to ingest all the data.

If you have CUDA set up on your system:

python ingest.py

You will see an output like this: <img width="1110" alt="Screenshot 2023-09-14 at 3 36 27 PM" src="https://github.com/PromtEngineer/localGPT/assets/134474669/c9274e9a-842c-49b9-8d95-606c3d80011f">

Use the --device_type argument to specify a given device. To run on CPU:

python ingest.py --device_type cpu

To run on M1/M2:

python ingest.py --device_type mps

Use --help for a full list of supported devices.

python ingest.py --help

This will create a new folder called DB and use it for the newly created vector store. You can ingest as many documents as you want; all of them will be accumulated in the local embeddings database. If you want to start from an empty database, delete the DB folder and re-ingest your documents.

Note: When you run this for the first time, it will need internet access to download the embedding model (default: Instructor Embedding). In subsequent runs, no data will leave your local environment, and you can ingest data without an internet connection.
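If you want to sanity-check the ingested vector store outside of run_localGPT.py, a minimal sketch like the one below loads the persisted database and runs a similarity search. The DB path and embedding model name here are assumptions based on the defaults; check constants.py for the values your installation actually uses.

# Sketch: query the persisted vector store directly.
# The persist directory ("DB") and embedding model are assumed defaults; see constants.py.
from langchain.embeddings import HuggingFaceInstructEmbeddings
from langchain.vectorstores import Chroma

embeddings = HuggingFaceInstructEmbeddings(model_name="hkunlp/instructor-large")
db = Chroma(persist_directory="DB", embedding_function=embeddings)

docs = db.similarity_search("What does the document say about free speech?", k=4)
for doc in docs:
    print(doc.metadata.get("source"), doc.page_content[:200])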

Ask questions to your documents, locally!

In order to chat with your documents, run the following command (by default, it will run on cuda).

python run_localGPT.py

You can also specify the device type, just like with ingest.py:

python run_localGPT.py --device_type mps # to run on Apple silicon
# To run on Intel® Gaudi® hpu
MODEL_ID = "mistralai/Mistral-7B-Instruct-v0.2" # in constants.py
python run_localGPT.py --device_type hpu

This will load the ingested vector store and embedding model. You will be presented with a prompt:

> Enter a query:

After typing your question, hit Enter. LocalGPT will take some time, depending on your hardware. You will get a response like the one below. <img width="1312" alt="Screenshot 2023-09-14 at 3 33 19 PM" src="https://github.com/PromtEngineer/localGPT/assets/134474669/a7268de9-ade0-420b-a00b-ed12207dbe41">

Once the answer is generated, you can ask another question without re-running the script; just wait for the prompt to appear again.

Note: When you run this for the first time, it will need an internet connection to download the LLM (default: TheBloke/Llama-2-7b-Chat-GGUF). After that you can turn off your internet connection and inference will still work. No data leaves your local environment.

Type exit to finish the script.

Extra Options with run_localGPT.py

You can use the --show_sources flag with run_localGPT.py to show which chunks were retrieved by the embedding model. By default, it will show 4 different sources/chunks; you can change the number of retrieved sources/chunks in the code.

python run_localGPT.py --show_sources

Another option is to enable chat history. Note: this is disabled by default and can be enabled with the --use_history flag. The context window is limited, so keep in mind that enabling history consumes part of it and might cause it to overflow.

python run_localGPT.py --use_history

You can store user questions and model responses in a CSV file at /local_chat_history/qa_log.csv with the --save_qa flag. Every interaction will be stored.

python run_localGPT.py --save_qa
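If you later want to review the saved interactions, a short script such as the sketch below reads the log back. The file location follows the path above, and no particular column layout is assumed:

# Sketch: print the saved Q&A log created by --save_qa.
import csv

with open("local_chat_history/qa_log.csv", newline="", encoding="utf-8") as f:
    for row in csv.reader(f):
        print(row)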

Run the Graphical User Interface

  1. Open constants.py in an editor of your choice and add the LLM you want to use. By default, the following model will be used:

    MODEL_ID = "TheBloke/Llama-2-7b-Chat-GGUF"
    MODEL_BASENAME = "llama-2-7b-chat.Q4_K_M.gguf"
    
  2. Open up a terminal and activate your python environment that contains the dependencies installed from requirements.txt.

  3. Navigate to the /LOCALGPT directory.

  4. Run the following command: python run_localGPT_API.py. The API should begin to run.

  5. Wait until everything has loaded in. You should see something like INFO:werkzeug:Press CTRL+C to quit.

  6. Open up a second terminal and activate the same python environment.

  7. Navigate to the /LOCALGPT/localGPTUI directory.

  8. Run the command python localGPTUI.py.

  9. Open up a web browser and go to the address http://localhost:5111/.

How to select different LLM models?

To change the models you will need to set both MODEL_ID and MODEL_BASENAME.

  1. Open up constants.py in the editor of your choice.

  2. Change the MODEL_ID and MODEL_BASENAME. If you are using a quantized model (GGML, GPTQ, GGUF), you will need to provide MODEL_BASENAME. For unquantized models, set MODEL_BASENAME to None (see the sketch after this list).

  3. There are a number of example models from HuggingFace that have already been tested with this project: original trained models (ending in HF or with a .bin file in their "Files and versions") and quantized models (ending in GPTQ or with .no-act-order or .safetensors files in their "Files and versions").

  4. For models that end with HF or have a .bin file inside their "Files and versions" on their HuggingFace page:

    • Make sure you have a MODEL_ID selected. For example -> MODEL_ID = "TheBloke/guanaco-7B-HF"
    • Go to the HuggingFace Repo
  5. For models that contain GPTQ in their name and/or have a .no-act-order or .safetensors extension inside their "Files and versions" on their HuggingFace page:

    • Make sure you have a MODEL_ID selected. For example -> MODEL_ID = "TheBloke/wizardLM-7B-GPTQ"
    • Go to the corresponding HuggingFace Repo and select "Files and versions".
    • Pick one of the model names and set it as MODEL_BASENAME. For example -> MODEL_BASENAME = "wizardLM-7B-GPTQ-4bit.compat.no-act-order.safetensors"
  6. Follow the same steps for GGUF and GGML models.
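Putting this together, the constants.py entries for the three common cases might look like the sketch below. The model names are the illustrative examples from the steps above, not recommendations:

# Sketch: example MODEL_ID / MODEL_BASENAME combinations in constants.py.

# GGUF (quantized, loaded through llama-cpp-python):
MODEL_ID = "TheBloke/Llama-2-7b-Chat-GGUF"
MODEL_BASENAME = "llama-2-7b-chat.Q4_K_M.gguf"

# GPTQ (quantized; pick the exact file name from "Files and versions"):
# MODEL_ID = "TheBloke/wizardLM-7B-GPTQ"
# MODEL_BASENAME = "wizardLM-7B-GPTQ-4bit.compat.no-act-order.safetensors"

# Unquantized HF model (no basename needed):
# MODEL_ID = "TheBloke/guanaco-7B-HF"
# MODEL_BASENAME = None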

GPU and VRAM Requirements

Below are the VRAM requirements for different models depending on their size (billions of parameters). The estimates in the table do not include the VRAM used by the embedding models, which require an additional 2GB-7GB depending on the model.

| Model Size (B) | float32 | float16 | GPTQ 8bit | GPTQ 4bit |
|---|---|---|---|---|
| 7B | 28 GB | 14 GB | 7 GB - 9 GB | 3.5 GB - 5 GB |
| 13B | 52 GB | 26 GB | 13 GB - 15 GB | 6.5 GB - 8 GB |
| 32B | 130 GB | 65 GB | 32.5 GB - 35 GB | 16.25 GB - 19 GB |
| 65B | 260.8 GB | 130.4 GB | 65.2 GB - 67 GB | 32.6 GB - 35 GB |
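These figures follow from a simple rule of thumb: bytes per parameter multiplied by parameter count, plus overhead for activations and the KV cache (which is why the quantized columns are given as ranges). The sketch below shows the arithmetic behind the float32/float16 columns:

# Sketch: rough VRAM estimate from parameter count and precision.
# Ignores activation/KV-cache overhead, so real usage is somewhat higher.
BYTES_PER_PARAM = {"float32": 4, "float16": 2, "gptq_8bit": 1, "gptq_4bit": 0.5}

def vram_gb(params_billions: float, precision: str) -> float:
    return params_billions * BYTES_PER_PARAM[precision]

print(vram_gb(7, "float16"))   # ~14 GB, matching the 7B row
print(vram_gb(13, "float32"))  # ~52 GB, matching the 13B row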

System Requirements

Python Version

To use this software, you must have Python 3.10 or later installed. Earlier versions of Python are not supported.
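You can confirm that your interpreter meets this requirement with a quick check:

# Confirm the interpreter is Python 3.10 or later.
import sys

assert sys.version_info >= (3, 10), f"Python 3.10+ required, found {sys.version}"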

C++ Compiler

If you encounter an error while building a wheel during the pip install process, you may need to install a C++ compiler on your computer.

For Windows 10/11

To install a C++ compiler on Windows 10/11, follow these steps:

  1. Install Visual Studio 2022.
  2. Make sure the following components are selected:
    • Universal Windows Platform development
    • C++ CMake tools for Windows
  3. Download the MinGW installer from the MinGW website.
  4. Run the installer and select the "gcc" component.

NVIDIA Driver Issues

Follow this page to install NVIDIA Drivers.

Star History

Star History Chart

Disclaimer

This is a test project to validate the feasibility of a fully local solution for question answering using LLMs and vector embeddings. It is not production ready and is not meant to be used in production. Vicuna-7B is based on the Llama model, so it falls under the original Llama license.

Common Errors