ChatDocs

Chat with your documents offline using AI. No data leaves your system. An internet connection is only required to install the tool and download the AI models. It is based on PrivateGPT but has more features.

[Web UI screenshot]


Features

<details> <summary><strong>Show supported document types</strong></summary><br>
| Extension | Format |
| --- | --- |
| `.csv` | CSV |
| `.docx`, `.doc` | Word Document |
| `.enex` | EverNote |
| `.eml` | Email |
| `.epub` | EPub |
| `.html` | HTML |
| `.md` | Markdown |
| `.msg` | Outlook Message |
| `.odt` | Open Document Text |
| `.pdf` | Portable Document Format (PDF) |
| `.pptx`, `.ppt` | PowerPoint Document |
| `.txt` | Text file (UTF-8) |
</details>
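Internally, the loader for each file is chosen by its extension. A minimal sketch of that kind of dispatch, using the table above (the mapping and function names here are illustrative, not chatdocs' actual internals):

```python
from pathlib import Path

# Illustrative extension-to-format lookup (not chatdocs' internal loader table).
FORMATS = {
    ".csv": "CSV",
    ".docx": "Word Document",
    ".doc": "Word Document",
    ".enex": "EverNote",
    ".eml": "Email",
    ".epub": "EPub",
    ".html": "HTML",
    ".md": "Markdown",
    ".msg": "Outlook Message",
    ".odt": "Open Document Text",
    ".pdf": "Portable Document Format (PDF)",
    ".pptx": "PowerPoint Document",
    ".ppt": "PowerPoint Document",
    ".txt": "Text file (UTF-8)",
}

def detect_format(path: str) -> str:
    """Return the document format for a path, or raise for unsupported types."""
    suffix = Path(path).suffix.lower()
    try:
        return FORMATS[suffix]
    except KeyError:
        raise ValueError(f"Unsupported document type: {suffix}")

print(detect_format("notes/Report.PDF"))  # suffix matching is case-insensitive
```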

Installation

Install the tool using:

```sh
pip install chatdocs
```

Download the AI models using:

```sh
chatdocs download
```

The tool can now be used offline, without an internet connection.

Usage

Add a directory containing documents to chat with using:

```sh
chatdocs add /path/to/documents
```

The processed documents will be stored in the `db` directory by default.
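The storage location belongs to the Chroma settings in `chatdocs.yml`. Assuming the `chroma.persist_directory` key from the default config (verify the key name against the default `chatdocs.yml` for your version), it can be changed with a fragment like:

```yml
chroma:
  persist_directory: /path/to/db
```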

Chat with your documents using:

```sh
chatdocs ui
```

Open http://localhost:5000 in your browser to access the web UI.
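The address and port appear as top-level options in `chatdocs.yml` (defaults shown below; verify against the default config for your version), so a fragment like the following would serve the UI elsewhere:

```yml
host: localhost
port: 5000
```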

It also has a nice command-line interface:

```sh
chatdocs chat
```
<details> <summary><strong>Show preview</strong></summary><br>

Demo

</details>

Configuration

All configuration options can be changed using the `chatdocs.yml` config file. Create a `chatdocs.yml` file in some directory and run all commands from that directory. For reference, see the default `chatdocs.yml` file.

You don't have to copy the entire file; just add the config options you want to change, and they will be merged with the default config. For example, see `tests/fixtures/chatdocs.yml`, which changes only some of the config options.
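The merge behaves like a recursive dictionary update: keys you set override the defaults, and nested sections are merged rather than replaced wholesale. A minimal sketch of such a merge (illustrative, not chatdocs' implementation):

```python
def merge(default: dict, override: dict) -> dict:
    """Recursively merge override into default, returning a new dict."""
    result = dict(default)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = merge(result[key], value)  # merge nested sections
        else:
            result[key] = value  # scalar or new key: override wins
    return result

default = {"llm": "ctransformers", "embeddings": {"model": "hkunlp/instructor-large"}}
user = {"embeddings": {"model": "all-MiniLM-L6-v2"}}
config = merge(default, user)
print(config["llm"])                  # untouched default: ctransformers
print(config["embeddings"]["model"])  # overridden: all-MiniLM-L6-v2
```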

Embeddings

To change the embeddings model, add and change the following in your chatdocs.yml:

```yml
embeddings:
  model: hkunlp/instructor-large
```

Note: When you change the embeddings model, delete the db directory and add documents again.

CTransformers

To change the CTransformers (GGML/GGUF) model, add and change the following in your chatdocs.yml:

```yml
ctransformers:
  model: TheBloke/Wizard-Vicuna-7B-Uncensored-GGML
  model_file: Wizard-Vicuna-7B-Uncensored.ggmlv3.q4_0.bin
  model_type: llama
```

Note: When you add a new model for the first time, run chatdocs download to download the model before using it.

You can also use an existing local model file:

```yml
ctransformers:
  model: /path/to/ggml-model.bin
  model_type: llama
```

🤗 Transformers

To use 🤗 Transformers models, add the following to your chatdocs.yml:

```yml
llm: huggingface
```

To change the 🤗 Transformers model, add and change the following in your chatdocs.yml:

```yml
huggingface:
  model: TheBloke/Wizard-Vicuna-7B-Uncensored-HF
```

Note: When you add a new model for the first time, run chatdocs download to download the model before using it.

To use GPTQ models with 🤗 Transformers, install the necessary packages using:

```sh
pip install chatdocs[gptq]
```

GPU

Embeddings

To enable GPU (CUDA) support for the embeddings model, add the following to your chatdocs.yml:

```yml
embeddings:
  model_kwargs:
    device: cuda
```

You may have to reinstall PyTorch with CUDA enabled by following the instructions on the PyTorch website.

CTransformers

To enable GPU (CUDA) support for the CTransformers (GGML/GGUF) model, add the following to your chatdocs.yml:

```yml
ctransformers:
  config:
    gpu_layers: 50
```

You may have to install the CUDA libraries using:

```sh
pip install ctransformers[cuda]
```

🤗 Transformers

To enable GPU (CUDA) support for the 🤗 Transformers model, add the following to your chatdocs.yml:

```yml
huggingface:
  device: 0
```

You may have to reinstall PyTorch with CUDA enabled by following the instructions on the PyTorch website.

License

MIT