
<p align="center"> <img src="https://raw.githubusercontent.com/ericzhang-cn/ailingbot/main/img/logo.png" alt="AilingBot" width="300"> </p> <p align="center"><b>AilingBot - One-stop solution to empower your IM bot with AI.</b></p>

What is AilingBot

AilingBot is an open-source engineering development framework and an all-in-one solution for integrating AI models into IM chatbots. With AilingBot, you can quickly connect large language models to popular IM platforms such as WeChat Work, Feishu, DingTalk, and Slack.

Features

🚀 Quick Start

Start an AI chatbot in 5 minutes

Below is a guide on how to quickly start an AI chatbot based on the command-line interface using AilingBot. The effect is shown in the following figure:

<p align="center"> <img src="https://raw.githubusercontent.com/ericzhang-cn/ailingbot/main/img/command-line-screenshot.png" alt="Command-line chatbot"/> </p>

💡 First, you need an OpenAI API key. If you don't have one, see OpenAI's documentation on creating an API key.

Using Docker

```shell
git clone https://github.com/ericzhang-cn/ailingbot.git ailingbot
cd ailingbot
docker build -t ailingbot .
docker run -it --rm \
  -e AILINGBOT_POLICY__LLM__OPENAI_API_KEY={your OpenAI API key} \
  ailingbot poetry run ailingbot chat
```

Using PIP

Installation

```shell
pip install ailingbot
```

Generate Configuration File

```shell
ailingbot init --silence --overwrite
```

This creates a file named settings.toml in the current directory: AilingBot's configuration file. Next, modify the necessary configuration items. Only one item is required to start the bot; find the following section in settings.toml:

```toml
[policy.llm]
_type = "openai"
model_name = "gpt-3.5-turbo"
openai_api_key = ""
temperature = 0
```

Change the value of openai_api_key to your actual OpenAI API key.

Start the Chatbot

Start the chatbot with the following command:

```shell
ailingbot chat
```

Start API Service

Using Docker

```shell
git clone https://github.com/ericzhang-cn/ailingbot.git ailingbot
cd ailingbot
docker build -t ailingbot .
docker run -it --rm \
  -e AILINGBOT_POLICY__LLM__OPENAI_API_KEY={your OpenAI API key} \
  -p 8080:8080 \
  ailingbot poetry run ailingbot api
```

Using PIP

Installation

```shell
pip install ailingbot
```

Generate Configuration File

Same as starting the command line bot.

Start the Service

Start the bot using the following command:

```shell
ailingbot api
```

Now open http://localhost:8080/docs in your browser to view the API documentation. (If the service is not running locally, use http://{your public IP}:8080/docs instead.)

<p align="center"> <img src="https://raw.githubusercontent.com/ericzhang-cn/ailingbot/main/img/swagger.png" alt="Swagger API Documentation"/> </p>

Here is an example request:

```shell
curl -X 'POST' \
  'http://localhost:8080/chat/' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
  "text": "你好"
}'
```

And the response:

```json
{
  "type": "text",
  "conversation_id": "default_conversation",
  "uuid": "afb35218-2978-404a-ab39-72a9db6f303b",
  "ack_uuid": "3f09933c-e577-49a5-8f56-fa328daa136f",
  "receiver_id": "anonymous",
  "scope": "user",
  "meta": {},
  "echo": {},
  "text": "你好!很高兴和你聊天。有什么我可以帮助你的吗?",
  "reason": null,
  "suggestion": null
}
```

Integrating with WeChat Work

Here's a guide on how to quickly integrate the chatbot with WeChat Work.

Using Docker

```shell
git clone https://github.com/ericzhang-cn/ailingbot.git ailingbot
cd ailingbot
docker build -t ailingbot .
docker run -d \
  -e AILINGBOT_POLICY__NAME=conversation \
  -e AILINGBOT_POLICY__HISTORY_SIZE=5 \
  -e AILINGBOT_POLICY__LLM__OPENAI_API_KEY={your OpenAI API key} \
  -e AILINGBOT_CHANNEL__NAME=wechatwork \
  -e AILINGBOT_CHANNEL__CORPID={your WeChat Work corpid} \
  -e AILINGBOT_CHANNEL__CORPSECRET={your WeChat Work corpsecret} \
  -e AILINGBOT_CHANNEL__AGENTID={your WeChat Work agentid} \
  -e AILINGBOT_CHANNEL__TOKEN={your WeChat Work webhook token} \
  -e AILINGBOT_CHANNEL__AES_KEY={your WeChat Work webhook aes_key} \
  -p 8080:8080 \
  ailingbot poetry run ailingbot serve
```

Using PIP

Installation

```shell
pip install ailingbot
```

Generate Configuration File

```shell
ailingbot init --silence --overwrite
```

Modify Configuration File

Open settings.toml, and fill in the following section with your WeChat Work robot's real information:

```toml
[channel]
name = "wechatwork"
corpid = "" # Fill in with real information
corpsecret = "" # Fill in with real information
agentid = 0 # Fill in with real information
token = "" # Fill in with real information
aes_key = "" # Fill in with real information
```

In the llm section, fill in your OpenAI API Key:

```toml
[policy.llm]
_type = "openai"
model_name = "gpt-3.5-turbo"
openai_api_key = "" # Fill in with your real OpenAI API Key here
temperature = 0
```

Start the Service

```shell
ailingbot serve
```

Finally, configure the webhook address in the WeChat Work admin console so that WeChat Work forwards incoming user messages to our webhook. The webhook URL is: http(s)://your_public_IP:8080/webhook/wechatwork/event/

After completing the above configuration, you can find the chatbot in WeChat Work and start chatting:

<p align="center"> <img src="https://raw.githubusercontent.com/ericzhang-cn/ailingbot/main/img/wechatwork-screenshot.png" alt="WeChat Work chatbot" width="300"/> </p>

Integrating with Feishu

Here's a guide on how to quickly integrate the chatbot with Feishu and enable a new conversation policy: uploading documents and performing knowledge-based question answering on them.

Using Docker

```shell
git clone https://github.com/ericzhang-cn/ailingbot.git ailingbot
cd ailingbot
docker build -t ailingbot .
docker run -d \
  -e AILINGBOT_POLICY__NAME=document_qa \
  -e AILINGBOT_POLICY__CHUNK_SIZE=1000 \
  -e AILINGBOT_POLICY__CHUNK_OVERLAP=0 \
  -e AILINGBOT_POLICY__LLM__OPENAI_API_KEY={your OpenAI API key} \
  -e AILINGBOT_POLICY__LLM__MODEL_NAME=gpt-3.5-turbo-16k \
  -e AILINGBOT_CHANNEL__NAME=feishu \
  -e AILINGBOT_CHANNEL__APP_ID={your Feishu app id} \
  -e AILINGBOT_CHANNEL__APP_SECRET={your Feishu app secret} \
  -e AILINGBOT_CHANNEL__VERIFICATION_TOKEN={your Feishu webhook verification token} \
  -p 8080:8080 \
  ailingbot poetry run ailingbot serve
```

Using PIP

Installation

```shell
pip install ailingbot
```

Generate Configuration File

```shell
ailingbot init --silence --overwrite
```

Modify Configuration File

Open settings.toml, and change the channel section to the following, filling in your Feishu robot's real information:

```toml
[channel]
name = "feishu"
app_id = "" # Fill in with real information
app_secret = "" # Fill in with real information
verification_token = "" # Fill in with real information
```

Replace the policy section with the following document QA policy:

```toml
[policy]
name = "document_qa"
chunk_size = 1000
chunk_overlap = 5
```

Finally, it is recommended to use the 16k model when using the document QA policy. Therefore, change policy.llm.model_name to the following configuration:

```toml
[policy.llm]
_type = "openai"
model_name = "gpt-3.5-turbo-16k" # Change to gpt-3.5-turbo-16k
openai_api_key = "" # Fill in with real information
temperature = 0
```

Start the Service

```shell
ailingbot serve
```

Finally, configure the webhook address in the Feishu admin console. The webhook URL for Feishu is: http(s)://your_public_IP:8080/webhook/feishu/event/

After completing the above configuration, you can find the chatbot in Feishu and start chatting:

<p align="center"> <img src="https://raw.githubusercontent.com/ericzhang-cn/ailingbot/main/img/feishu-screenshot.png" alt="Feishu chatbot" width="1000"/> </p>

Integrating with DingTalk

Here's a guide on how to quickly integrate the chatbot with DingTalk.

Using Docker

```shell
git clone https://github.com/ericzhang-cn/ailingbot.git ailingbot
cd ailingbot
docker build -t ailingbot .
docker run -d \
  -e AILINGBOT_POLICY__NAME=conversation \
  -e AILINGBOT_POLICY__HISTORY_SIZE=5 \
  -e AILINGBOT_POLICY__LLM__OPENAI_API_KEY={your OpenAI API key} \
  -e AILINGBOT_CHANNEL__NAME=dingtalk \
  -e AILINGBOT_CHANNEL__APP_KEY={your DingTalk app key} \
  -e AILINGBOT_CHANNEL__APP_SECRET={your DingTalk app secret} \
  -e AILINGBOT_CHANNEL__ROBOT_CODE={your DingTalk robot code} \
  -p 8080:8080 \
  ailingbot poetry run ailingbot serve
```

Using PIP

Installation

```shell
pip install ailingbot
```

Generate Configuration File

```shell
ailingbot init --silence --overwrite
```

Modify Configuration File

Open settings.toml, and change the channel section to the following, filling in your DingTalk robot's real information:

```toml
[channel]
name = "dingtalk"
app_key = "" # Fill in with real information
app_secret = "" # Fill in with real information
robot_code = "" # Fill in with real information
```

Start the Service

```shell
ailingbot serve
```

Finally, configure the webhook address in the DingTalk admin console. The webhook URL for DingTalk is: http(s)://your_public_IP:8080/webhook/dingtalk/event/

After completing the above configuration, you can find the chatbot in DingTalk and start chatting:

<p align="center"> <img src="https://raw.githubusercontent.com/ericzhang-cn/ailingbot/main/img/dingtalk-screenshot.png" alt="DingTalk chatbot" /> </p>

Integrating with Slack

Here's a guide on how to quickly integrate the chatbot with Slack and enable a new conversation policy: uploading documents and performing knowledge-based question answering on them.

Using Docker

```shell
git clone https://github.com/ericzhang-cn/ailingbot.git ailingbot
cd ailingbot
docker build -t ailingbot .
docker run -d \
  -e AILINGBOT_POLICY__NAME=document_qa \
  -e AILINGBOT_POLICY__CHUNK_SIZE=1000 \
  -e AILINGBOT_POLICY__CHUNK_OVERLAP=0 \
  -e AILINGBOT_POLICY__LLM__OPENAI_API_KEY={your OpenAI API key} \
  -e AILINGBOT_POLICY__LLM__MODEL_NAME=gpt-3.5-turbo-16k \
  -e AILINGBOT_CHANNEL__NAME=slack \
  -e AILINGBOT_CHANNEL__VERIFICATION_TOKEN={your Slack App webhook verification token} \
  -e AILINGBOT_CHANNEL__OAUTH_TOKEN={your Slack App oauth token} \
  -p 8080:8080 \
  ailingbot poetry run ailingbot serve
```

Using PIP

Installation

```shell
pip install ailingbot
```

Generate Configuration File

```shell
ailingbot init --silence --overwrite
```

Modify Configuration File

Open settings.toml, and change the channel section to the following, filling in your Slack robot's real information:

```toml
[channel]
name = "slack"
verification_token = "" # Fill in with real information
oauth_token = "" # Fill in with real information
```

Replace the policy section with the following document QA policy:

```toml
[policy]
name = "document_qa"
chunk_size = 1000
chunk_overlap = 5
```

Finally, it is recommended to use the 16k model when using the document QA policy. Therefore, change policy.llm.model_name to the following configuration:

```toml
[policy.llm]
_type = "openai"
model_name = "gpt-3.5-turbo-16k" # Change to gpt-3.5-turbo-16k
openai_api_key = "" # Fill in with real information
temperature = 0
```

Start the Service

```shell
ailingbot serve
```

Finally, configure the webhook address in the Slack admin console. The webhook URL for Slack is: http(s)://your_public_IP:8080/webhook/slack/event/

After completing the above configuration, you can find the chatbot in Slack and start chatting:

<p align="center"> <img src="https://raw.githubusercontent.com/ericzhang-cn/ailingbot/main/img/slack-screenshot.png" alt="Slack chatbot" width="1000"/> </p>

📖User Guide

Main Process

The main processing flow of AilingBot is as follows:

<p align="center"> <img src="https://raw.githubusercontent.com/ericzhang-cn/ailingbot/main/img/flow.png" alt="Main Process" width="500"/> </p>
  1. First, the user sends a message to the IM bot.
  2. If a webhook is configured, the instant messaging tool will forward the request sent to the bot to the webhook service address.
  3. The webhook service processes the original IM message and converts it into AilingBot's internal message format, which is then sent to ChatBot.
  4. ChatBot processes the request and forms a response message based on the configured chat policy. During this process, ChatBot may perform operations such as requesting a large language model, accessing a vector database, or calling an external API to complete the request processing.
  5. ChatBot sends the response message to the IM Agent. The IM Agent is responsible for converting the AilingBot internal response message format into a specific IM format and calling the IM open capability API to send the response message.
  6. The IM bot displays the message to the user, completing the whole flow.
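The steps above can be sketched in Python (the names here are illustrative, not AilingBot's actual classes):

```python
def handle_webhook_event(raw_im_event: dict, policy, agent) -> None:
    """Illustrative end-to-end flow: IM event -> internal message -> policy -> IM reply."""
    # Step 3: convert the IM-specific payload into an internal request message.
    request = {"text": raw_im_event["content"], "sender_id": raw_im_event["from"]}
    # Step 4: the chat policy produces a response; along the way it may call an
    # LLM, query a vector database, or hit external APIs.
    response = policy.respond(request)
    # Step 5: the agent converts the internal response back into the IM's own
    # format and sends it through the IM open-platform API.
    agent.send(to=request["sender_id"], text=response["text"])
```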

Main Concepts

Configuration

Configuration Methods

AilingBot can be configured in two ways: through the settings.toml configuration file, or through environment variables.

💡 Both configuration files and environment variables can be used together. If a configuration item exists in both, the environment variable takes precedence.

Configuration Mapping

Every configuration item maps between a TOML key and an environment variable by a simple rule: prefix the key with AILINGBOT_, uppercase it, and replace each level of nesting (.) with a double underscore (__).

For example, the TOML key policy.llm.model_name corresponds to the environment variable AILINGBOT_POLICY__LLM__MODEL_NAME.
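This naming convention can be expressed as a small helper (a sketch of the mapping rule, not AilingBot's own code):

```python
def toml_key_to_env_var(key: str, prefix: str = "AILINGBOT_") -> str:
    """Convert a dotted TOML key to its environment-variable name.

    Each nesting level (.) becomes a double underscore, and the whole
    key is uppercased and prefixed with AILINGBOT_.
    """
    return prefix + key.replace(".", "__").upper()
```

Note that a key beginning with an underscore, such as `_type`, yields a triple underscore: `policy.llm._type` becomes `AILINGBOT_POLICY__LLM___TYPE`.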

Configuration Items

General

| Configuration Item | Description | TOML | Environment Variable |
|---|---|---|---|
| Language | Language code (reference: http://www.lingoes.net/en/translator/langcode.htm) | lang | AILINGBOT_LANG |
| Timezone | Timezone code (reference: https://en.wikipedia.org/wiki/List_of_tz_database_time_zones) | tz | AILINGBOT_TZ |
| Policy Name | Predefined policy name or complete policy class path | policy.name | AILINGBOT_POLICY__NAME |
| Channel Name | Predefined channel name | channel.name | AILINGBOT_CHANNEL__NAME |
| Webhook Path | Complete class path of a non-predefined channel webhook | channel.webhook_name | AILINGBOT_CHANNEL__WEBHOOK_NAME |
| Agent Path | Complete class path of a non-predefined channel agent | channel.agent_name | AILINGBOT_CHANNEL__AGENT_NAME |
| Uvicorn Config | All uvicorn configurations (reference: uvicorn settings); these are passed through to uvicorn | uvicorn.* | AILINGBOT_UVICORN__* |

Configuration example:

```toml
lang = "zh_CN"
tz = "Asia/Shanghai"

[policy]
name = "conversation"
# More policy configurations

[channel]
name = "wechatwork"
# More channel configurations

[uvicorn]
host = "0.0.0.0"
port = 8080
```

Built-in Policy Configuration

conversation

The conversation policy uses LangChain's Conversation as the policy: it interacts directly with the LLM and keeps the conversation history as context, enabling multi-turn conversations.

| Configuration Item | Description | TOML | Environment Variable |
|---|---|---|---|
| History Size | Number of rounds of conversation history to keep | policy.history_size | AILINGBOT_POLICY__HISTORY_SIZE |

Configuration example:

```toml
# Use the conversation policy and keep 5 rounds of conversation history
[policy]
name = "conversation"
history_size = 5
```

document_qa

The document_qa policy uses LangChain's Stuff as the policy. Users can upload a document and then ask questions based on its content.

| Configuration Item | Description | TOML | Environment Variable |
|---|---|---|---|
| Chunk Size | Corresponds to LangChain Splitter's chunk_size | policy.chunk_size | AILINGBOT_POLICY__CHUNK_SIZE |
| Chunk Overlap | Corresponds to LangChain Splitter's chunk_overlap | policy.chunk_overlap | AILINGBOT_POLICY__CHUNK_OVERLAP |

Configuration example:

```toml
# Use the document_qa policy, with chunk_size and chunk_overlap set to 1000 and 0, respectively
[policy]
name = "document_qa"
chunk_size = 1000
chunk_overlap = 0
```
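To make the two parameters concrete, here is a plain-Python illustration of character-based chunking with overlap. LangChain's splitters are more sophisticated (they prefer splitting on separators such as newlines), but the size/overlap semantics are the same idea:

```python
def split_text(text: str, chunk_size: int, chunk_overlap: int) -> list[str]:
    """Split text into chunks of at most chunk_size characters,
    where each consecutive pair of chunks shares chunk_overlap characters."""
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]


# With chunk_overlap = 0 the chunks simply tile the document; a positive
# overlap repeats the tail of each chunk at the head of the next, so a
# sentence cut at a boundary still appears whole in one of the chunks.
print(split_text("abcdefghij", chunk_size=4, chunk_overlap=2))
```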

Model Configuration

The model configuration is consistent with LangChain. The following is an example.

OpenAI

```toml
[policy.llm]
_type = "openai" # Corresponding environment variable: AILINGBOT_POLICY__LLM___TYPE
model_name = "gpt-3.5-turbo" # Corresponding environment variable: AILINGBOT_POLICY__LLM__MODEL_NAME
openai_api_key = "sk-pd*****************************aAb" # Corresponding environment variable: AILINGBOT_POLICY__LLM__OPENAI_API_KEY
```

Command Line Tools

Initialize Configuration File (init)

Usage

The init command generates a configuration file settings.toml in the current directory. By default, the user will be prompted interactively. You can use the --silence option to generate the configuration file directly using default settings.

```text
Usage: ailingbot init [OPTIONS]

  Initialize the AilingBot environment.

Options:
  --silence    Without asking the user.
  --overwrite  Overwrite existing file if a file with the same name already
               exists.
  --help       Show this message and exit.
```

Options

| Option | Description | Type | Remarks |
|---|---|---|---|
| --silence | Generate the default configuration directly without asking the user. | Flag | |
| --overwrite | Allow overwriting the settings.toml file in the current directory. | Flag | |

View Current Configuration (config)

The config command reads the current environment configuration (including the configuration file and environment variables) and merges them.

Usage

```text
Usage: ailingbot config [OPTIONS]

  Show current configuration information.

Options:
  -k, --config-key TEXT  Configuration key.
  --help                 Show this message and exit.
```

Options

| Option | Description | Type | Remarks |
|---|---|---|---|
| -k, --config-key | Configuration key | String | If not passed, the complete configuration information is displayed. |

Start Command Line Bot (chat)

The chat command starts an interactive command-line bot for testing the current chat policy.

Usage

```text
Usage: ailingbot chat [OPTIONS]

  Start an interactive bot conversation environment.

Options:
  --debug  Enable debug mode.
  --help   Show this message and exit.
```

Options

| Option | Description | Type | Remarks |
|---|---|---|---|
| --debug | Enable debug mode | Flag | Debug mode outputs more information, such as the prompt. |

Start Webhook Service (serve)

The serve command starts a webhook HTTP server for interacting with a specific IM platform.

Usage

```text
Usage: ailingbot serve [OPTIONS]

  Run webhook server to receive events.

Options:
  --log-level [TRACE|DEBUG|INFO|SUCCESS|WARNING|ERROR|CRITICAL]
                                  The minimum severity level from which logged
                                  messages should be sent to(read from
                                  environment variable AILINGBOT_LOG_LEVEL if
                                  is not passed into).  [default: TRACE]
  --log-file TEXT                 STDOUT, STDERR, or file path(read from
                                  environment variable AILINGBOT_LOG_FILE if
                                  is not passed into).  [default: STDERR]
  --help                          Show this message and exit.
```

Options

| Option | Description | Type | Remarks |
|---|---|---|---|
| --log-level | Minimum severity level of messages to log. | String | By default, all log levels are displayed (TRACE). |
| --log-file | Location where logs are written. | String | By default, logs are output to standard error (STDERR). |

Start API Service (api)

The api command starts the API HTTP server.

Usage

```text
Usage: ailingbot api [OPTIONS]

  Run endpoint server.

Options:
  --log-level [TRACE|DEBUG|INFO|SUCCESS|WARNING|ERROR|CRITICAL]
                                  The minimum severity level from which logged
                                  messages should be sent to(read from
                                  environment variable AILINGBOT_LOG_LEVEL if
                                  is not passed into).  [default: TRACE]
  --log-file TEXT                 STDOUT, STDERR, or file path(read from
                                  environment variable AILINGBOT_LOG_FILE if
                                  is not passed into).  [default: STDERR]
  --help                          Show this message and exit.
```

Options

| Option | Description | Type | Remarks |
|---|---|---|---|
| --log-level | Minimum log level; messages at this level and above are displayed. | String | By default, all levels are displayed (TRACE). |
| --log-file | Log output location. | String | By default, logs are printed to standard error (STDERR). |

🔌API

TBD

💻Development Guide

Development Guidelines

TBD

Developing Chat Policy

TBD

Developing Channel

TBD

🤔Frequently Asked Questions

🎯Roadmap