
<h1 align="center"> llmcord </h1> <h3 align="center"><i> Talk to LLMs with your friends! </i></h3> <p align="center"> <img src="https://github.com/jakobdylanc/llmcord/assets/38699060/789d49fe-ef5c-470e-b60e-48ac03057443" alt=""> </p>

llmcord turns Discord into a collaborative LLM frontend. It works with practically any LLM, remote or locally hosted.

## Features

### Reply-based chat system

Just @ the bot to start a conversation and reply to continue. Build conversations with reply chains!

Because the conversation context is built from the reply chain, you can do things like continue someone else's conversation, branch a conversation in a new direction, or rewind by replying to an earlier message.

### Choose any LLM

llmcord supports remote models from providers like OpenAI and OpenRouter, local models served with Ollama, or any other OpenAI-compatible API server.

And more:

- Image attachment support (when using a vision model)
- Text file attachment support
- A customizable system prompt
- Streamed responses

## Instructions

  1. Clone the repo and enter it:

    git clone https://github.com/jakobdylanc/llmcord
    cd llmcord
    
  2. Create a copy of "config-example.yaml" named "config.yaml" and set it up:

Discord settings:

| Setting | Description |
| --- | --- |
| `bot_token` | Create a new Discord bot at [discord.com/developers/applications](https://discord.com/developers/applications) and generate a token under the "Bot" tab. Also enable "MESSAGE CONTENT INTENT". |
| `client_id` | Found under the "OAuth2" tab of the Discord bot you just made. |
| `status_message` | Set a custom message that displays on the bot's Discord profile. Max 128 characters. |
| `allow_dms` | Set to `false` to disable direct message access.<br />(Default: `true`) |
| `allowed_channel_ids` | A list of Discord channel IDs where the bot can be used. Also accepts category IDs. Leave empty to allow all channels. Does not affect DMs. |
| `allowed_role_ids` | A list of Discord role IDs that can use the bot. Leave empty to allow everyone. DMs are force-disabled when at least one role is specified. |
| `blocked_user_ids` | A list of Discord user IDs that are blocked from using the bot. |
| `max_text` | The maximum amount of text allowed in a single message, including text from file attachments.<br />(Default: `100,000`) |
| `max_images` | The maximum number of image attachments allowed in a single message. Only applicable when using a vision model.<br />(Default: `5`) |
| `max_messages` | The maximum number of messages allowed in a reply chain. When exceeded, the oldest messages in the reply chain are dropped.<br />(Default: `25`) |
| `use_plain_responses` | When set to `true` the bot will use plaintext responses instead of embeds. Plaintext responses have a shorter character limit so the bot's messages may split more often. Also disables streamed responses and warning messages.<br />(Default: `false`) |
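
For reference, once filled in, the Discord portion of your config.yaml might look something like this. This is only an illustrative sketch with placeholder values; the authoritative layout is whatever config-example.yaml ships with.

    # Discord settings -- illustrative placeholder values only.
    # Copy the real structure from config-example.yaml.
    bot_token: YOUR_DISCORD_BOT_TOKEN
    client_id: "000000000000000000"
    status_message: Ask me anything!
    allow_dms: true
    allowed_channel_ids: []   # empty = usable in all channels
    allowed_role_ids: []      # empty = usable by everyone
    blocked_user_ids: []
    max_text: 100000
    max_images: 5
    max_messages: 25
    use_plain_responses: false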

LLM settings:

| Setting | Description |
| --- | --- |
| `providers` | Add the LLM providers you want to use, each with a `base_url` and optional `api_key` entry. Popular providers (`openai`, `ollama`, etc.) are already included. Only OpenAI-compatible APIs are supported. |
| `model` | Set to `<provider name>/<model name>`, e.g.:<br /><br />- `openai/gpt-4o`<br />- `ollama/llama3.3`<br />- `openrouter/anthropic/claude-3.5-sonnet` |
| `extra_api_parameters` | Extra API parameters for your LLM. Add more entries as needed. Refer to your provider's documentation for supported API parameters.<br />(Default: `max_tokens=4096, temperature=1.0`) |
| `system_prompt` | Write anything you want to customize the bot's behavior! Leave blank for no system prompt. |
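
Likewise, the LLM portion might look like the sketch below. The provider URLs and API key are examples (Ollama's `base_url` assumes its default local endpoint), and the exact keys come from config-example.yaml.

    # LLM settings -- illustrative. Each provider entry carries a
    # base_url and an optional api_key, as described above.
    providers:
      openai:
        base_url: https://api.openai.com/v1
        api_key: sk-...
      ollama:
        base_url: http://localhost:11434/v1

    model: openai/gpt-4o

    extra_api_parameters:
      max_tokens: 4096
      temperature: 1.0

    system_prompt: You are a helpful Discord bot. Keep answers concise.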

  3. Run the bot:

    No Docker:

    python -m pip install -U -r requirements.txt
    python llmcord.py
    

    With Docker:

    docker compose up
    

## Notes

## Star History

<a href="https://star-history.com/#jakobdylanc/llmcord&Date"> <picture> <source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=jakobdylanc/llmcord&type=Date&theme=dark" /> <source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=jakobdylanc/llmcord&type=Date" /> <img alt="Star History Chart" src="https://api.star-history.com/svg?repos=jakobdylanc/llmcord&type=Date" /> </picture> </a>