llmcord.py

Talk to LLMs with your friends!

llmcord.py lets you and your friends chat with LLMs directly in your Discord server. It works with practically any LLM, remote or locally hosted.

Features

Reply-based chat system

Just @ the bot to start a conversation and reply to continue. Build conversations with reply chains!

You can do things like:

  • Build conversations together with your friends
  • "Rewind" a conversation simply by replying to an older message
  • @ the bot while replying to any message in your server to ask a question about it

Additionally:

  • Back-to-back messages from the same user are automatically chained together. Just reply to the latest one and the bot will see all of them.
  • You can seamlessly move any conversation into a thread. Just create a thread from any message and @ the bot inside to continue.
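
Conceptually, the bot rebuilds a conversation by walking the reply chain backwards, following each message's reference until the chain ends or the message limit is reached. A minimal sketch with discord.py (function and variable names are illustrative, not llmcord.py's actual internals):

    import discord

    async def build_reply_chain(message: discord.Message, max_messages: int = 20):
        """Walk a reply chain backwards and return it oldest-first."""
        chain = []
        curr = message
        while curr is not None and len(chain) < max_messages:
            role = "assistant" if curr.author.bot else "user"
            chain.append({"role": role, "content": curr.content})
            ref = curr.reference
            if ref and ref.message_id:
                try:
                    # Fetch the replied-to message (may hit the Discord API)
                    curr = await curr.channel.fetch_message(ref.message_id)
                except discord.NotFound:
                    break  # the replied-to message was deleted
            else:
                curr = None  # start of the conversation
        return chain[::-1]  # LLM APIs expect oldest message first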

Choose any LLM

Supports remote models from OpenAI API, Mistral API, Anthropic API, and many more thanks to LiteLLM.

Or run a local model with ollama, oobabooga, Jan, LM Studio, or any other OpenAI-compatible API server.
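
Because LiteLLM puts every provider behind one interface, switching models is just a matter of changing the model string. A minimal sketch of a streamed chat completion with the litellm package (parameter values are illustrative):

    import litellm

    async def ask_llm():
        response = await litellm.acompletion(
            model="mistral/mistral-medium",  # or "gpt-4-turbo", "ollama/llama2", ...
            messages=[{"role": "user", "content": "Hello!"}],
            max_tokens=1024,
            temperature=1.0,
            top_p=1.0,
            stream=True,  # yields chunks as they're generated
        )
        async for chunk in response:
            # Streaming chunks follow the OpenAI delta format
            print(chunk.choices[0].delta.content or "", end="")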

And more:

  • Supports image attachments when using a vision model (like gpt-4-turbo, claude-3, llava, etc.)
  • Customizable system prompt
  • DM for private access (no @ required)
  • User identity aware (OpenAI API only)
  • Streamed responses (turns green when complete, automatically splits into separate messages when too long, throttled to prevent Discord rate limiting)
  • Displays helpful user warnings when appropriate (like "Only using last 20 messages" when the customizable message limit is exceeded)
  • Caches message data in a size-managed (no memory leaks) and per-message mutex-protected (no race conditions) global dictionary to maximize efficiency and minimize Discord API calls (see the sketch after this list)
  • Fully asynchronous
  • 1 Python file, ~200 lines of code
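
As a sketch of the caching idea above (illustrative names and size cap, not llmcord.py's actual code): each message ID maps to a cached entry plus its own asyncio lock, and the dictionary is trimmed whenever it grows past a cap:

    import asyncio
    from collections import OrderedDict

    MAX_CACHED_MESSAGES = 500  # illustrative size cap

    msg_cache = OrderedDict()  # message_id -> cached message data
    msg_locks = {}             # message_id -> asyncio.Lock (per-message mutex)

    async def get_cached(msg_id, fetch):
        lock = msg_locks.setdefault(msg_id, asyncio.Lock())
        async with lock:  # only one task may build this entry (no race conditions)
            if msg_id not in msg_cache:
                msg_cache[msg_id] = await fetch(msg_id)  # one Discord API call, ever
                while len(msg_cache) > MAX_CACHED_MESSAGES:  # size-managed (no leaks)
                    old_id, _ = msg_cache.popitem(last=False)  # evict oldest entry
                    msg_locks.pop(old_id, None)
            return msg_cache[msg_id]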

Instructions

Before you start, install Python and clone this git repo.

  1. Install Python requirements: pip install -r requirements.txt

  2. Create a copy of ".env.example" named ".env" and set it up (see below)

  3. Run the bot: python llmcord.py (the invite URL will print to the console)
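
For example, a minimal ".env" for a Mistral-hosted model might look like this (the values are placeholders; see the settings reference below):

    DISCORD_BOT_TOKEN=your-bot-token-here
    DISCORD_CLIENT_ID=your-client-id-here
    LLM=mistral/mistral-medium
    MISTRAL_API_KEY=your-mistral-api-key
    CUSTOM_SYSTEM_PROMPT=You are a friendly Discord chatbot.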

Settings (in ".env"):

DISCORD_BOT_TOKEN
  Create a new Discord bot at discord.com/developers/applications and generate a token under the Bot tab. Also enable MESSAGE CONTENT INTENT.

DISCORD_CLIENT_ID
  Found under the OAuth2 tab of the Discord bot you just made.

LLM
  For LiteLLM-supported providers (OpenAI API, Mistral API, ollama, etc.), follow the LiteLLM instructions for its model name formatting.
  For local models (running on an OpenAI-compatible API server), set to local/openai/model. If using a vision model, set to local/openai/vision-model. Some setups will instead require local/openai/<MODEL_NAME>, where <MODEL_NAME> is the exact name of the model you're using.

LLM_MAX_TOKENS
  The maximum number of tokens in the LLM's chat completion. (Default: 1024)

LLM_TEMPERATURE
  LLM sampling temperature. Higher values make the LLM's output more random. (Default: 1.0)

LLM_TOP_P
  LLM nucleus sampling value. An alternative to sampling temperature. Higher values make the LLM's output more diverse. (Default: 1.0)

CUSTOM_SYSTEM_PROMPT
  Write practically anything you want to customize the bot's behavior!

CUSTOM_DISCORD_STATUS
  Set a custom message that displays on the bot's Discord profile. Max 128 characters.

ALLOWED_CHANNEL_IDS
  Discord channel IDs where the bot can send messages, separated by commas. Leave blank to allow all channels.

ALLOWED_ROLE_IDS
  Discord role IDs that can use the bot, separated by commas. Leave blank to allow everyone. Specifying at least one role also disables DMs.

MAX_IMAGES
  The maximum number of image attachments allowed in a single message. Only applicable when using a vision model. (Default: 5)

MAX_MESSAGES
  The maximum number of messages allowed in a reply chain. (Default: 20)

LOCAL_SERVER_URL
  The URL of your local API server. Only applicable when using a local model. (Default: http://localhost:5000/v1)

LOCAL_API_KEY
  The API key to use with your local API server. Only applicable when using a local model. Usually safe to leave blank.

OOBABOOGA_CHARACTER
  The oobabooga character you want to use. Only applicable when using oobabooga. Leave blank to use CUSTOM_SYSTEM_PROMPT instead.

OPENAI_API_KEY
  Only required if you choose a model from OpenAI API. Generate an OpenAI API key at platform.openai.com/account/api-keys. You must also add a payment method to your OpenAI account at platform.openai.com/account/billing/payment-methods.

MISTRAL_API_KEY
  Only required if you choose a model from Mistral API. Generate a Mistral API key at console.mistral.ai/api-keys. You must also add a payment method to your Mistral account at console.mistral.ai/billing.

OPENAI_API_KEY and MISTRAL_API_KEY are provided as examples. Add more as needed for other LiteLLM-supported providers.

Notes

  • Only models from OpenAI API are user identity aware, because only OpenAI supports the message "name" property (see the example after these notes). Hopefully other providers support this in the future.

  • PRs are welcome :)
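
For reference, the "name" property from the first note attaches a username to each user message in the OpenAI chat format (a sketch; the usernames are illustrative):

    messages = [
        {"role": "system", "content": "You are a helpful Discord chatbot."},
        {"role": "user", "name": "alice", "content": "What's the capital of France?"},
        {"role": "user", "name": "bob", "content": "And of Germany?"},
        # "name" lets the model tell different Discord users apart
    ]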

Star History

[Star History Chart]