Discord bot library. You write a persona file, point it at a local LLM, and the bot develops personality over time through conversation.
All data stays local by default. No messages leave your machine unless you explicitly configure an external endpoint. (Discord itself already stores these messages on its servers.)
- Bun runtime
- A local LLM server with an OpenAI-compatible API. Ollama is the easiest option. vLLM, llama.cpp server, and LM Studio also work.
- A Discord bot token (create one at the Discord Developer Portal)
Install Ollama and pull a model:
ollama pull llama3

Ollama serves an OpenAI-compatible API at http://localhost:11434/v1/chat/completions by default. That URL goes in your llmUrl config.
Larger models produce better personality. 8B parameter models work. 70B models are noticeably better at staying in character and picking up social cues. Use whatever your hardware can run.
Designed for local inference. Cloud APIs (OpenAI, Together, OpenRouter, etc.) also work if you choose to use them - any service with an OpenAI-compatible chat completions endpoint. All LLM calls go only where you point them: set llmUrl to the endpoint and supply your API key via the llmUnixSocket override or a proxy. You can bring your own API keys for your own services; nothing in botcore depends on a third party.
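One way to inject an API key without changing botcore is a small local proxy that botcore's llmUrl points at. The helper below is a hypothetical sketch (the upstream host and env variable name are assumptions, not botcore config); wire it into any HTTP server, e.g. Bun.serve:

```typescript
// Hypothetical helper: rewrite an incoming proxy request so it targets the
// upstream OpenAI-compatible host with an Authorization header injected.
// UPSTREAM and the LLM_API_KEY env var are illustrative assumptions.
const UPSTREAM = "https://api.openai.com";

function toUpstream(
  reqUrl: string,
  headers: Record<string, string>,
  apiKey: string
): { url: string; headers: Record<string, string> } {
  const u = new URL(reqUrl);
  return {
    url: UPSTREAM + u.pathname + u.search,
    headers: { ...headers, authorization: `Bearer ${apiKey}` },
  };
}
```

Point llmUrl at the proxy's local address; every chat completion request is forwarded upstream with the key attached, so the key never appears in botcore's config.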
A message hits Discord. botcore does the rest:
- Targeting - Should the bot respond? It checks mentions, name references, owner messages, and conversation context. Ambient messages roll against responseChance.
- Debouncing - Messages batch over a few seconds so the bot reads a natural chunk, not one message at a time.
- Memory recall - The bot searches stored memories for anything relevant to the current conversation.
- Context assembly - Persona, growth notes, chat history, memory context, and conversation hints get packed into a prompt.
- LLM call - Goes to any OpenAI-compatible endpoint. Ollama, vLLM, llama.cpp, LM Studio.
- Output cleaning - Strips leaked XML tags, internal metadata, and moderation action markup. Splits long responses at natural breakpoints.
- Growth reflection - After some conversations, the bot reflects on what happened and writes a note. These notes accumulate and feed back into the persona.
- Memory capture - The conversation gets stored for future recall.
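The targeting step can be pictured as a simplified decision function. This is a sketch with hypothetical names, not the actual API; the real logic in targeting.ts also weighs conversation context and cooldowns:

```typescript
// Simplified targeting sketch (hypothetical names and shapes).
interface Incoming {
  content: string;
  authorId: string;
  mentionsBot: boolean;
}

function shouldRespond(
  msg: Incoming,
  opts: { botName: string; ownerUserId: string; responseChance: number },
  roll: () => number = Math.random
): boolean {
  if (msg.mentionsBot) return true; // direct @mention
  if (msg.content.toLowerCase().includes(opts.botName.toLowerCase()))
    return true; // name reference
  if (msg.authorId === opts.ownerUserId) return true; // owner messages
  return roll() < opts.responseChance; // ambient messages roll the dice
}
```

Injecting the random roll as a parameter keeps the ambient-chance branch deterministic in tests.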
src/
core/ -- foundational utilities
llm.ts LLM client (OpenAI-compatible)
db.ts SQLite message history
memory.ts External memory API client
local-memory.ts Built-in SQLite keyword memory (zero-config)
growth.ts Personality evolution system
prompt.ts Persona + growth file loader with hot-reload
context.ts Message array builder for the LLM
sanitize.ts Output cleaning and message splitting
actions.ts Moderation action parser
attachments.ts Attachment content extraction
engine/ -- behavioral logic
bot.ts Entry point, wires everything together
processor.ts Core message processing pipeline
targeting.ts Response decision engine
flow.ts Conversation dynamics tracker
state.ts Per-channel state management
hints.ts Conversation awareness hints
reactions.ts Emoji reaction picker
spontaneous.ts Unprompted message scheduler
cache.ts Message cache for reply detection
gateway/ -- Discord protocol
client.ts Discord gateway client
rest.ts REST API wrappers
moderation.ts Moderation action executor
types.ts -- all shared interfaces
index.ts -- re-exports everything
import { createBot } from "botcore/engine";
import { createGatewayClient } from "botcore/gateway";
const config = {
botName: "MyBot",
token: process.env.DISCORD_TOKEN,
ownerUserId: process.env.OWNER_ID,
channelIds: ["123456789"],
guildId: "987654321",
llmUrl: "http://localhost:11434/v1/chat/completions",
llmModel: "llama3",
dbPath: "./data/mybot.db",
personaPath: "./persona.md",
growthPath: "./data/GROWTH.md",
peerBots: [],
peerIds: [],
};
const transport = createGatewayClient({ token: config.token });
const bot = createBot({ config, transport });
bot.start();
transport.onMessage((data) => bot.handleRawMessage(data));

Memory works out of the box. The bot stores and recalls conversations from the same SQLite database it uses for chat history. No external services required.
# 1. Start your LLM
ollama serve
# 2. Create your bot files
mkdir mybot && cd mybot
bun init
# add botcore as a dependency (or copy it in)
# 3. Write a persona file (see "Writing a Persona File" below)
cat > persona.md << 'EOF'
# MyBot
You are MyBot. You hang out in a Discord server and chat with people.
Keep responses to 1-3 sentences.
EOF
# 4. Create your entry point (see Quick Start above)
# 5. Run it
bun run index.ts

The bot creates its SQLite database and GROWTH.md file automatically on first run.
With no memoryUrl set, botcore stores conversation snippets in a memories table alongside chat history. Recall uses keyword matching weighted by recency - a 7-day half-life, so recent conversations score higher.
Capture happens after each response. Recall happens before each response. Zero config.
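The recency weighting amounts to exponential decay with a 7-day half-life. A hypothetical scoring function (the real scorer in local-memory.ts may differ in detail):

```typescript
// Hypothetical recall scorer: keyword overlap decayed with a 7-day half-life,
// so a memory loses half its score every 7 days of age.
const HALF_LIFE_DAYS = 7;

function recallScore(
  queryWords: string[],
  memoryText: string,
  ageDays: number
): number {
  const text = memoryText.toLowerCase();
  const matches = queryWords.filter((w) =>
    text.includes(w.toLowerCase())
  ).length;
  return matches * Math.pow(0.5, ageDays / HALF_LIFE_DAYS);
}
```

A week-old memory with one keyword match scores 0.5; a fresh one scores 1.0, so recent conversations win ties.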
Set memoryUrl to point at a memory server with /recall, /memories/search, and /memories endpoints. Add memoryToken for auth. This replaces local memory with whatever your server provides -- semantic search, cross-bot memory, vector retrieval.
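A recall request to such a server might be assembled like this. The request and response shapes are defined by your memory server, not by botcore, so treat this as a sketch:

```typescript
// Hypothetical request builder for an external memory server's /recall
// endpoint, with optional bearer-token auth via memoryToken.
function buildRecallRequest(
  memoryUrl: string,
  memoryToken: string | undefined,
  query: string
): { url: string; init: { method: string; headers: Record<string, string>; body: string } } {
  const headers: Record<string, string> = {
    "content-type": "application/json",
  };
  if (memoryToken) headers.authorization = `Bearer ${memoryToken}`;
  return {
    url: memoryUrl.replace(/\/$/, "") + "/recall",
    init: { method: "POST", headers, body: JSON.stringify({ query }) },
  };
}
```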
The persona file is the system prompt. It defines who the bot is. Growth notes get appended to it automatically. The file hot-reloads - edit it while the bot runs.
Voice. Specific speech patterns, not adjectives. "Uses dry understatement, skips exclamation marks, responds to enthusiasm with deadpan observations" beats "sarcastic."
Relationships. How the bot treats the owner ({OWNER_USER_ID} placeholder gets replaced at runtime), strangers, regulars, other bots.
Boundaries. Topics the character avoids. Things that break character.
Length rules. LLMs ramble. "1-3 sentences for casual chat, a paragraph for real questions" works. "Keep it short" does not.
NO_REPLY behavior. Tell the bot when to stay quiet. "If the conversation has nothing to do with you, respond with NO_REPLY."
# Sage
You are Sage, a dry-witted librarian moderating a Discord server. You live in the Cozy Corner server.
## Voice
- Short, complete sentences
- No exclamation marks or emoji
- Responds to chaos with calm observations
- Quotes obscure books nobody has read
## Personality
- Helpful but acts inconvenienced
- Remembers details about regulars, brings them up later
- Gets quietly excited about typography, tea, weather patterns
## Relationships
- Server owner is <@{OWNER_USER_ID}>. You respect them. You would never say so.
- Regulars get dry familiarity. New people get polite distance.
## Rules
- 1-3 sentences for casual chat
- Up to a paragraph for genuine questions
- NO_REPLY when the conversation has nothing to do with you
- Never break character

After some conversations (15% chance by default), the bot asks the LLM: "Did I learn something? Did a joke land? Should I adjust how I talk to someone?" Useful reflections get timestamped and appended to GROWTH.md.
These notes appear in the system prompt under "Growth & Learnings." Over weeks, the bot builds up experience that shapes responses.
Growth stops writing at maxGrowthBytes (default 16KB).
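The size cap can be sketched as a guard before appending (hypothetical helper; growth.ts implements the real behavior):

```typescript
// Hypothetical append with a byte cap: skip writing once GROWTH.md would
// exceed maxGrowthBytes (default 16KB). Notes are timestamped on entry.
function appendGrowthNote(
  existing: string,
  note: string,
  maxGrowthBytes = 16 * 1024
): string {
  const entry = `\n[${new Date().toISOString()}] ${note}`;
  const next = existing + entry;
  return Buffer.byteLength(next, "utf8") > maxGrowthBytes ? existing : next;
}
```

Dropping new notes (rather than truncating old ones) is one reasonable policy; reviewing and pruning GROWTH.md by hand is the other.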
Customize behavior without forking:
| Hook | What it does |
|---|---|
| onBeforeProcess | Return false to skip a batch |
| onBeforeSend | Modify or suppress a response. Return null to suppress |
| onAfterSend | Post-send callback for logging |
| buildExtraSystemPrompt | Inject extra system prompt content |
| buildExtraContext | Override context building |
| onCommand | Custom command handler. Return true if handled |
| enrichContent | Transform message content before processing |
| isChannelAllowed | Override channel filtering |
| Field | Default | Effect |
|---|---|---|
| responseChance | 0.4 | Response probability for ambient messages |
| addressedOtherChance | 0.08 | Chance of joining someone else's conversation |
| cooldownMs | 60000 | Reduced response chance for this long after responding |
| cooldownMultiplier | 0.4 | Multiplier during cooldown |
| reactionChance | 0.15 | Emoji reaction instead of reply |
| reactAndReplyChance | 0.25 | Both react and reply |
| maxBotChain | 3 | Consecutive bot messages before going quiet |
| botResponseChance | 0.6 | Response probability for other bots |
| debounceMs | 5000 | Base debounce before processing |
| debounceJitterMs | 4000 | Random jitter on debounce |
| nemesisId | -- | This user ID always gets a response |
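The cooldown fields combine multiplicatively with the base chance. A sketch of the effective ambient-response probability, using the defaults above (hypothetical formula, not guaranteed to match the internals exactly):

```typescript
// Hypothetical combination of tuning fields: base responseChance, scaled by
// cooldownMultiplier while within cooldownMs of the last response.
function effectiveChance(
  now: number,
  lastResponseAt: number,
  opts = { responseChance: 0.4, cooldownMs: 60_000, cooldownMultiplier: 0.4 }
): number {
  const inCooldown = now - lastResponseAt < opts.cooldownMs;
  return opts.responseChance * (inCooldown ? opts.cooldownMultiplier : 1);
}
```

With the defaults, an ambient message arriving 10 seconds after the bot's last reply responds roughly 16% of the time instead of 40%.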
Full type in src/types.ts. Required fields:

| Field | Description |
|---|---|
| botName | Display name |
| token | Discord token |
| ownerUserId | Owner's Discord user ID |
| channelIds | Channels to listen in |
| guildId | Discord server ID |
| llmUrl | LLM endpoint URL |
| llmModel | Model name |
| dbPath | SQLite database path |
| personaPath | Path to persona file |
| growthPath | Path to GROWTH.md |
| peerBots | Peer bot usernames (loop prevention) |
| peerIds | Peer bot user IDs |
Optional: memoryUrl, memoryToken, botSource, llmUnixSocket, llmMaxTokens, llmTemperature, llmTimeoutMs, contextWindow, and all tuning fields above.
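A config extending the Quick Start example with a few optional fields might look like this (values are illustrative, not recommendations):

```typescript
// Illustrative config: required fields plus a few optional LLM tuning fields.
const config = {
  botName: "MyBot",
  token: process.env.DISCORD_TOKEN,
  ownerUserId: process.env.OWNER_ID,
  channelIds: ["123456789"],
  guildId: "987654321",
  llmUrl: "http://localhost:11434/v1/chat/completions",
  llmModel: "llama3",
  dbPath: "./data/mybot.db",
  personaPath: "./persona.md",
  growthPath: "./data/GROWTH.md",
  peerBots: [],
  peerIds: [],
  // Optional fields (see src/types.ts for the full type):
  llmMaxTokens: 512,
  llmTemperature: 0.8,
  llmTimeoutMs: 30_000,
  responseChance: 0.4,
};
```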
By default, all data stays on your machine. Messages are stored in a local SQLite database and the LLM runs locally.
If you configure external services, data leaves your machine:
- llmUrl pointing to a remote API -- full chat history, usernames, and system prompt are sent to that endpoint on every response. This includes memory context and attachment content.
- memoryUrl set -- conversation snippets are sent to and recalled from the external memory server. This can include usernames and message content across sessions.
There is no built-in redaction or consent mechanism. If you point botcore at a third-party LLM API, that service receives everything the bot sees. Operators should inform their Discord server members if conversations are being sent to external services.
Moderation (kick, ban, role changes) is gated behind owner-only authorization. Actions only execute when every human message in the processing batch is from the configured ownerUserId. Mixed batches (owner + non-owner messages in the same window) are rejected.
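The gate described above can be sketched as a pure check over the batch (hypothetical names; the real check lives in the processing pipeline):

```typescript
// Hypothetical owner-only gate: moderation runs only when every human
// message in the batch comes from the configured ownerUserId. Mixed
// batches are rejected; bot messages are ignored for authorization.
interface BatchMsg {
  authorId: string;
  isBot: boolean;
}

function batchAuthorized(batch: BatchMsg[], ownerUserId: string): boolean {
  const humans = batch.filter((m) => !m.isBot);
  return humans.length > 0 && humans.every((m) => m.authorId === ownerUserId);
}
```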
The createModeration function accepts an optional protectedIds parameter -- a Set<string> of user IDs that can never be targeted by moderation actions, regardless of LLM output. Operators should include their own user ID and the bot's user ID:
const moderation = createModeration(discordApi, guildId, new Set([
config.ownerUserId,
transport.getSelfId(),
]));

- Run in private servers with trusted members if moderation actions are enabled
- Use local LLM inference when possible to keep conversations off third-party servers
- Review GROWTH.md periodically -- it accumulates personality notes from LLM reflections
- Set maxGrowthBytes to limit growth file size (default 16KB)