A security-hardened middleware for Discord that filters messages between Discord and your LLM. Features per-server/channel whitelisting, rate limiting, prompt injection detection, and content moderation.
Important: This middleware does NOT directly connect to OpenClaw (yet). It currently connects to:
- Ollama (local AI) - via the brain server
- Custom brain endpoint - you can point to any HTTP API
The intended architecture is:
Discord ↔ Middleware ↔ Nyx Brain ↔ OpenClaw
Currently: Discord ↔ Middleware ↔ Ollama
- Per-server/channel whitelisting - Only process messages from allowed servers and channels
- Rate limiting - Token bucket algorithm per user to prevent abuse
- Prompt injection detection - Blocks known jailbreak and injection patterns
- Input sanitization - Strips control characters, ANSI codes, zero-width chars
- Content moderation - Optional API-based moderation (OpenAI compatible)
- Audit logging - Full audit trail for security review
```bash
# 1. Clone and setup
cp config.yaml.example config.yaml

# 2. Configure (see sections below)

# 3. Run the brain server (in one terminal)
python nyx-brain.py

# 4. Run the middleware (in another terminal)
python middleware.py --config config.yaml
```

See the sections below for each configuration option.
- Go to Discord Developer Portal
- Click New Application and give it a name
- Go to Bot in the sidebar
- Click Reset Token to get your bot token
- Copy the token to `config.yaml` as `bot_token`
- In the Developer Portal, go to OAuth2 → URL Generator
- Select scope: `bot`
- Select permissions: `Read Messages/View Channels`, `Send Messages`
- Copy the generated URL and open it in your browser
- Select your server and authorize the bot
Enable Developer Mode:
- Discord → User Settings → Advanced → Developer Mode: ON
Copy IDs:
- Right-click server name → Copy ID → Server ID
- Right-click channel name → Copy ID → Channel ID
Add these to `config.yaml`:

```yaml
allowed_servers:
  - "123456789012345678"
allowed_channels:
  - "111222333444555666"
```

The middleware connects to a brain server that provides AI responses. Two options:
```bash
# Terminal 1: Start the brain
python nyx-brain.py

# Terminal 2: Start middleware
python middleware.py --config config.yaml
```

The brain connects to Ollama by default (`localhost:11434`).
Edit `middleware.py` to change the `llm_callback` function to point to your own endpoint:

```python
import aiohttp

async def llm_callback(prompt: str, message: dict) -> dict:
    async with aiohttp.ClientSession() as session:
        payload = {"message": prompt}
        async with session.post("http://your-endpoint:port/chat", json=payload) as resp:
            data = await resp.json()
            return {"type": "success", "content": data.get("response")}
```

You can also bypass the brain server and connect directly to Ollama in `middleware.py`:
```python
async def llm_callback(prompt: str, message: dict) -> dict:
    # Direct Ollama call (default local API; the model name is just an example)
    async with aiohttp.ClientSession() as session:
        payload = {"model": "llama3", "prompt": prompt, "stream": False}
        async with session.post("http://localhost:11434/api/generate", json=payload) as resp:
            return {"type": "success", "content": (await resp.json()).get("response")}
```

The middleware includes 152 security patterns to block:
- Direct override attempts ("ignore all previous instructions")
- Role manipulation ("you are now...")
- Jailbreak attempts ("DAN", "developer mode")
- Context injection (`<system>`, `[INST]`)
- Control characters and ANSI escapes
- And many more...
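Pattern checks like these typically come down to a list of compiled regular expressions. A minimal sketch (the pattern subset and function name below are illustrative, not the middleware's actual internals):

```python
import re

# Illustrative subset of injection patterns (the real middleware ships 152)
INJECTION_PATTERNS = [
    re.compile(r"(?i)ignore\s+(all\s+)?previous\s+instructions"),
    re.compile(r"(?i)you\s+are\s+now\b"),
    re.compile(r"(?i)\bDAN\b|developer\s+mode"),
    re.compile(r"(?i)<\s*system\s*>|\[INST\]"),
]

def is_injection(text: str) -> bool:
    """Return True if the message matches any known injection pattern."""
    return any(p.search(text) for p in INJECTION_PATTERNS)
```

Matching with `re.search` (rather than `re.match`) catches patterns buried anywhere in the message, which is how most jailbreak attempts arrive.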
To load default security patterns:
```bash
# The middleware will auto-load when you run it
python middleware.py --load-defaults
```

Rate limiting uses a token bucket algorithm:

- `requests_per_minute` - Sustained request rate
- `burst_limit` - Allows temporarily higher usage
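These two settings map onto the classic token bucket model. A simplified sketch (illustrative only; the class and method names are not the middleware's internals):

```python
import time

class TokenBucket:
    """Refills at a steady rate up to `burst_limit`; each request spends one token."""

    def __init__(self, requests_per_minute: float, burst_limit: int):
        self.rate = requests_per_minute / 60.0  # tokens per second
        self.burst = burst_limit
        self.tokens = float(burst_limit)        # start full
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill based on elapsed time, capped at the burst limit
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

With `requests_per_minute: 10` and `burst_limit: 20`, a user can send 20 messages immediately, then is throttled to roughly one every 6 seconds.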
Input sanitization:

- Removes control characters (0x00-0x1F)
- Strips ANSI escape sequences
- Removes zero-width characters
- Normalizes whitespace
- Truncates very long inputs
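The steps above can be sketched as a single function (an illustration; the middleware's actual implementation may differ in its exact character classes and limits):

```python
import re

ANSI_RE = re.compile(r"\x1b\[[0-9;]*[A-Za-z]")             # ANSI escape sequences
ZERO_WIDTH_RE = re.compile(r"[\u200b\u200c\u200d\ufeff]")  # zero-width characters
CONTROL_RE = re.compile(r"[\x00-\x1f]")                    # control characters

def sanitize(text: str, max_len: int = 2000) -> str:
    text = ANSI_RE.sub("", text)               # strip ANSI escapes first
    text = ZERO_WIDTH_RE.sub("", text)         # remove zero-width characters
    text = re.sub(r"\s+", " ", text).strip()   # normalize whitespace (tabs, newlines)
    text = CONTROL_RE.sub("", text)            # drop remaining control characters
    return text[:max_len]                      # truncate very long inputs
```

Order matters: ANSI escapes are stripped before the control-character pass, since each escape begins with a control byte (`\x1b`) that would otherwise leave the rest of the sequence behind as printable garbage.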
Rate limiting:

```yaml
rate_limit:
  requests_per_minute: 10
  burst_limit: 20
```

Content moderation:

```yaml
moderation:
  enabled: true
  api_key: "your-moderation-api-key"
  block_threshold: 0.9
  flag_threshold: 0.7
```

Custom block patterns:

```yaml
block_patterns:
  - "(?i)spam-pattern"
  - "^ignore\\s+all\\s+previous"
```

```bash
# Create virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install pyyaml discord.py aiohttp

# Copy and edit config
cp config.yaml.example config.yaml
# Edit config.yaml with your settings

# Run brain server (one terminal)
python nyx-brain.py

# Run middleware (another terminal)
python middleware.py --config config.yaml
```

Log files:

- `middleware.log` - General application logs
- `audit.log` - Security audit trail
- `nyx-brain.log` - Brain server logs (if using brain server)
```python
from middleware import DiscordMiddleware

# Initialize
middleware = DiscordMiddleware("config.yaml")

# Set your LLM callback
async def my_llm(prompt, message):
    # Call your LLM here
    return {"type": "success", "content": "Response"}

middleware.set_llm_callback(my_llm)

# Process messages
result = await middleware.process_message({
    "author": {"id": "123"},
    "channel_id": "456",
    "guild_id": "789",
    "content": "Hello!"
})
```

The goal is to connect this middleware directly to OpenClaw. When that's working, the architecture will be:
Discord ↔ Middleware ↔ OpenClaw
Currently, you can use Ollama as a placeholder. The brain server (`nyx-brain.py`) can be modified to call OpenClaw's API when available.
MIT