An AI research companion for World of Warcraft – context-aware, multi-provider, and journal-based.

AzerothLM bridges the game's sandboxed Lua environment with modern Large Language Models. It reads your character's actual state – gear, professions, quests, reputations, zone, gold – and makes that context available to an AI that answers specific, actionable questions about your character, not generic ones.
## Use Cases

### Pre-raid gear planning

You're a level 60 Hunter sitting in Stormwind with a mix of dungeon blues and quest greens. AzerothLM reads every equipped slot and tells you exactly which pieces to upgrade first, which quest rewards you're about to miss, and which dungeon bosses to prioritize – without you typing a single item name.

### Farming route optimization

Mining 300, Skinning 306, currently questing in Hellfire Peninsula with 14 active quests. Ask AzerothLM for the best gold-per-hour route and it answers with specifics – which nodes, which mobs, which respawn paths – tailored to your professions and current location.

### Reputation & attunement planning

Hostile with The Aldor, Neutral with The Sha'tar, and trying to figure out what to do next. AzerothLM traces your attunement chain, maps out the rep grind, and identifies which of your active quests feed into it – all from your actual standings and quest log.
## How It Works

WoW addons run in a sandboxed Lua environment with no internet access. AzerothLM bridges this gap using a file-based relay (a code sketch follows the list):

- Context Collection – The addon scans your equipped gear, profession levels, active quests, reputation standings, and talent points into `SavedVariables`.
- Research Input – You create topics and ask questions through the relay CLI or MCP tools.
- AI Processing – The relay sends your question plus full character context to an LLM via LiteLLM, supporting multiple providers.
- Signal File – The relay writes the AI response back to `AzerothLM_Signal.lua`.
- In-Game Sync – Type `/reload` in-game to load the updated journal.
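The whole loop can be pictured in a few lines of Python. This is a minimal sketch, not the actual relay code: `read_character_context`, `ask`, and `write_signal` are illustrative names, and the real relay does more (topics, history, caching). Only `litellm.completion` is LiteLLM's real entry point.

```python
# Minimal sketch of the relay loop above (illustrative, not the relay's internals).
import os
from litellm import completion

SAVED_VARS = os.environ["WOW_SAVED_VARIABLES_PATH"]  # written by the addon
SIGNAL_FILE = os.path.join(os.environ["WOW_ADDON_PATH"], "AzerothLM_Signal.lua")

def read_character_context() -> str:
    # The addon serializes context as a Lua table in SavedVariables;
    # this sketch passes the raw text straight to the model.
    with open(SAVED_VARS, encoding="utf-8") as f:
        return f.read()

def ask(question: str, model: str = "gemini/gemini-2.5-flash") -> str:
    response = completion(
        model=model,  # any LiteLLM provider string works here
        messages=[
            {"role": "system", "content": "Character context:\n" + read_character_context()},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

def write_signal(answer: str) -> None:
    # WoW only loads Lua, so the answer is written as a Lua string
    # that the addon picks up on the next /reload.
    escaped = answer.replace("\\", "\\\\").replace('"', '\\"').replace("\n", "\\n")
    with open(SIGNAL_FILE, "w", encoding="utf-8") as f:
        f.write(f'AzerothLM_Signal = {{ response = "{escaped}" }}\n')
```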
## Features

- 🤖 Context-Aware AI – Reads your equipped gear (all 19 slots), profession levels, active quest log, reputation standings, talent distribution, zone, level, class, and gold to give character-specific answers.
- 📓 Research Journal – Organize questions into named topics with full multi-turn Q&A history. The AI remembers the conversation within each topic.
- 🔀 Multi-Provider Support – Switch between Google Gemini, OpenAI, Anthropic Claude, or local Ollama models at runtime. No restart required.
- 🖥️ Interactive CLI – Rich terminal interface with commands for topic management, model switching, and diagnostics.
- 🔌 MCP Server Mode – Run as a Model Context Protocol server for integration with Claude Code or any MCP-compatible AI agent.
- 🎮 In-Game Journal Viewer – Draggable, scrollable frame with topic navigation, item quality colorization, right-click context menus, and mouse wheel support.
- ⚙️ Runtime Configuration – Add API keys (`/model add`), switch models (`/model switch`), and toggle test mode (`/test on|off`) without editing files.
- 🧪 Test Mode – Validate your configuration and test the full pipeline without consuming API credits.
- 🚀 Response Caching – Identical queries return instantly from cache, saving API usage and latency (see the sketch after this list).
- 🐛 Debug Mode – `--debug` flag or `DEBUG=true` in `.env` enables detailed diagnostic logging to help troubleshoot issues.
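The caching behavior can be sketched as a key built from the question, the character context, and a fingerprint of the topic history (the same fingerprint idea the changelog mentions for avoiding stale hits). Names and layout here are hypothetical, not the relay's actual implementation:

```python
# Hypothetical cache keyed on question + context + a history fingerprint.
import hashlib
import json

_cache: dict[str, str] = {}

def cache_key(question: str, context: str, history: list[dict]) -> str:
    # Fingerprinting the topic history prevents a stale hit when the same
    # question recurs after the conversation has moved on.
    fingerprint = hashlib.sha256(
        json.dumps(history, sort_keys=True).encode()
    ).hexdigest()
    return hashlib.sha256("\x1f".join([question, context, fingerprint]).encode()).hexdigest()

def cached_ask(question: str, context: str, history: list[dict], llm_call) -> str:
    key = cache_key(question, context, history)
    if key not in _cache:                 # miss: pay for one API call
        _cache[key] = llm_call(question)
    return _cache[key]                    # hit: identical query returns instantly
```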
## Requirements

- Game: World of Warcraft TBC Classic / Anniversary Edition (Interface version 20505)
- Runtime: Python 3.10+
- Python Libraries: `litellm`, `python-dotenv`, `rich`, `mcp`, `pyfiglet`
- API Key: Google Gemini (recommended – free tier available), OpenAI, Anthropic, or a local Ollama instance
## Installation

Copy the `AzerothLM` folder into your WoW AddOns directory:

```
World of Warcraft/_anniversary_/Interface/AddOns/AzerothLM/
```

Install the Python dependencies:

```
pip install -r requirements.txt
```

Copy `.env.example` to `.env`:

```
cp .env.example .env
```

Edit `.env` with your paths and API key:

```
# Path to your account's SavedVariables file
WOW_SAVED_VARIABLES_PATH=C:\...\WTF\Account\YOUR_ACCOUNT\SavedVariables\AzerothLM.lua
# Path to the installed addon folder
WOW_ADDON_PATH=C:\...\Interface\AddOns\AzerothLM
# Add at least one API key (or use /model add at runtime)
GEMINI_API_KEY=your_key_here
MODEL_NAME=gemini/gemini-2.5-flash
```
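These are plain environment settings, so any script can verify them with `python-dotenv` (already in the requirements). A minimal sketch – not the relay's actual startup code; the variable names match the example above:

```python
# Sketch: reading the .env settings with python-dotenv.
import os
from dotenv import load_dotenv

load_dotenv()  # loads .env from the current working directory

saved_vars = os.environ["WOW_SAVED_VARIABLES_PATH"]
addon_path = os.environ["WOW_ADDON_PATH"]
model_name = os.getenv("MODEL_NAME", "gemini/gemini-2.5-flash")
print(f"Context source: {saved_vars}\nActive model: {model_name}")
```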
## Quick Start

- Log into WoW, type `/alm scan` to collect your character context, then `/reload`
- Start the relay: `python AzerothLM_Relay.py`
- Create your first topic: `/new gear-upgrades`
- Ask a question: `/ask gear-upgrades What should I upgrade first?`
- Type `/reload` in-game to see the response in the journal
## CLI Commands

Start the relay CLI:

```
python AzerothLM_Relay.py
```

| Command | Description |
|---|---|
| `/new <title>` | Create a new research topic |
| `/ask <slug> <question>` | Ask a question on a topic |
| `/topics` | List all topics |
| `/view <slug>` | View full Q&A history for a topic |
| `/delete <slug>` | Delete a topic |
| `/model` | Show configured providers and active model |
| `/model add` | Add a new provider API key interactively |
| `/model switch` | Switch to a different model at runtime |
| `/test on\|off` | Toggle test mode (mock responses, no API cost) |
| `/context` | Show current character context |
| `/usage` | Show API usage stats and token counts |
| `/status` | Show relay configuration |
| `/help` | Show all commands |
| `/quit` | Exit the relay |
## MCP Server Mode

AzerothLM can run as a Model Context Protocol server, exposing its research journal tools to any MCP-compatible AI agent – Claude Code, custom apps, or your own tooling.

```
python AzerothLM_Relay.py --mcp

# With diagnostic logging:
python AzerothLM_Relay.py --mcp --debug 2>>debug.log
```

Add to your `.mcp.json`:

```json
{
  "mcpServers": {
    "azerothlm": {
      "command": "python",
      "args": ["path/to/AzerothLM_Relay.py", "--mcp"]
    }
  }
}
```

Once connected, Claude Code can call all six tools directly in conversation – creating topics, asking character-aware questions, and managing your journal without the CLI.
| Tool | Description | Parameters |
|---|---|---|
| `create_topic` | Create a new research topic in the journal | `title: str` |
| `ask_question` | Ask a question with full character context and topic history | `topic_slug: str`, `question: str` |
| `list_topics` | List all topics with entry counts and last-updated timestamps | – |
| `get_topic` | Retrieve the full Q&A history for a topic | `topic_slug: str` |
| `get_character_context` | Read live character data from WoW SavedVariables | – |
| `delete_topic` | Delete a topic and sync the removal to the in-game journal | `topic_slug: str` |
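Outside of Claude Code, the server can be driven programmatically. A minimal sketch using the official `mcp` Python SDK over stdio – the tool names and parameters come from the table above; the script path is a placeholder:

```python
# Sketch: calling the AzerothLM MCP server from Python over stdio.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server = StdioServerParameters(command="python", args=["path/to/AzerothLM_Relay.py", "--mcp"])

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            await session.call_tool("create_topic", {"title": "gear-upgrades"})
            result = await session.call_tool(
                "ask_question",
                {"topic_slug": "gear-upgrades", "question": "What should I upgrade first?"},
            )
            print(result.content)  # the AI's answer, with character context applied

asyncio.run(main())
```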
## Character Context

`get_character_context` returns a JSON object with everything the addon has collected. Every `ask_question` call includes this automatically.
| Category | Data |
|---|---|
| Player | Level, class, race, current zone and subzone, gold on hand |
| Gear | All 19 equipment slots with item name and ID (Head through Tabard) |
| Professions | All professions and weapon skills with current rank and max rank |
| Quests | Active quest log with quest ID, title, level, and completion status |
| Reputations | All tracked factions with current standing (Hostile → Exalted) |
| Talents | Talent point distribution across all three trees |
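For illustration, the returned object looks roughly like the following. The field names and values here are hypothetical – the exact keys are whatever the addon serializes:

```json
{
  "player": { "level": 60, "class": "Hunter", "zone": "Stormwind City", "gold": 142 },
  "gear": { "Head": { "name": "Beaststalker's Cap", "id": 12345 } },
  "professions": { "Mining": { "rank": 300, "max": 300 }, "Skinning": { "rank": 306, "max": 375 } },
  "quests": [ { "id": 12345, "title": "...", "level": 61, "complete": false } ],
  "reputations": { "The Aldor": "Hostile", "The Sha'tar": "Neutral" },
  "talents": { "Beast Mastery": 31, "Marksmanship": 20, "Survival": 10 }
}
```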
## In-Game Commands

After using the CLI to ask questions, type `/reload` to sync responses into the journal.
| Command | Description |
|---|---|
| `/alm` | Toggle the journal window |
| `/alm scan` | Refresh character context (gear, professions, quests, reputations) |
| `/alm refresh` | Reload UI shortcut |
| `/alm topics` | List all topics in chat |
| `/alm delentry <N>` | Delete entry N from the current topic |
| `/alm reset` | Clear and rebuild the journal from the latest relay data |
| `/alm wipe` | Wipe all journal data and queue deletions on the relay side |
| `/alm help` | Show all in-game commands |
## Test Mode

Validate your setup without consuming API credits:

- `/test on` – enable test mode, run configuration checks
- `/test off` – disable test mode

When active, all AI responses are replaced with mock data prefixed `[TEST MODE]`. The `/test on` command also runs a full diagnostic verifying your `.env`, API key, SavedVariables path, and addon path.
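A rough sketch of the kind of validation `/test on` performs – assumed behavior, not the relay's actual diagnostic code; the key names follow LiteLLM's standard environment variables:

```python
# Sketch of the configuration checks run by /test on (assumed behavior).
import os

def run_diagnostics() -> list[str]:
    problems = []
    for var in ("WOW_SAVED_VARIABLES_PATH", "WOW_ADDON_PATH"):
        path = os.getenv(var)
        if not path:
            problems.append(f"{var} is not set")
        elif not os.path.exists(path):
            problems.append(f"{var} points to a missing path: {path}")
    keys = ("GEMINI_API_KEY", "OPENAI_API_KEY", "ANTHROPIC_API_KEY")
    if not any(os.getenv(k) for k in keys):
        problems.append("no provider API key configured")
    return problems  # empty list means the pipeline is ready
```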
## Changelog

- 📜 MIT License added – AzerothLM is now open source
- ✨ Stale model IDs updated in `.env.example` (Anthropic, OpenAI, Gemini)
- ✨ Requirements pinned to minimum versions for reproducible installs
- 🐛 Fixed user-facing "MCP tools" jargon – messages now say "relay CLI"

- 🆕 Context-aware prompts – questions route to relevant character data sections only
- 🆕 First-run interactive path setup – relay guides new users through `.env` configuration
- 🆕 Help system rework – `/help` shows category table, `/help <cmd>` shows detail panel
- 🐛 Fixed response cache stale hits – history fingerprint included in cache key
- 🆕 In-game journal management commands: `/alm reset`, `/alm wipe`, `/alm help`
- 🆕 Item quality colorization in the journal viewer – gear names display in their rarity color
- 🆕 Richer character context: reputation standings and talent distribution now included
- 🆕 Debug mode: `--debug` startup flag and `DEBUG=true` env var for diagnostic logging
- ✨ CLI UX improvements: ASCII gradient header, loading spinners, cleaner output
- ✨ Improved error messages and input validation throughout the CLI
- ✨ Contextual hints when a topic slug isn't found
- 🐛 Fixed signal sync edge cases – empty journal now correctly signals in-game cleanup
- 🐛 Fixed item quality pattern matching – more lenient parsing with name-based fallback

- 🆕 Multi-provider LLM support: Google Gemini, OpenAI, Anthropic, local Ollama
- 🆕 Interactive CLI with `/model add`, `/model switch`, `/test on|off`, `/status`, `/usage`
- 🆕 MCP server mode for AI agent integration
- 🆕 Research journal with named topics and full multi-turn Q&A history
- 🆕 In-game journal viewer: draggable frame, topic tabs, right-click menus, mouse wheel scroll
- 🆕 Response caching and built-in rate limiting with exponential backoff
- 🆕 Air-gap bridge architecture: file-based relay between WoW sandbox and external AI
AzerothLM is in public beta – core features are stable but rough edges exist. If you hit a bug, have a suggestion, or want to share how you're using it, open an issue on GitHub.


