NotchNet is an AI-powered Minecraft knowledge companion that uses RAG (Retrieval Augmented Generation) to answer questions about Minecraft and its mods. Check it out: https://github.com/aaravchour/notchnet-mod
- Local RAG Pipeline: Runs entirely on your machine using Ollama (see the sketch after this list).
- Dynamic Wiki Fetching: Can fetch and index any MediaWiki-based wiki (e.g., RLCraft, Feed The Beast).
- Auto Mod Detection: Automatically finds and learns about installed mods when the game launches.
- Cloud Mode: Support for offloading AI inference to a remote server for low-end machines.
- Mod Awareness: Context-aware answers based on the loaded wikis.
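
To make "local RAG" concrete, here is a minimal sketch of one retrieval-augmented round trip against Ollama's REST API. `/api/embeddings` and `/api/generate` are Ollama's standard endpoints; the embedding model name, the toy in-memory index, and the retrieval logic are illustrative assumptions, not NotchNet's actual implementation:

```python
import json
import urllib.request

import numpy as np

OLLAMA = "http://127.0.0.1:11434"

def _post(path: str, payload: dict) -> dict:
    req = urllib.request.Request(
        OLLAMA + path,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def embed(text: str) -> np.ndarray:
    # "nomic-embed-text" is an illustrative embedding model choice.
    out = _post("/api/embeddings", {"model": "nomic-embed-text", "prompt": text})
    return np.asarray(out["embedding"])

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy index; in the real pipeline these chunks come from indexed wiki pages.
chunks = [
    "A shield is crafted from six planks and one iron ingot.",
    "Iron ingots are smelted from raw iron in a furnace.",
]
index = [(chunk, embed(chunk)) for chunk in chunks]

def ask(question: str) -> str:
    q = embed(question)
    # Retrieve the chunk most similar to the question...
    context, _ = max(index, key=lambda item: cosine(q, item[1]))
    # ...and let the LLM answer grounded in that context.
    prompt = f"Use this context to answer.\nContext: {context}\nQuestion: {question}"
    out = _post("/api/generate", {"model": "llama3:8b", "prompt": prompt, "stream": False})
    return out["response"]

print(ask("How do I make a shield?"))
```

The real pipeline persists its index and uses whichever model you configure, but the embed → retrieve → prompt → generate flow sketched here is the general shape of any local RAG setup.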
- Python 3.10+
- Ollama installed and running.
- (For local model) 8GB of VRAM and 16GB of RAM for a smooth experience.
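
Before launching, you can confirm Ollama is actually reachable; `/api/tags` is Ollama's standard endpoint for listing installed models:

```python
import json
import urllib.request

# Ollama serves on port 11434 by default; /api/tags lists installed models.
with urllib.request.urlopen("http://127.0.0.1:11434/api/tags") as resp:
    models = [m["name"] for m in json.load(resp)["models"]]
print("Ollama is running. Installed models:", models)
```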
- Clone the repository.
- Run the startup script:

  Windows:

  ```
  start_local.bat
  ```

  Mac/Linux:

  ```
  chmod +x start_local.sh
  ./start_local.sh
  ```
  This script will:
  - Create a virtual environment and automatically install dependencies.
  - Ask whether you want to run locally or use cloud/remote AI:
    - Local: pulls the necessary Ollama model (default: `llama3:8b`).
    - Cloud: configures the connection to your remote Ollama instance.
  - Start the API server.
- Interact with the API: The server runs at `http://localhost:8000`.

  Ask a question:

  ```
  curl -X POST http://localhost:8000/ask \
    -H "Content-Type: application/json" \
    -d '{"question": "How do I make a shield?"}'
  ```
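
For scripted use, here is the same request from Python's standard library. The exact JSON shape of the reply isn't documented here, so the sketch just prints whatever comes back:

```python
import json
import urllib.request

# POST a question to the local NotchNet API server.
req = urllib.request.Request(
    "http://localhost:8000/ask",
    data=json.dumps({"question": "How do I make a shield?"}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    # Field names in the reply are not documented here, so print it whole.
    print(json.load(resp))
```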
You can teach NotchNet about new mods by fetching their wikis.
Endpoint: `POST /admin/add-wiki`
Example (Teaching it RLCraft):
```
curl -X POST http://localhost:8000/admin/add-wiki \
  -H "Content-Type: application/json" \
  -d '{
    "api_url": "https://rlcraft.fandom.com/api.php",
    "categories": ["Crafting", "Items", "Mobs"]
  }'
```

Note: This process runs in the background. It will fetch pages, clean them, rebuild the index, and reload the bot's memory.
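
For context on what that background job works with: any MediaWiki-compatible `api.php` exposes a standard `categorymembers` query, and the sketch below lists the pages such a fetch would pull for one category. This illustrates the wiki API itself, not NotchNet's internal fetcher:

```python
import json
import urllib.parse
import urllib.request

API = "https://rlcraft.fandom.com/api.php"

# Standard MediaWiki query: list the pages in one category (up to 50 here).
params = {
    "action": "query",
    "list": "categorymembers",
    "cmtitle": "Category:Crafting",
    "cmlimit": "50",
    "format": "json",
}
with urllib.request.urlopen(API + "?" + urllib.parse.urlencode(params)) as resp:
    data = json.load(resp)

# Each title is a page the indexer would then fetch, clean, and embed.
for page in data["query"]["categorymembers"]:
    print(page["title"])
```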
See config.py for default settings. You can override them using environment variables or a .env file.
| Variable | Description | Default |
|---|---|---|
| `LOCAL_MODE` | Bypass API key checks for local use | `true` (set by the start script) |
| `LLM_MODEL` | Ollama model to use | `llama3` |
| `OLLAMA_HOST` | URL of the Ollama server | `http://127.0.0.1:11434` |
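
For example, a minimal `.env` using the variables from the table above (the values shown are illustrative; `llama3:8b` matches the start script's default pull):

```
# .env — read at startup; overrides config.py defaults
LOCAL_MODE=true
LLM_MODEL=llama3:8b
OLLAMA_HOST=http://127.0.0.1:11434
```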
GPLv3 License. See LICENSE for details.