RuleMindLLM is a local question-answering system built with Ollama and ChromaDB that answers rules questions about popular games like Uno, Monopoly, and Poker.
It uses Retrieval-Augmented Generation (RAG) to provide accurate, grounded responses by leveraging vector similarity search on embedded chunks of official rulebooks.
**Ollama**
- Runs the local LLM efficiently on your machine
- Handles natural language understanding and response generation
- Accepts external context retrieved from ChromaDB to ground responses in the documents (see the sketch below)
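
Grounding in practice could look like the following minimal sketch, assuming the `ollama` Python client and a locally pulled `gemma` model; `ask_with_context` and the prompt wording are illustrative, not this project's actual code:

```python
# Minimal sketch: grounding an Ollama response in retrieved rulebook text.
# Assumes the `ollama` Python client (pip install ollama) and a pulled model.
import ollama

def ask_with_context(question: str, chunks: list[str], model: str = "gemma") -> str:
    # Stuff the retrieved rulebook excerpts into the prompt so the model
    # answers from the documents instead of from memory.
    context = "\n\n".join(chunks)
    prompt = (
        "Answer the question using only the rulebook excerpts below.\n\n"
        f"Excerpts:\n{context}\n\n"
        f"Question: {question}"
    )
    response = ollama.chat(model=model, messages=[{"role": "user", "content": prompt}])
    return response["message"]["content"]
```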
 
**ChromaDB**
- Converts game rule PDFs into vector embeddings
- Stores the embeddings for fast similarity search
- Retrieves the most relevant rule chunks for each user query
- Supports RAG by supplying that context to the LLM at inference time (see the sketch below)
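
A minimal sketch of the store-and-retrieve side, using Chroma's default embedding function; the collection name, chunk text, and IDs below are illustrative:

```python
# Minimal sketch: index rule chunks and retrieve the most relevant ones.
import chromadb

client = chromadb.PersistentClient(path="./rule_db")
collection = client.get_or_create_collection(name="game_rules")

# Index pre-chunked rulebook text (Chroma embeds each document on add).
chunks = [
    "Draw Two: the next player draws two cards and forfeits their turn.",
    "Wild Draw Four: may only be played if you hold no card matching the color.",
]
collection.add(
    documents=chunks,
    ids=[f"uno-{i}" for i in range(len(chunks))],
    metadatas=[{"game": "uno"}] * len(chunks),
)

# Pull back the chunks most similar to the user's question.
results = collection.query(query_texts=["Can you stack a Draw Two?"], n_results=2)
top_chunks = results["documents"][0]
```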
 
Ask:
"Can you stack a Draw Two in Uno?"
→ RuleMindLLM answers directly from the Uno rulebook PDF, explaining whether stacking is officially allowed.
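
Putting the two sketches together, that example question could flow through retrieval and generation roughly like this (reusing the assumed `collection` and `ask_with_context` names from above):

```python
# Illustrative end-to-end flow for the example question, reusing the
# `collection` and `ask_with_context` sketches above (assumed names).
question = "Can you stack a Draw Two in Uno?"

results = collection.query(query_texts=[question], n_results=3)
top_chunks = results["documents"][0]  # most relevant rulebook excerpts

print(ask_with_context(question, top_chunks))
```
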
**Getting started**
- Install Ollama and run `ollama run gemma`
- Clone this repo
- Load game rule PDFs → embed → store in ChromaDB (a minimal ingestion sketch follows this list)
- Ask questions via CLI or UI (under development)
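
For the ingestion step, a minimal sketch assuming `pypdf` for text extraction and a naive fixed-size chunker; the file path and chunk size are illustrative, not this project's actual pipeline:

```python
# Minimal ingestion sketch: extract text from a rule PDF, split it into
# fixed-size chunks, and store them in the ChromaDB collection from the
# sketch above. Assumes pypdf (pip install pypdf).
from pypdf import PdfReader

def load_pdf_chunks(path: str, chunk_size: int = 500) -> list[str]:
    text = "\n".join(page.extract_text() or "" for page in PdfReader(path).pages)
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

chunks = load_pdf_chunks("rules/uno.pdf")  # path is illustrative
collection.add(documents=chunks, ids=[f"uno-pdf-{i}" for i in range(len(chunks))])
```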
 
LLMs are great at language, but they often hallucinate. RuleMindLLM anchors LLMs in real documents, making them practical assistants for rule-based domains like games.