Generate structured, JSON-valid RPG quest content (title, description, NPC, objectives, rewards) using open LLMs.
This project demonstrates how to steer an LLM to reliably emit a single JSON object that conforms to a lightweight quest schema. It applies prompt design + post‑processing heuristics to coerce slightly messy model output into structured data.
- Structured quest schema (title, short description, NPC, objectives, rewards)
- Automatic JSON sanitation & validation (removes fences, normalizes quotes, strips trailing commas)
- Retry loop with adaptive reminder if the model drifts from spec
- Choice of Hugging Face text-generation models (default: `Qwen/Qwen3-1.7B`)
- Interactive or fully non-interactive CLI
- Option to return dataclass-converted output
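
The sanitation step described above can be sketched roughly as follows. This is a simplified illustration, not the actual implementation in `quest_generator/parsing.py`, and the function name `sanitize_json` is hypothetical:

```python
import json
import re

def sanitize_json(text: str) -> dict:
    """Coerce slightly messy model output into parseable JSON (illustrative sketch)."""
    # Drop Markdown code fences, with or without a language tag
    text = re.sub(r"```(?:json)?", "", text).strip()
    # Normalize curly "smart" quotes to plain double quotes
    text = text.replace("\u201c", '"').replace("\u201d", '"')
    # Strip trailing commas before a closing brace or bracket
    text = re.sub(r",\s*([}\]])", r"\1", text)
    return json.loads(text)
```

Each transformation is harmless on already-valid JSON, so the sketch can be applied unconditionally before parsing.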
```
ai_quest_generator.py      # Backward-compatible shim (delegates to package CLI)
quest_generator/
    __init__.py            # Public exports (generate_quest, NPC, Quest, load_generator)
    cli.py                 # CLI argument parsing + main()
    models.py              # Dataclasses (NPC, Quest)
    prompts.py             # Prompt template + builder
    loader.py              # Model loader (Hugging Face pipeline)
    parsing.py             # JSON sanitation & validation helpers
    generators.py          # Core generation & retry logic
requirements.txt           # Runtime dependencies
README.md                  # Documentation
LICENSE                    # MIT License
```
The code has been modularized for clarity and future extension (tests, alternate backends, web API). The legacy script path still works.
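
For orientation, the dataclasses in `models.py` look roughly like this. Field names are inferred from the quest schema; treat this as a sketch rather than the exact definitions:

```python
from dataclasses import dataclass, field

@dataclass
class NPC:
    name: str
    role: str
    short_dialogue: str

@dataclass
class Quest:
    title: str
    short_description: str
    npc: NPC
    objectives: list[str] = field(default_factory=list)
    rewards: list[str] = field(default_factory=list)
```

Mutable defaults use `field(default_factory=list)` so each `Quest` instance gets its own lists.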
- Python 3.10+
- pip (or uv / pipx if you prefer)
- Sufficient RAM/VRAM for chosen model (small CPU-friendly models recommended first)
Optional (performance):
- GPU with recent CUDA drivers (for faster inference with `torch` + `accelerate`)
```bash
git clone https://github.com/tsonkov/quest_generator.git
cd quest_generator
python -m venv .venv
source .venv/bin/activate  # Windows PowerShell: .venv\Scripts\Activate.ps1
pip install -r requirements.txt
```
```bash
python ai_quest_generator.py --setting "clockwork desert city" --tone "hopeful, adventurous" --seed "mechanical sphinx, lost caravan" --model "Qwen/Qwen3-1.7B" --attempts 2
```

Interactive mode (prompts you for missing args):

```bash
python ai_quest_generator.py
```

Package CLI invocation (equivalent):

```bash
python -m quest_generator.cli --setting "clockwork desert city" --tone "hopeful, adventurous" --seed "mechanical sphinx, lost caravan"
```

Example output:

```json
{
  "quest": {
    "title": "Secrets of the Sandwound Orrery",
    "short_description": "Within the brass-veined dunes, a silent orrery ticks beneath glass domes...",
    "npc": {
      "name": "Sira Coilwright",
      "role": "Clockwork cartographer",
      "short_dialogue": "The desert keeps time better than any cathedral—if you learn to read it."
    },
    "objectives": [
      "Recover the fallen gear-plates from the dust gullies",
      "Align the stellar spindles inside the buried orrery",
      "Decode the caravan's fractured route cipher"
    ],
    "rewards": [
      "Star-etched compass",
      "Reputation with desert caravans",
      "Chart of hidden wind tunnels"
    ]
  },
  "attempts": 1
}
```

| Argument | Required | Default | Description |
|---|---|---|---|
| `--setting` | Yes* | — | World / locale description. Required unless interactive. |
| `--tone` | Yes* | — | Stylistic tone (comma-separated adjectives). |
| `--seed` | Yes* | — | Seed elements (comma-separated nouns/phrases). |
| `--model` | No | `Qwen/Qwen3-1.7B` | Hugging Face model ID. |
| `--max-new-tokens` | No | 300 | Generation length cap. |
| `--temperature` | No | 0.8 | Sampling temperature. |
| `--top-p` | No | 0.95 | Nucleus sampling probability mass. |
| `--attempts` | No | 3 | Max retries seeking valid JSON. |
| `--raw` | No | False | If parse fails, print raw output. |
| `--dataclass` | No | False | Convert to internal dataclasses before dumping. |
*Not required if you let the program prompt you interactively (TTY only).
- Builds a strict prompt with an inline JSON schema.
- Calls a Hugging Face `text-generation` pipeline.
- Attempts to parse output directly; if that fails, applies sanitation (removes fences, normalizes quotes, trims trailing commas).
- Validates required keys & types.
- Retries with a light appended reminder if invalid.
- Returns structured JSON plus how many attempts were needed.
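
The retry loop described above can be reduced to a minimal sketch. The real logic lives in `quest_generator/generators.py`; the names `generate_with_retries`, `REMINDER`, and `REQUIRED_KEYS` here are hypothetical, and sanitation is elided for brevity:

```python
import json

REMINDER = "\nReminder: respond with ONE valid JSON object only, no code fences."
REQUIRED_KEYS = {"title", "short_description", "npc", "objectives", "rewards"}

def generate_with_retries(generate, prompt: str, attempts: int = 3) -> dict:
    """Call `generate` until the output parses as JSON and carries the required keys."""
    last_error = None
    for attempt in range(1, attempts + 1):
        raw = generate(prompt)
        try:
            data = json.loads(raw)  # the real code sanitizes before parsing
            missing = REQUIRED_KEYS - data.get("quest", {}).keys()
            if not missing:
                return {"quest": data["quest"], "attempts": attempt}
            last_error = f"missing keys: {missing}"
        except json.JSONDecodeError as exc:
            last_error = str(exc)
        prompt = prompt + REMINDER  # light appended reminder before the next try
    raise ValueError(f"No valid JSON after {attempts} attempts: {last_error}")
```

Because the reminder is appended rather than replacing the prompt, the model keeps the full original instructions on every retry.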
- Small models (<2B params) are fastest but may hallucinate JSON formatting more.
- Increase `--attempts` for larger / less instruction-tuned models.
- Consider installing `accelerate` and running on GPU for larger models.
- To experiment with different models: `--model mistralai/Mistral-7B-Instruct-v0.2` (ensure you have the hardware and have accepted the model license where needed).
| Issue | Cause | Mitigation |
|---|---|---|
| Out-of-memory | Model too large | Try a smaller instruct model or add `--max-new-tokens 180` |
| Repeated invalid JSON | Model not instruction-tuned | Use a chat/instruct variant; raise `--attempts` |
| Slow startup | Model weight download | It's a one-time cache in `~/.cache/huggingface` |
| Unicode errors in console | Windows codepage | Run `chcp 65001` or use a modern terminal |
- (DONE) Modularize into package
- Add unit tests with a fake LLM client
- Introduce `pyproject.toml` / packaging & publish to PyPI
- GitHub Actions workflow (lint + tests + example generation artifact)
- Pluggable ranking / scoring of multiple candidate quests
- Export formats: Markdown, HTML, YAML
- Additional backends (OpenAI, local vLLM server) behind interface
- Optional FastAPI microservice / simple web UI
Prototype phase: feel free to open issues with ideas or model compatibility notes. Now that the code is modularized, PR guidelines & tests will follow.
Released under the MIT License – see LICENSE.
Why JSON instead of Markdown? Structured JSON is easier to post-process or plug into a game toolchain.
Why does the prompt mention code fences but we still strip them? Some models insist on wrapping JSON; the sanitation phase is defensive.
Can I remove the retries? Yes—call generate_quest with attempts=1 for speed, at the cost of more parse failures.
Inside Python:

```python
from quest_generator import load_generator, generate_quest

gen = load_generator("Qwen/Qwen3-1.7B")
result = generate_quest(
    setting="floating crystal archipelago",
    tone="mystical, hopeful",
    seed="shard monk, tidal engine",
    generator=gen,
    attempts=2,
)
print(result["quest"]["title"], result["attempts"])
```

If you cloned the repo and want editable (future) packaging support, you can later run:

```bash
pip install -e .  # after adding pyproject.toml
```

- Hugging Face Transformers team
- Open source instruction-tuned model authors
Feel free to suggest additional sections or improvements via Issues.