WarGames turns OpenRA Red Alert into a computer-use environment for agentic AI. An agent receives pixels and a small computer-use (CUA) tool set, then sends mouse, keyboard, and wait actions back to the simulator.
The runtime never calls an LLM and never trains a model. It does three things: capture frames, apply tool calls, and compute rewards from private simulator state. Your agent or external harness owns model calls. Prime/prime-rl owns gradient updates.
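The capture/apply/score loop above can be sketched as follows. This is an illustrative sketch only: `SimulatorEnv`, `frame()`, `apply()`, and `reward()` are hypothetical names, not the real WarGames API.

```python
# Illustrative sketch of the runtime loop: capture a frame, apply a tool
# call, compute reward from private simulator state. All names here are
# hypothetical stand-ins, not the WarGames API.

class SimulatorEnv:
    """Stand-in simulator with private state the agent never sees."""

    def __init__(self):
        self._private_score = 0  # hidden simulator state
        self.steps = 0

    def frame(self):
        # Capture a frame (a placeholder dict instead of real pixels).
        return {"pixels": f"frame-{self.steps}"}

    def apply(self, action):
        # Apply a mouse/keyboard/wait tool call to the simulator.
        self.steps += 1
        if action["type"] == "click":
            self._private_score += 1

    def reward(self):
        # Reward is computed from private state, never shown to the agent.
        return float(self._private_score)


def run_episode(env, agent_decide, max_steps=3):
    total = 0.0
    for _ in range(max_steps):
        obs = env.frame()           # 1. capture
        action = agent_decide(obs)  # model call happens outside the runtime
        env.apply(action)           # 2. apply tool call
        total += env.reward()       # 3. compute reward
    return total
```

Note that the model call sits behind `agent_decide`: the runtime itself never talks to an LLM, matching the split described above.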
This is a short Kimi K2.5 smoke run. The agent receives screenshots, chooses CUA actions, and WarGames applies them to the live OpenRA window.
```bash
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```

Red Alert needs a working OpenRA checkout:
```bash
export LAYERBRAIN_WARGAMES_REDALERT_OPENRA_ROOT=/path/to/openra-source
export LAYERBRAIN_WARGAMES_REDALERT_OPENRA_BINARY=/path/to/openra-source/launch-game.sh
```

Create `local.env` from the template. `local.env` is gitignored.
```bash
cp local.env.example local.env
```

Use provider-standard names for model keys:
```bash
OPENAI_API_KEY=
OPENAI_BASE_URL=
OPENAI_MODEL=
ANTHROPIC_API_KEY=
ANTHROPIC_MODEL=
GOOGLE_API_KEY=
GOOGLE_MODEL=
```

`LAYERBRAIN_PRIME` is a publish/admin key only. WarGames does not use it for model inference.
A task is a mission + seed + split + reward profile.
```bash
wargames tasks --game redalert --split debug
```

Splits:

- `debug`: tiny smoke tasks
- `train`: tasks agents may learn from
- `validation`: tune prompts, profile weights, and max steps
- `test`: held-out, reported benchmark tasks
- `curriculum`: ordered train tasks
The catalog rejects the same (mission_id, seed) pair appearing in multiple splits.
It also rejects train-only reward profiles on the test split.
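The two catalog constraints can be illustrated with a small check. This is a sketch mirroring the rules described above, not the catalog's real implementation; the function and field names are hypothetical.

```python
# Illustrative catalog validation: a (mission_id, seed) pair may live in
# only one split, and train-only profiles are rejected on test.
# Names here are hypothetical, not the real WarGames internals.

TRAIN_ONLY_PROFILES = {"dense", "aggressive_stress_test"}


def validate_catalog(tasks):
    """tasks: list of dicts with mission_id, seed, split, reward_profile."""
    seen = {}
    errors = []
    for t in tasks:
        key = (t["mission_id"], t["seed"])
        if key in seen and seen[key] != t["split"]:
            errors.append(f"{key} appears in both {seen[key]} and {t['split']}")
        seen[key] = t["split"]
        if t["split"] == "test" and t["reward_profile"] in TRAIN_ONLY_PROFILES:
            errors.append(f"{key}: train-only profile {t['reward_profile']!r} on test")
    return errors
```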
Agents are named YAML configs under agents/ or your own --agent-dir.
```bash
wargames agents list
wargames agents validate agents/scripted-wait.yaml
```

Example:
```yaml
id: my-agent
driver: python
factory: my_project.agent:create_agent
provider: openai
model: ${OPENAI_MODEL}
api_key_env: OPENAI_API_KEY
base_url: ${OPENAI_BASE_URL}
config:
  temperature: 0.2
  top_p: 0.9
  max_tokens: 256
  timeout_seconds: 20
  disable_reasoning: false
  reject_reasoning_models: false
  reasoning_effort: medium
  extra_body:
    enable_thinking: true
  chat_template_kwargs:
    enable_thinking: true
```

The Python factory receives the AgentSpec and returns an object implementing:
```python
async def start(task): ...
async def decide(obs): ...
async def close(): ...
```

For OpenAI-compatible providers, `config` is passed through to the local
agent wrapper. Use it to choose model behavior per run. For fast non-thinking
smoke runs, set disable_reasoning: true and keep max_tokens small. For
models that need internal thinking, set disable_reasoning: false and pass the
provider-specific extra_body they require. WarGames does not own those keys or
settings; the agent config does.
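A minimal agent satisfying this interface might look like the following. This is a sketch assuming only the async `start`/`decide`/`close` contract shown above; the `spec` handling and the action dict's shape are illustrative.

```python
# Minimal python-driver agent sketch: implements the async
# start/decide/close interface and a factory entry point. The action
# shape ({"type": "wait", ...}) is an illustrative assumption.

import asyncio


class WaitAgent:
    """Always waits: useful as a scripted smoke-test agent."""

    def __init__(self, spec):
        self.spec = spec
        self.task = None

    async def start(self, task):
        self.task = task  # remember which task we are playing

    async def decide(self, obs):
        # obs carries the screenshot; this agent ignores it and waits.
        return {"type": "wait", "seconds": 1.0}

    async def close(self):
        pass


def create_agent(spec):
    # Entry point referenced from YAML as my_project.agent:create_agent
    return WaitAgent(spec)
```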
```bash
wargames run \
  --task redalert.debug.smoke.seed-000000 \
  --agent scripted-wait \
  --watch none \
  --record summary_only
```

For demo/debug runs, record frames and export video later:
```bash
wargames run \
  --task redalert.debug.smoke.seed-000000 \
  --agent scripted-wait \
  --watch window \
  --record full \
  --video frames
wargames export <run_id> --out exports --video mp4
```

MP4 is export-only. Runs write frames; export turns frames into a shareable video.
List profiles:
```bash
wargames profile list --game redalert
```

Built-ins:

- `terminal`: win/loss only
- `standard`: terminal + mild dense shaping
- `dense`: training-only dense profile
- `protective`: defense-aligned profile that rewards friendly-force preservation
- `aggressive_stress_test`: training-only contrast profile, blocked from test
Validate a profile YAML:
```bash
wargames profile validate scenarios/redalert/profiles/protective.yaml
```

Profiles are the behavior dial: the same model can be evaluated under different profiles to measure whether reward design changes behavior.
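One way to picture the behavior dial is a profile as a weight vector over reward primitives. This is a conceptual sketch; the real schema and field names live in docs/reward_profiles.md, and the primitive names below are hypothetical.

```python
# Conceptual sketch: a reward profile as a weighted sum over reward
# primitives. Primitive names and weights are hypothetical examples,
# not the documented schema.

def shaped_reward(primitives, profile):
    """primitives: raw reward signals from private simulator state.
    profile: mapping of primitive name -> weight."""
    return sum(profile.get(name, 0.0) * value
               for name, value in primitives.items())


# Two profiles scoring the same step differently:
protective = {"win": 10.0, "friendly_units_lost": -2.0}
aggressive = {"win": 10.0, "enemy_units_destroyed": 1.0}
step = {"win": 0.0, "friendly_units_lost": 3.0, "enemy_units_destroyed": 5.0}
```

Under `protective` the step scores negatively (friendly losses dominate), while `aggressive` scores it positively, which is exactly the contrast the built-in profiles are designed to expose.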
The full profile schema, every Red Alert reward field, built-in primitives, and
Prime RL examples are documented in docs/reward_profiles.md.
Local:
```bash
wargames run --task ... --agent ... --watch window
```

Replay public events from disk:

```bash
wargames watch <run_id>
```

Public event files never include hidden state. Private traces are only written when explicitly requested.
The Prime implementation lives in wargames.environments.prime.
The public Prime environment is layerbrain/wargames.
environments/prime is only the thin publish wrapper.
```bash
uv pip install -e ./environments/prime
prime eval run wargames --config environments/prime/configs/eval-debug.toml -n 1 -r 1
```

Prime RL uses the shipped TOML configs. WarGames supplies the environment and reward signal; Prime/prime-rl owns rollouts, batching, GPUs, and gradient updates.
RL training changes behavior by changing `reward_profile` in the Prime config:
```toml
split = "train"
reward_profile = "protective"
recorder_mode = "none"
max_steps = 500
rollouts_per_example = 8
```

Use `dense` or `protective` on train/curriculum, then report against `terminal` or `standard` on test.
```bash
source venv/bin/activate
python -m unittest tests.evaluation tests.harness
python -m unittest discover -s environments/prime/tests/conformance
```