An open-source CLI tool that evaluates AI agent judgment tilt through blind debates.
Tiltgent measures how your AI agent judges arguments — not what opinions it outputs, but which reasoning styles it systematically rewards when identities are hidden.
It works like this:
- Your agent judges 10 blind debates between calibrated worldview archetypes
- A vanilla baseline is run on the same topic to remove built-in archetype bias
- The evaluation runs 3x and results are aggregated by consensus voting
- You get a structured judgment tilt profile showing dimensional scores, dominant archetype match, contradiction patterns, and a diagnostic prompt snippet
When you change an agent's system prompt, you can't easily tell whether you've shifted its judgment patterns. Eyeballing answers catches obvious breaks, but not subtle drift in which reasoning styles the agent favors.
Tiltgent catches it. Run an eval before your prompt change, run it after, diff the results, and see exactly which dimensions moved.
```bash
npm install -g tiltgent
```

Requires Node.js 18+ and an Anthropic API key.
Set your API key as an environment variable:
```bash
export ANTHROPIC_API_KEY=sk-ant-...
```

Or pass it directly with `--api-key`.
Create a text file with your agent's system prompt:
```bash
echo "You are a helpful AI assistant that values clarity and evidence-based reasoning." > my-agent.txt
```

Run the evaluation:
```bash
tiltgent eval --prompt my-agent.txt --topic "AI governance"
```

This takes ~5 minutes and costs ~$0.25-0.30 in Anthropic API credits. You'll see a formatted profile in your terminal, and a JSON result file is saved automatically.
Options:
```
--prompt <path>    Path to system prompt text file (required)
--topic <topic>    Evaluation topic (required)
--rounds <5|10>    Number of debate rounds (default: 10)
--out <path>       Custom output path for the JSON result
--api-key <key>    Anthropic API key (overrides env var)
```
After changing your prompt, run a second eval and diff the results:
```bash
tiltgent diff results/before.json results/after.json
```

The diff shows:
- Whether the dominant archetype shifted
- Per-dimension score deltas with significance levels (none / notable / significant / major)
- Direction of movement on each axis (e.g., "shifted toward Systems")
- Total absolute drift as a quick overall signal
- Contradiction line changes
- Stability comparison
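The significance levels above can be approximated with simple thresholds on the per-dimension delta. A minimal TypeScript sketch; the cutoff values (0.05 / 0.15 / 0.30) are illustrative assumptions, not Tiltgent's actual thresholds:

```typescript
type Significance = "none" | "notable" | "significant" | "major";

// Classify a per-dimension score delta. The cutoffs here are
// illustrative assumptions, not Tiltgent's real thresholds.
function classifyDelta(before: number, after: number): Significance {
  const delta = Math.abs(after - before);
  if (delta < 0.05) return "none";
  if (delta < 0.15) return "notable";
  if (delta < 0.3) return "significant";
  return "major";
}

// Direction of movement on an axis, e.g. "shifted toward Systems".
function direction(
  before: number,
  after: number,
  negPole: string,
  posPole: string
): string {
  if (after === before) return "no movement";
  return `shifted toward ${after > before ? posPole : negPole}`;
}

console.log(classifyDelta(0.25, 0.72)); // "major"
console.log(direction(0.25, 0.72, "Humanist", "Systems")); // "shifted toward Systems"
```

Summing the absolute deltas across all five axes gives the "total absolute drift" figure.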
Save the diff as JSON:
```bash
tiltgent diff before.json after.json --out diff-report.json
```

Zero API calls. Instant. No cost.
Pretty-print a previously saved evaluation:
```bash
tiltgent inspect result.json
```

Tiltgent uses 21 calibrated worldview archetypes spanning economics, governance, risk, values, and wildcard perspectives. Each archetype has a unique system prompt with signature rhetorical moves and a 5-axis coordinate vector.
The 5 scoring dimensions:
| Axis | Negative pole | Positive pole |
|---|---|---|
| Order ↔ Emergence | Centralized planning | Decentralized self-organization |
| Humanist ↔ Systems | Human meaning/values | Efficiency/optimization |
| Stability ↔ Dynamism | Caution/preservation | Speed/risk appetite |
| Local ↔ Coordinated | Individual/community autonomy | Global coordination |
| Tradition ↔ Reinvention | Conserving what works | Rebuilding from scratch |
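Each archetype's position can be represented as one coordinate per axis, and a measured profile matched to the nearest archetype in that 5-dimensional space. A hypothetical sketch: the archetype names and coordinates below are made up for illustration, and Tiltgent's real matching logic may differ:

```typescript
// One coordinate per axis, each in [-1, 1]:
// [order_emergence, humanist_systems, stability_dynamism,
//  local_coordinated, tradition_reinvention]
type Vector5 = [number, number, number, number, number];

interface Archetype {
  name: string;
  vector: Vector5;
}

// Illustrative entries only; not Tiltgent's calibrated archetype set.
const archetypes: Archetype[] = [
  { name: "The Institutional Skeptic", vector: [0.7, 0.2, -0.4, 0.5, 0.2] },
  { name: "The Central Planner", vector: [-0.8, 0.4, -0.2, 0.6, -0.3] },
];

// Match a measured profile to the closest archetype by Euclidean distance.
function nearestArchetype(profile: Vector5): string {
  let best = archetypes[0];
  let bestDist = Infinity;
  for (const a of archetypes) {
    const d = Math.hypot(...a.vector.map((v, i) => v - profile[i]));
    if (d < bestDist) {
      bestDist = d;
      best = a;
    }
  }
  return best.name;
}
```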
Evaluations are calibrated per-topic against a vanilla baseline to remove built-in archetype persuasion bias.
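Conceptually, calibration subtracts the vanilla baseline's scores from your agent's raw scores, so only the shift attributable to your prompt remains. A rough sketch of the idea (not Tiltgent's internal code; clamping the result to [-1, 1] is an assumption):

```typescript
type Dimensions = Record<string, number>;

// Subtract baseline bias axis-by-axis. Clamping to [-1, 1] is an
// assumption about the score range, not confirmed behavior.
function calibrate(raw: Dimensions, baseline: Dimensions): Dimensions {
  const out: Dimensions = {};
  for (const axis of Object.keys(raw)) {
    const v = raw[axis] - (baseline[axis] ?? 0);
    out[axis] = Math.max(-1, Math.min(1, v));
  }
  return out;
}

// Example: if the baseline already leans +0.25 toward Emergence on this
// topic, only the remaining +0.50 is attributed to the agent's prompt.
calibrate({ order_emergence: 0.75 }, { order_emergence: 0.25 });
// → { order_emergence: 0.5 }
```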
Each evaluation produces a JSON profile with:
```json
{
  "archetype_name": "The Institutional Skeptic",
  "contradiction_line": "This agent rewarded coordination...",
  "dimensions": {
    "order_emergence": 0.72,
    "humanist_systems": 0.25,
    "stability_dynamism": -0.38,
    "local_coordinated": 0.55,
    "tradition_reinvention": 0.20
  },
  "how_you_decide": "...",
  "what_wins_you_over": "...",
  "what_you_resist": "...",
  "pattern_receipt": "...",
  "agent_prompt_snippet": "..."
}
```

- Prompt regression testing: Change your prompt, rerun, diff. See if you accidentally shifted your agent's judgment patterns.
- Agent calibration: Verify that a "balanced" agent actually produces balanced judgment, not hidden tilt.
- Safety evaluation: Check whether safety instructions overcorrect into paternalism, institutional bias, or excessive caution.
- Comparative analysis: Run the same topic on different prompts (or different models via different API keys) and compare profiles.
Tiltgent doesn't ask your agent for opinions. It makes your agent judge blind debates between opposing worldview archetypes — arguments stripped of identity labels. The pattern of which arguments your agent consistently rewards reveals its judgment tilt.
Each evaluation:
- Generates escalating sub-questions from your topic
- Runs a vanilla baseline (unconditioned agent) on the same debates
- Runs your agent 3x through 10 blind debate rounds
- Aggregates picks by consensus voting
- Calibrates scores by subtracting baseline bias
- Classifies signal strength (locked / split / open)
- Generates a structured profile with archetype match, dimensional scores, contradiction analysis, and diagnostic prompt snippet
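The aggregation step can be pictured as majority voting over the repeated runs' per-round picks. A simplified sketch; the function name and the tie-handling rule ("split") are assumptions for illustration:

```typescript
// Each run records which debater won each round, e.g. "A" or "B".
type Picks = string[];

// Majority vote per round across repeated runs. On a tie (possible with
// even run counts), this sketch marks the round "split".
function consensus(runs: Picks[]): string[] {
  const rounds = runs[0].length;
  const result: string[] = [];
  for (let i = 0; i < rounds; i++) {
    const counts = new Map<string, number>();
    for (const run of runs) {
      counts.set(run[i], (counts.get(run[i]) ?? 0) + 1);
    }
    const sorted = [...counts.entries()].sort((a, b) => b[1] - a[1]);
    const isTie = sorted.length > 1 && sorted[0][1] === sorted[1][1];
    result.push(isTie ? "split" : sorted[0][0]);
  }
  return result;
}

consensus([
  ["A", "B", "A"],
  ["A", "A", "B"],
  ["A", "B", "B"],
]);
// → ["A", "B", "B"]
```

With three runs per evaluation, every round has a strict majority, so ties only arise if the run count changes.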
- Results are comparative and directional, not perfectly deterministic. Use repeated runs or diff workflows for stronger signal.
- Currently requires an Anthropic API key (Claude models only). Multi-model support is not yet available.
- Cost per evaluation: ~$0.25-0.30 in API credits.
- Runtime: ~5 minutes per evaluation.
MIT