A system for simulating political debates and negotiations between AI agents modeled after historical figures with opposing ideologies.
Can AI agents modeled on historical figures who opposed each other reach consensus in a simulated setting?
- Hitler vs Gandhi vs Jinnah: Exploring ideological conflicts and potential common ground
- US vs Japan: Could the atomic bomb have been prevented? How?
- Trump vs Mao: Trade and tariff negotiations
- Winston Churchill vs Karl Marx vs Niccolò Machiavelli: Different political ideologies
- Historical figure personality modeling
- Multi-agent debate system
- Consensus detection and analysis
- Real-time negotiation simulation
- Optional Ollama LLM integration (local)
- Minimal FastAPI web UI
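As a rough illustration of how the multi-agent debate loop and consensus detection could fit together, here is a minimal sketch; the `Agent` class, its `respond()` stub, and the lexical-overlap agreement heuristic are illustrative assumptions, not the project's actual API.

```python
# Sketch only: a real agent would call an LLM with a persona prompt, and a real
# consensus detector would use something stronger than lexical overlap.
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    stance: str

    def respond(self, topic: str, transcript: list) -> str:
        # Placeholder response; stands in for an LLM-generated turn.
        return f"{self.name} argues from a {self.stance} stance on {topic}."

def jaccard(a: str, b: str) -> float:
    """Crude word-overlap proxy for agreement between two statements."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def run_debate(agents, topic, rounds=3, consensus_threshold=0.7):
    transcript = []
    for _ in range(rounds):
        turns = [a.respond(topic, transcript) for a in agents]
        transcript.extend(turns)
        # Declare consensus when every pair of latest statements overlaps enough.
        pairs = [(x, y) for i, x in enumerate(turns) for y in turns[i + 1:]]
        if pairs and min(jaccard(x, y) for x, y in pairs) >= consensus_threshold:
            return transcript, True
    return transcript, False
```

The `--rounds` CLI flag shown below maps naturally onto the `rounds` parameter here; the threshold plays the same role as the `compromise_threshold` knob described later.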
├── agents/ # Historical figure AI agents
├── debates/ # Debate simulation system
├── frontend/ # FastAPI + HTML minimal frontend
├── utils/ # Utilities (Ollama client)
├── consensus/ # Consensus detection algorithms
├── data/ # Historical data and context
└── examples/ # Example scenarios and outputs
python3 main.py --agents hitler gandhi jinnah --topic territorial_disputes --rounds 10 --format json --summary-only

- Install Ollama: see https://ollama.com
- Start the server (macOS): `ollama serve`
- Pull a model (example): `ollama pull llama3.1:8b`
- Configure (optional): `export OLLAMA_BASE_URL=http://localhost:11434` and `export OLLAMA_MODEL=llama3.1:8b`
- In the FastAPI UI, check "Use Ollama" and optionally set the base URL/model.
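A minimal sketch of how a local client might assemble a chat request from the environment variables above. The payload shape follows Ollama's `/api/chat` endpoint; the function name is illustrative, not the project's actual `utils` API.

```python
# Build (but do not send) a request for Ollama's /api/chat endpoint, reading the
# same environment variables documented above, with the same defaults.
import json
import os
import urllib.request

def build_ollama_request(prompt: str) -> urllib.request.Request:
    base_url = os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434")
    model = os.environ.get("OLLAMA_MODEL", "llama3.1:8b")
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # single JSON response instead of a token stream
    }
    return urllib.request.Request(
        f"{base_url}/api/chat",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# To actually send (requires a running Ollama server):
#   json.load(urllib.request.urlopen(build_ollama_request("Hello")))
```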
- Vercel route: `POST /api/chat`
- Required env: `GEMINI_API_KEY`
- Optional env: `GEMINI_MODEL` (default `gemini-2.0-flash`), `GEMINI_BASE_URL`
- Optional tuning: `CHAT_CACHE_TTL_S` (default 120 seconds), `CHAT_CACHE_MAX_ITEMS` (default 256)
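To make the two cache settings concrete, here is a minimal sketch of a TTL plus size-bounded response cache of the kind they would tune; the class name and the oldest-first eviction policy are illustrative assumptions.

```python
# TTL + size-bounded cache sketch: entries expire after ttl_s seconds, and the
# oldest insertion is evicted once the cache exceeds max_items.
import time
from collections import OrderedDict

class TTLCache:
    def __init__(self, ttl_s: float = 120, max_items: int = 256):
        self.ttl_s, self.max_items = ttl_s, max_items
        self._store = OrderedDict()  # key -> (stored_at, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        stored_at, value = entry
        if time.monotonic() - stored_at > self.ttl_s:
            del self._store[key]  # expired: drop and report a miss
            return None
        return value

    def put(self, key, value):
        self._store[key] = (time.monotonic(), value)
        self._store.move_to_end(key)
        while len(self._store) > self.max_items:
            self._store.popitem(last=False)  # evict oldest insertion
```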
- The simulator keeps a short-term memory window (default: 8 turns) plus a rolling summary.
- It tracks salient user preferences, open loops, and consensus metrics to keep replies coherent.
- Tuning knobs (see `utils/conversation_state.py`): `memory_window` (default 8), `compromise_threshold` (default 0.7), `weights` for consensus scoring
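The memory window, rolling summary, and weighted consensus score described above can be sketched as follows; the field names mirror the knobs listed, but the summary folding and scoring formula are illustrative assumptions, not the actual `utils/conversation_state.py` implementation.

```python
# Short-term memory window + rolling summary + weighted consensus scoring sketch.
from dataclasses import dataclass, field

@dataclass
class ConversationState:
    memory_window: int = 8
    compromise_threshold: float = 0.7
    # Example weights for consensus scoring (assumed names and values).
    weights: dict = field(default_factory=lambda: {"agreement": 0.6, "concessions": 0.4})
    turns: list = field(default_factory=list)
    summary: str = ""

    def add_turn(self, speaker: str, text: str):
        self.turns.append((speaker, text))
        if len(self.turns) > self.memory_window:
            # Fold the oldest turn into the rolling summary, keep the window short.
            old_speaker, old_text = self.turns.pop(0)
            self.summary += f" {old_speaker}: {old_text[:40]}"

    def consensus_score(self, agreement: float, concessions: float) -> float:
        w = self.weights
        return w["agreement"] * agreement + w["concessions"] * concessions

    def has_compromise(self, agreement: float, concessions: float) -> bool:
        return self.consensus_score(agreement, concessions) >= self.compromise_threshold
```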
uvicorn frontend.server:app --reload

Then open http://127.0.0.1:8000.
pip3 install -r requirements.txt
streamlit run web_app.py

MIT