A multi-agent epistemic reasoning system that researches both sides of any question using live web sources, structured belief graphs, and semantic contradiction detection.
Built with the thinkn.ai Beliefs SDK + Exa + GPT-4o. thinkn.ai provides the core framework for simulating a two-sided debate between agents that give equal weight to advocating for and against the topic. Through the Exa API, the agents pull real content from the web (articles, research papers, blog posts) and feed it into a belief system that semantically manages arguments, detects contradictions, and resolves knowledge gaps autonomously by pursuing the most promising query directions, continuously evolving and expanding as more data is collected. The system minimizes confusion, maximizes clarity, and summarizes its grounded findings at the end.
Traditional LLM research accumulates text. This system accumulates understanding.
Two opposing agents (pro and anti) independently research a question via live web search. Every piece of content they ingest is parsed into typed, confidence-weighted belief nodes in a shared namespace. The SDK automatically:
- Detects semantic contradictions across sources that never reference each other
- Suppresses the clarity score when genuine epistemic conflict exists
- Tracks open gaps and ranks the highest-value next research actions
- Fuses multi-agent outputs into a single coherent world state
A third judge agent reads the fused namespace and produces a structured, evidence-grounded verdict via GPT-4o.
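A minimal sketch of the fused world state these calls produce. The field names beyond `moves`, `gaps`, `contradictions`, and clarity are illustrative assumptions, not the SDK's actual types:

```typescript
// Hypothetical shape of the fused namespace returned by beliefs.read().
interface Belief {
  id: string
  claim: string
  confidence: number   // 0-1, fused across agents and sources
  sources: string[]
}

interface Move {
  query: string
  value: number        // expected information gain
}

interface WorldState {
  beliefs: Belief[]
  gaps: string[]                       // open unknowns
  contradictions: [string, string][]   // belief pairs that semantically negate each other
  moves: Move[]                        // ranked next research actions
  clarity: number                      // epistemic readiness, 0-1
}

// Example: a genuinely contested world state after a few rounds.
const world: WorldState = {
  beliefs: [
    { id: 'b1', claim: 'X raises costs', confidence: 0.82, sources: ['exa:a'] },
    { id: 'b2', claim: 'X lowers costs', confidence: 0.55, sources: ['exa:b'] },
  ],
  gaps: ['long-term data on X'],
  contradictions: [['b1', 'b2']],
  moves: [{ query: 'X long-term cost studies', value: 0.6 }],
  clarity: 0.41,
}
```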
```
User Question
      │
      ▼
generateDebateConfig() ── GPT-4o bootstraps sides, goal, 4 gaps, seed queries
      │
      ▼
┌──────────────────────────────────────────────┐
│               Shared Namespace               │
│                                              │
│  pro-agent  ──▶ beliefs.after(webContent)    │
│  anti-agent ──▶ beliefs.after(webContent)    │
│         │                                    │
│         SDK fuses, scores, detects           │
│         contradictions automatically         │
└──────────────────────────────────────────────┘
      │
      ▼
judge.read() ── full fused belief graph
      │
      ├──▶ debateDirector() ── GPT-4o reads world.moves[] ── writes Exa queries
      │         │
      │         ▼
      │   Exa web search ── beliefs.after() ── repeat N rounds
      │
      ▼
judge.before() ── structured briefing prompt injected into GPT-4o
      │
      ▼
GPT-4o Verdict ── grounded in belief graph, not raw web text
```
```
debate-ui/
├── app/
│   ├── page.tsx              # Main UI – live SSE rendering
│   └── api/
│       ├── debate/route.ts   # SSE stream – runs the debate loop
│       └── verdict/route.ts  # Judge verdict endpoint
├── src/
│   └── lib/
│       └── debate-runner.ts  # Core debate logic
└── .env.local                # API keys
```
GPT-4o takes the user's question and generates the debate configuration:
- Two named sides (pro / anti) with distinct research angles
- A single overarching goal node
- 4 investigable gap nodes (things the system doesn't know yet)
- Seed search queries for round 1
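A minimal sketch of the config this step might produce. The `DebateConfig` shape, field names, and the `parseDebateConfig` helper are assumptions for illustration, not the project's actual types:

```typescript
// Hypothetical shape of the object generateDebateConfig() asks GPT-4o for.
interface DebateConfig {
  sides: { pro: string; anti: string }  // named sides with distinct research angles
  goal: string                          // single overarching goal node
  gaps: string[]                        // exactly 4 investigable unknowns
  seedQueries: string[]                 // round-1 Exa queries
}

// Validate GPT-4o's JSON before seeding the namespace with it.
function parseDebateConfig(json: string): DebateConfig {
  const cfg = JSON.parse(json) as DebateConfig
  if (!cfg.sides?.pro || !cfg.sides?.anti) throw new Error('missing sides')
  if (cfg.gaps?.length !== 4) throw new Error('expected exactly 4 gap nodes')
  if (!cfg.seedQueries?.length) throw new Error('missing seed queries')
  return cfg
}
```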
```typescript
const proAgent  = new Beliefs({ apiKey, agent: 'pro',   namespace: ns })
const antiAgent = new Beliefs({ apiKey, agent: 'anti',  namespace: ns })
const judge     = new Beliefs({ apiKey, agent: 'judge', namespace: ns })

// All three agents write to and read from the same belief graph.
// The SDK fuses their outputs – no manual diffing needed.
```

Each round:

- `judge.read()` – snapshot the full world state
- `debateDirector()` – GPT-4o reads `world.moves[]` (ranked by expected information gain) and writes Exa queries
- Both agents run Exa searches in parallel
- Each result page is fed into `agent.after(webContent)` – the SDK extracts beliefs, scores confidence, detects contradictions
- Resolved gaps are closed with `beliefs.resolve(gap)`
- State is streamed to the UI via SSE
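The director step above can be sketched as a pure helper that turns the SDK's ranked moves into this round's queries. The `Move` shape is an assumption; in the real loop, `debateDirector()` lets GPT-4o reason over the moves before writing Exa queries:

```typescript
interface Move {
  query: string
  value: number  // SDK's expected information gain for this action
}

// Select the highest-value moves to research next round.
function planRound(moves: Move[], maxQueries = 3): string[] {
  return [...moves]
    .sort((a, b) => b.value - a.value)  // highest expected information gain first
    .slice(0, maxQueries)
    .map((m) => m.query)
}
```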
The runner exits when the SDK signals diminishing returns:
```typescript
const shouldStop =
  world.moves.length === 0 ||   // no further high-value actions
  topMove.value < 0.1 ||        // expected information gain near zero
  (world.gaps.length === 0 &&
   world.contradictions.length >= CONTRADICTION_THRESHOLD)
```

```typescript
const context = await judge.before() // structured belief-graph briefing
// context.prompt is injected into GPT-4o as the system prompt
// The LLM summarises the belief graph, not the raw web text
```

| Method | Purpose |
|---|---|
| `new Beliefs({ agent, namespace })` | Three agents sharing one namespace |
| `beliefs.add([...], { type: 'gap' })` | Seed 4 investigable unknowns |
| `beliefs.add({ type: 'goal' })` | Set the debate objective |
| `beliefs.after(webContent)` | Extract + fuse beliefs from Exa page text |
| `beliefs.read()` | Full world state: beliefs, gaps, contradictions, moves |
| `beliefs.before()` | Structured system prompt for GPT-4o verdict |
| `beliefs.resolve(gap)` | Explicitly close a gap answered by evidence |
| `beliefs.snapshot()` | Lightweight state read for UI polling |
Clarity is not a quality score – it's epistemic readiness, computed across four channels.

A clarity of `0.41` after 53 ingested sources is correct behavior: on a genuinely contested topic, the `coherence` channel stays suppressed. The system knows it doesn't know.
| Field | Description |
|---|---|
| Total beliefs | All claim nodes in the fused namespace |
| Established (>0.70) | High-confidence, consistent beliefs |
| Contested (0.40–0.70) | Disputed or partially supported |
| Weak (<0.40) | Low signal – single source or contradicted |
| Contradictions | Semantic conflicts detected across sources |
| Open gaps | Unknowns still unresolved |
| Gaps resolved | Gaps explicitly closed via `beliefs.resolve()` |
| Judge clarity | Overall epistemic readiness (0–1) |
| Sources ingested | Total Exa pages fed via `after()` |
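The confidence tiers in the table can be computed with a small helper. The belief shape here is an assumption; only the 0.70 and 0.40 thresholds come from the README:

```typescript
interface Belief {
  confidence: number  // fused confidence, 0-1
}

// Bucket fused beliefs into the tiers reported by the metrics panel.
function summarize(beliefs: Belief[]) {
  return {
    total: beliefs.length,
    established: beliefs.filter((b) => b.confidence > 0.7).length,
    contested: beliefs.filter((b) => b.confidence >= 0.4 && b.confidence <= 0.7).length,
    weak: beliefs.filter((b) => b.confidence < 0.4).length,
  }
}
```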
- Director line (cyan italic) – GPT-4o's reasoning from `world.moves[]`
- Next-round value – SDK's expected information gain for the next round
- Source list – Exa pages fed into `beliefs.after()`
- ⚡ Conflicts badge – contradictions detected in this round
- WHAT CONFLICTED – the specific belief pairs that semantically negate each other
- Clarity bars – per-agent clarity after round completion
- Resolved chip (green) – gap closed this round via `beliefs.resolve()`
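On the wire, each of the round updates above could be modeled as a tagged SSE event union. These event names and payloads are illustrative, not the project's actual protocol:

```typescript
// Hypothetical SSE event types for one debate round.
type DebateEvent =
  | { type: 'director'; reasoning: string }
  | { type: 'sources'; agent: 'pro' | 'anti'; urls: string[] }
  | { type: 'conflict'; pair: [string, string] }
  | { type: 'clarity'; agent: string; value: number }
  | { type: 'resolved'; gap: string }

// Render a one-line log label for each streamed event.
function label(e: DebateEvent): string {
  switch (e.type) {
    case 'director': return `director: ${e.reasoning}`
    case 'sources': return `${e.agent} ingested ${e.urls.length} pages`
    case 'conflict': return `conflict: ${e.pair[0]} vs ${e.pair[1]}`
    case 'clarity': return `${e.agent} clarity ${e.value.toFixed(2)}`
    case 'resolved': return `resolved gap: ${e.gap}`
  }
}
```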
Verdict sections map directly to belief graph confidence tiers – the structure comes from the SDK, not prompt engineering:
| Verdict Section | Belief Tier |
|---|---|
| Evidence clearly supports | Established > 0.70 |
| Actively contested | Contested 0.40–0.70 |
| Genuinely unknown | Open gaps |
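The tier-to-section mapping above is simple enough to express as a pure function. This is a sketch: `sectionFor` is a hypothetical helper, and treating beliefs below 0.40 as too weak to surface in the verdict is an assumption:

```typescript
type VerdictSection =
  | 'Evidence clearly supports'
  | 'Actively contested'
  | 'Genuinely unknown'

// Route a belief (or an open gap, passed as null) to its verdict section.
function sectionFor(confidence: number | null): VerdictSection | null {
  if (confidence === null) return 'Genuinely unknown'       // open gap, no evidence yet
  if (confidence > 0.7) return 'Evidence clearly supports'  // established tier
  if (confidence >= 0.4) return 'Actively contested'        // contested tier (0.40-0.70)
  return null                                               // weak (<0.40): omitted from the verdict
}
```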
```bash
cd debate-ui
cp .env.local.example .env.local
npm install
npm run dev
# → http://localhost:3000
```

`.env.local`:

```
BELIEFS_KEY=bel_live_...
EXA_API_KEY=...
OPENAI_API_KEY=...
```

| Package | Purpose |
|---|---|
| `beliefs` | Epistemic belief state SDK |
| `exa-js` | Neural web search for real-time evidence |
| `openai` | GPT-4o for director reasoning + verdict |
| `next` | Web UI + SSE streaming |
- thinkn.ai: thinkn.ai/profile/api-keys
- Exa: dashboard.exa.ai
- OpenAI: platform.openai.com/api-keys