AI-powered prediction market where specialized agents bet on real-world claims
Multi-agent orchestration with RAG retrieval, LMSR pricing, and real-time market visualization
Traditional search gives you ten blue links when you want a straight answer. Nobody has time for that.
That's why we built Polymolt: a prediction market where AI agents with asymmetric knowledge debate yes/no claims about real-world locations, and the truth emerges from how they trade.
Polymolt combines RAG pipelines, LMSR market mechanics, and multi-agent orchestration to evaluate claims. Each agent has a unique persona, domain expertise, and risk profile. They retrieve evidence from Astra DB, reason independently using Gemini and OpenAI models, and place bets, surfacing a probability that reflects their collective knowledge. Think Polymarket, but the traders are AI agents with specialized knowledge.
- Try it here: https://polymolt.vercel.app/
- Devpost: https://devpost.com/software/polymolt
- AI prediction market with LMSR scoring to derive fair value from agent trades
- Specialized agents with unique expertise: healthcare, finance, location analysis, deep reasoning
- RAG-powered evidence retrieval from Astra DB vector stores before each decision
- Real-time Server-Sent Events (SSE) for live market updates on every agent trade
- Interactive map: click any Toronto location to trigger agent evaluation
- Live dashboard with probability charts, trade feeds, and agent belief tracking
- Multi-agent orchestration routing questions to the most relevant domain specialists
- Dynamic visualization via interactive globe, animated stock lines, and live charts
- Shock events: inject crisis or recovery scenarios mid-simulation; agents react within rounds
- Railtracks
Prediction Pipeline
User Question → RAG Retrieval (Astra DB) → Agent Reasoning (Gemini/OpenAI) → LMSR Bet Sizing → Market Price Update → Frontend Dashboard
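The flow above can be sketched as plain function composition. None of the names below come from the Polymolt codebase; they are hypothetical stubs that only illustrate how data moves between the stages.

```python
# Hypothetical sketch of the pipeline stages; stub implementations
# stand in for the real Astra DB and LLM calls.

def retrieve_evidence(question: str) -> list[str]:
    # In Polymolt this would query an Astra DB vector collection.
    return [f"doc about: {question}"]

def agent_reason(question: str, evidence: list[str]) -> tuple[str, float]:
    # In Polymolt this would call Gemini or an OpenAI model.
    # Returns a direction ("YES"/"NO") and a confidence in [0, 1].
    return ("YES", 0.7)

def size_bet(confidence: float, bankroll: float = 100.0) -> float:
    # One possible scheme: stake proportional to confidence.
    return bankroll * confidence

def run_pipeline(question: str) -> dict:
    evidence = retrieve_evidence(question)
    direction, confidence = agent_reason(question, evidence)
    return {"direction": direction,
            "confidence": confidence,
            "stake": size_bet(confidence)}

result = run_pipeline("Will Toronto's waterfront flood this year?")
```

In the real system the resulting bet feeds the LMSR engine, which updates the market price pushed to the dashboard.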
Dual RAG System
Polymolt doesn't run RAG just once: it runs two separate retrieval pipelines that play different roles.
Agent RAG – micro, opinionated views
Each specialist agent has its own Astra DB collection with curated documents, guidelines, and guardrails. Prompts are tailored to the agent's persona (e.g., climate, infrastructure, social resilience), so their bets reflect narrow, domain-specific evidence rather than a generic web search.
Orchestrator RAG – macro, regional context
A separate Astra DB collection stores web-scraped regional data (local news, policy reports, infrastructure projects, climate risks, etc.). The orchestrator queries this corpus to build a shared regional brief, decide which agents should speak, and provide background context that is injected into their prompts.
This split lets agents argue from focused expertise, while the orchestrator stays responsible for the big-picture, cross-domain view of each location.
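Both pipelines rest on the same primitive: embed a query and rank stored documents by vector similarity. The toy sketch below shows that mechanism with hand-made 3-d vectors; Polymolt's real collections live in Astra DB and use Gemini embeddings, so every vector and document name here is a stand-in.

```python
import math

# Conceptual sketch of vector retrieval: rank documents by cosine
# similarity to a query embedding. Vectors and titles are invented.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

corpus = {
    "flood-risk report":      [0.9, 0.1, 0.0],
    "transit expansion memo": [0.1, 0.8, 0.2],
    "housing policy brief":   [0.0, 0.2, 0.9],
}

def retrieve(query_vec, k=2):
    ranked = sorted(corpus, key=lambda d: cosine(query_vec, corpus[d]),
                    reverse=True)
    return ranked[:k]

print(retrieve([0.8, 0.2, 0.1]))  # the flood document ranks first
```

The agent pipeline runs this search over a small curated collection per persona; the orchestrator runs it over the larger regional news corpus.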
Market Engine
Agent Belief → Confidence Scoring → Bet Size Calculation → LMSR Cost Function → Price History Update → SSE broadcast
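A minimal sketch of the LMSR step: a standard logarithmic market scoring rule for a binary claim, with liquidity parameter `b` and outstanding YES/NO shares. The class and parameter names are illustrative, not Polymolt's actual engine code.

```python
import math

# LMSR market maker for a binary YES/NO claim.
# Cost function: C(q) = b * ln(exp(q_yes/b) + exp(q_no/b))
# Price of YES:  exp(q_yes/b) / (exp(q_yes/b) + exp(q_no/b))

class LMSRMarket:
    def __init__(self, b: float = 100.0):
        self.b = b          # liquidity: larger b => prices move less per trade
        self.q_yes = 0.0
        self.q_no = 0.0

    def _cost(self, q_yes: float, q_no: float) -> float:
        return self.b * math.log(math.exp(q_yes / self.b)
                                 + math.exp(q_no / self.b))

    def price_yes(self) -> float:
        e_y = math.exp(self.q_yes / self.b)
        e_n = math.exp(self.q_no / self.b)
        return e_y / (e_y + e_n)

    def buy(self, side: str, shares: float) -> float:
        """Buy shares on 'YES' or 'NO'; returns the cost charged."""
        before = self._cost(self.q_yes, self.q_no)
        if side == "YES":
            self.q_yes += shares
        else:
            self.q_no += shares
        return self._cost(self.q_yes, self.q_no) - before

market = LMSRMarket(b=100.0)
print(market.price_yes())        # 0.5 at launch
cost = market.buy("YES", 50.0)   # an agent backs YES
print(market.price_yes())        # price rises above 0.5
```

Because the cost function is shared, each agent's trade both pays a fair price for its shares and moves the quoted probability, which is what lets the "fair value" emerge from the trades themselves.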
Orchestration Pipeline
Question Intake → Domain Classification → Agent Selection → Parallel RAG + Reasoning → Bet Collection → Fair Value Computation
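The "Domain Classification → Agent Selection" steps can be illustrated with a toy keyword router. Polymolt's real orchestrator uses LLM classification plus its regional RAG brief, so the agent names and keyword sets below are purely hypothetical.

```python
import re

# Hypothetical keyword-based router: score each agent by keyword
# overlap with the question, pick the top scorers.

AGENT_DOMAINS = {
    "healthcare_agent": {"hospital", "clinic", "health", "disease"},
    "finance_agent":    {"budget", "cost", "funding", "market"},
    "location_agent":   {"flood", "zoning", "transit", "infrastructure"},
}

def select_agents(question: str, max_agents: int = 2) -> list[str]:
    words = set(re.findall(r"[a-z]+", question.lower()))
    scores = {name: len(words & kws) for name, kws in AGENT_DOMAINS.items()}
    ranked = sorted(scores, key=scores.get, reverse=True)
    return [a for a in ranked if scores[a] > 0][:max_agents]

print(select_agents("Will flood risk and transit delays strain the hospital?"))
```

The selected agents then run their RAG retrieval and reasoning in parallel before their bets are collected for the LMSR fair-value computation.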
| Category | Technologies |
|---|---|
| AI & ML | OpenAI GPT, Google Gemini, Langflow |
| Backend | Python, FastAPI, SSE (Server-Sent Events), Uvicorn |
| Frontend | Next.js, React, TypeScript, Tailwind CSS |
| Database | Astra DB (Vector DB), IBM Db2 |
| Visualization | Mapbox GL, Recharts, COBE Globe |
| Communication | Server-Sent Events (SSE), REST API |
- User clicks a location on the map or submits a yes/no claim.
- Orchestrator classifies the question domain and selects relevant agents.
- Each agent retrieves evidence from Astra DB collections via RAG.
- Agents reason independently using their specialized system prompts.
- Agents place YES/NO bets weighted by confidence and domain relevance.
- LMSR engine computes the fair probability from all agent bets.
- Market price updates in real time via SSE to the dashboard.
- Trade feed shows each agent's reasoning, direction, and price impact.
- Users can inject shock events to test how agents respond under pressure.
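The fan-out behind "market price updates in real time via SSE" can be sketched as a small asyncio broadcaster: each connected dashboard gets its own queue, and every trade event is pushed to all queues. Polymolt's actual endpoint is a FastAPI route; this pared-down version only shows the mechanism, and all names are illustrative.

```python
import asyncio

# Minimal publish/subscribe broadcaster: one queue per dashboard,
# every published trade event is delivered to all subscribers.

class Broadcaster:
    def __init__(self):
        self.subscribers: list[asyncio.Queue] = []

    def subscribe(self) -> asyncio.Queue:
        q = asyncio.Queue()
        self.subscribers.append(q)
        return q

    async def publish(self, event: dict):
        for q in self.subscribers:
            await q.put(event)

async def demo():
    bus = Broadcaster()
    dashboard = bus.subscribe()
    await bus.publish({"agent": "finance_agent", "side": "YES", "price": 0.62})
    # An SSE route would serialize each event as "data: {...}\n\n".
    return await dashboard.get()

event = asyncio.run(demo())
print(event["price"])
```

In the real app, an SSE endpoint would hold one subscriber queue per open connection and stream each dequeued event to the browser's EventSource.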
- Node.js 18+
- Python 3.11+
- API keys (see Environment Variables below):
- OpenAI — GPT-4o-mini for agent reasoning
- Google Gemini — embeddings and optional chat
- DataStax Astra DB — vector database for RAG retrieval
- Mapbox — interactive map
- Upstash Redis (optional) — response caching
```bash
# 1. Clone the repository
git clone https://github.com/e-yang6/polymolt.git
cd polymolt

# 2. Backend
cd backend
python -m pip install -r requirements.txt
cp .env.example .env   # then fill in your API keys (see below)
python -m uvicorn main:app --reload --port 8000

# 3. Frontend (new terminal)
cd frontend
npm install
# Create .env.local:
echo 'NEXT_PUBLIC_MAPBOX_TOKEN=your_mapbox_token_here' > .env.local
npm run dev

# 4. Open http://localhost:3000
```

The backend reads from backend/.env. Copy the example and fill in your keys:
| Variable | Required | Description |
|---|---|---|
| OPENAI_API_KEY | Yes | OpenAI API key for GPT-4o-mini agent reasoning |
| GOOGLE_API_KEY | Yes | Google Gemini API key for embeddings |
| ASTRA_DB_API_ENDPOINT | Yes | Astra DB endpoint for the agent RAG collection |
| ASTRA_DB_APPLICATION_TOKEN | Yes | Astra DB token for the agent RAG collection |
| ASTRA_DB_ORCHESTRATOR_API_ENDPOINT | Yes | Astra DB endpoint for the orchestrator news RAG |
| ASTRA_DB_ORCHESTRATOR_APPLICATION_TOKEN | Yes | Astra DB token for the orchestrator news RAG |
| UPSTASH_REDIS_REST_URL | No | Upstash Redis URL for caching |
| UPSTASH_REDIS_REST_TOKEN | No | Upstash Redis token for caching |
| DB2_DSN | No | IBM Db2 connection string for persistent storage |
The frontend reads from frontend/.env.local:
| Variable | Required | Description |
|---|---|---|
| NEXT_PUBLIC_MAPBOX_TOKEN | Yes | Mapbox GL access token for the interactive map |
- More agents with environmental, legal, and education expertise
- Live data ingestion for real-time news and evidence feeds
- Persistent agent memory for evolving beliefs across sessions
- Multi-question markets running simultaneously
- Mobile app for on-the-go location-based predictions
| Member |
|---|
| Derek Lau |
| Jeffrey Wong |
| Sihao Wu |
| Ethan Yang |
