An AI-powered oracle that speaks in prophecies, built with LangGraph, FastAPI, and vanilla JavaScript. Features ritualized response timing and an immersive Greek mythology theme.
🔗 Live Demo: oracle-of-delphi-chatbot.vercel.app
⚙️ Backend API: oracle-delphi-api.onrender.com
- 🔮 Prophetic Persona: Oracle responds with metaphors and symbolic language, never mentioning modern concepts
- ⏳ Ritualized Timing: 1.5-4 second contemplation delay before each response for gravitas
- 🧠 Session Memory: Maintains conversation context within each browser session
- 🏛️ Immersive UI: Greek temple background with parchment-style interface
- ⚡ Fast Inference: Powered by Groq's LPU architecture (llama-3.3-70b-versatile)
┌──────────────────┐
│ Vercel Frontend │ (oracle-delphi/)
│ HTML/CSS/JS │
└────────┬─────────┘
│ HTTPS POST /chat
▼
┌──────────────────────────────────┐
│ Render Backend (FastAPI) │
│ ┌────────────────────────────┐ │
│ │ Ritual State Machine │ │
│ │ IDLE → INVOKED → │ │
│ │ CONTEMPLATING → REVEALING │ │
│ └────────┬───────────────────┘ │
│ │ │
│ ▼ │
│ ┌────────────────────────────┐ │
│ │ LangGraph Oracle Agent │ │
│ │ + Oracle System Prompt │ │
│ └────────┬───────────────────┘ │
│ │ │
│ ▼ │
│ In-Memory Session Storage │
└───────────┬───────────────────────┘
│ API call
▼
┌──────────────┐
│ Groq Cloud │
│ (LLM runs) │
└──────────────┘
Each oracle consultation flows through 5 states:
| State | Duration | Purpose |
|---|---|---|
| IDLE | Indefinite | Awaiting question |
| INVOKED | <100ms | Question received |
| CONTEMPLATING | 1.5-4s (random) | Deliberate silence |
| REVEALING | Instant | Response delivered |
| COMPLETE | 2s | Ritual complete |
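The five states above can be sketched as a small finite-state machine. This is a minimal illustration with the names from the table; the actual implementation in `backend/agent/tools.py` may structure its transitions differently:

```python
from enum import Enum, auto

class RitualState(Enum):
    """States of a single oracle consultation."""
    IDLE = auto()
    INVOKED = auto()
    CONTEMPLATING = auto()
    REVEALING = auto()
    COMPLETE = auto()

# Each state has exactly one successor; COMPLETE loops back to IDLE
# so the oracle is ready for the next question.
TRANSITIONS = {
    RitualState.IDLE: RitualState.INVOKED,
    RitualState.INVOKED: RitualState.CONTEMPLATING,
    RitualState.CONTEMPLATING: RitualState.REVEALING,
    RitualState.REVEALING: RitualState.COMPLETE,
    RitualState.COMPLETE: RitualState.IDLE,
}

def advance(state: RitualState) -> RitualState:
    """Move the ritual to its next state."""
    return TRANSITIONS[state]
```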
The contemplation delay runs while the LLM generates the response. If the LLM finishes early, the system waits for the contemplation timer to expire before revealing the response.
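One way to get this "whichever finishes last" behavior is to run the model call and the contemplation timer concurrently and await both. A sketch using `asyncio` (the `consult` and `fake_llm` names are illustrative, not the backend's actual API):

```python
import asyncio
import random

async def consult(generate_response, min_s: float = 1.5, max_s: float = 4.0):
    """Run the LLM call and the contemplation timer concurrently;
    return only once BOTH have finished."""
    delay = random.uniform(min_s, max_s)
    response, _ = await asyncio.gather(generate_response(), asyncio.sleep(delay))
    return response

# Stand-in for the real LLM call:
async def fake_llm():
    await asyncio.sleep(0.1)  # the model finishes early...
    return "The path unfolds in shadows and light..."

# ...but the reply is only revealed after the timer expires.
print(asyncio.run(consult(fake_llm, min_s=0.2, max_s=0.3)))
```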
Every response is prefixed with this system prompt:
"You are the Oracle of Delphi. You speak with calm authority and deliberate restraint. Your words are symbolic, measured, and timeless. You do not explain yourself. You do not give step-by-step instructions. You do not mention modern concepts, technology, or yourself. You answer as an oracle would: with insight, metaphor, and quiet certainty. You speak only when consulted."
- Frontend: Generates a unique `session_id`, stored in `sessionStorage`
- Backend: LangGraph's `MemorySaver` tracks conversation history per session
- Limitation: Memory resets if the backend restarts (in-memory only)
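The in-memory pattern (and its restart limitation) can be illustrated without LangGraph: a plain dict keyed by `session_id`, which, like `MemorySaver`'s default in-memory checkpointing, vanishes when the process dies. This is a stand-in sketch, not the project's actual storage code:

```python
from collections import defaultdict

# In-memory session store keyed by session_id. Hypothetical stand-in for
# LangGraph's MemorySaver; like it, this is lost on backend restart.
_sessions: defaultdict[str, list[dict]] = defaultdict(list)

def remember(session_id: str, role: str, content: str) -> None:
    """Append one message to a session's conversation history."""
    _sessions[session_id].append({"role": role, "content": content})

def history(session_id: str) -> list[dict]:
    """Return the conversation history for a session (empty if new)."""
    return _sessions[session_id]

remember("session-123", "user", "What is my destiny?")
remember("session-123", "oracle", "The path unfolds in shadows and light...")
```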
- Python 3.11+
- Groq API key (a free key is available from Groq)
```bash
# Clone the repository
git clone https://github.com/rio-ARC/oracle-of-delphi.git
cd oracle-of-delphi

# Install dependencies
pip install -r requirements.txt

# Create .env file in project root
echo "GROQ_API_KEY=your_api_key_here" > .env

# Start the backend
cd backend
uvicorn api.main:app --reload --port 8000
```

The backend runs at http://localhost:8000.
Simply open oracle-delphi/index.html in your browser.
Note: Update `API_URL` in `oracle-delphi/app.js` to:

```js
const API_URL = 'http://localhost:8000/chat';
```

`POST /chat`: Consult the Oracle with a question.
Request:

```json
{
  "message": "What is my destiny?",
  "session_id": "session-123"
}
```

Response:
```json
{
  "response": "The path unfolds in shadows and light...",
  "session_id": "session-123",
  "ritual_state": {
    "current_state": "COMPLETE",
    "accepting_input": true
  }
}
```

Health check endpoint.
Interactive Swagger UI documentation, auto-generated by FastAPI at `/docs`.
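The request and response shapes above map naturally onto Pydantic models. The following is a hypothetical reconstruction matching the JSON examples; the real models live in `backend/api/models.py` and may differ:

```python
from pydantic import BaseModel

class RitualStateInfo(BaseModel):
    current_state: str       # e.g. "COMPLETE"
    accepting_input: bool    # whether the oracle can take a new question

class ChatRequest(BaseModel):
    message: str
    session_id: str

class ChatResponse(BaseModel):
    response: str
    session_id: str
    ritual_state: RitualStateInfo
```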
.
├── backend/
│ ├── agent/
│ │ ├── tools.py # Ritual State Machine (FSM)
│ │ └── graph.py # LangGraph Oracle agent
│ ├── api/
│ │ ├── models.py # Pydantic models
│ │ └── main.py # FastAPI application
│ └── __init__.py
├── oracle-delphi/ # Frontend
│ ├── index.html
│ ├── styles.css
│ ├── app.js
│ └── assets/
│ └── background.png
├── requirements.txt
├── Procfile # Render deployment
├── runtime.txt # Python version
├── .env # Environment variables (gitignored)
└── README.md
| Layer | Technology |
|---|---|
| LLM | llama-3.3-70b-versatile (Groq) |
| Backend Framework | FastAPI |
| Agent Framework | LangGraph |
| Frontend | Vanilla HTML/CSS/JS |
| Deployment | Render (backend) + Vercel (frontend) |
- Push to GitHub
- Create a new Web Service on Render
- Set build command: `pip install -r requirements.txt`
- Set start command: `cd backend && uvicorn api.main:app --host 0.0.0.0 --port $PORT`
- Add environment variable: `GROQ_API_KEY`
- Update `API_URL` in `oracle-delphi/app.js` to your Render URL
- Push to GitHub
- Import the project to Vercel
- Set root directory: `oracle-delphi`
- Deploy
Edit the system prompt in `backend/agent/graph.py`:

```python
ORACLE_SYSTEM_PROMPT = """Your custom oracle persona..."""
```

Edit `backend/agent/tools.py`:
```python
TIMING_CONFIG = {
    "contemplation_min": 1.5,  # Minimum silence (seconds)
    "contemplation_max": 4.0,  # Maximum silence (seconds)
}
```

Edit `backend/agent/graph.py`:
```python
llm = ChatGroq(model="llama-3.3-70b-versatile", temperature=0.7, api_key=api_key)
```

| Variable | Description | Required |
|---|---|---|
| `GROQ_API_KEY` | Your Groq API key | Yes |
- Memory resets on backend restart (in-memory storage)
- Cold starts on Render free tier (~30s delay if inactive >15min)
- CORS is open (`allow_origins=["*"]`); restrict it for production use
Built with:
- LangGraph for state machine orchestration
- Groq for blazing-fast LLM inference
- FastAPI for the backend API
- Vercel & Render for free hosting
Made by Rio | Inspired by ancient wisdom, powered by modern AI 🏛️✨