CONSIDER is an AI-powered reflection platform for structured disagreement on polarised topics.
Instead of forcing consensus, CONSIDER helps users:
- clarify their own position,
- engage with principled counterarguments, and
- reflect on where and why disagreement persists.
It is designed as a browser-based MVP with a modular LLM backend and post-conversation analysis outputs.
- Three-stage flow: Clarification → Conversation → Contemplation
- Configurable disagreement intensity for discussion behaviour
- AI-generated counter-position profile tailored to oppose the user's stated stance
- Optional specific focus — ground the discussion in a concrete case rather than an abstract topic
- Structured post-conversation analysis:
  - key disagreements and agreements
  - potential common ground
  - nature and root of the disagreement
  - moral foundations scoring (Haidt)
  - epistemic humility rating
- Reviewable discussion transcript inline on the results page
- Multi-provider LLM support (GPT, Llama from TogetherAI & OpenRouter)
- Frontend + backend separation for easy iteration
| Layer | Technology |
|---|---|
| Frontend | React + Vite |
| Styling | Tailwind CSS + Framer Motion |
| Backend | Node.js + Express |
| Database | MongoDB |
| Chat model | OpenAI (gpt-4o-mini) or Llama (Llama-3.3-70B) - set to Llama in this repo |
| Analysis model | Claude (claude-sonnet-4-5) via OpenRouter + gpt-4o for extraction |
```
CONSIDER/
├── frontend/                    # React/Vite client
│   ├── src/
│   │   ├── App.jsx              # Routes
│   │   ├── api.js               # Axios API calls
│   │   ├── Home.jsx             # Landing page
│   │   ├── Access.jsx           # Password gate
│   │   ├── DebateIntro.jsx      # Intro screen
│   │   ├── RVDSelect.jsx        # Topic selection
│   │   ├── toggle.jsx           # Disagreeability + focus config
│   │   ├── CurrentPosition.jsx  # Position clarification chat
│   │   ├── Chat.jsx             # Discussion chat interface
│   │   ├── Results.jsx          # Post-conversation analysis
│   │   └── components/
│   │       └── chat/
│   │           ├── ChatInput.jsx
│   │           └── MessageBubble.jsx
│   └── package.json
│
└── backend/                     # Express API + model orchestration
    ├── server.js                # Entry point, middleware, route mounting
    ├── connect.js               # MongoDB connection
    ├── chatRoutes.js            # /chat and /generate-profile endpoints
    ├── prompts.js               # Prompt builders
    ├── analysisRoutes.js        # /analyze-conversation endpoint
    ├── analysisPrompt.js        # Analysis prompt builders
    ├── config.env.example       # Environment variable template
    └── package.json
```
```
/access            → password gate
/                  → landing page
/debate-intro      → explains the format
/select-rvd        → user picks a topic
/toggle            → sets disagreeability level + optional specific focus
/current-position  → clarification chat (AI extracts user's position)
/play              → disagreement chat (AI argues the opposing side)
/results           → full post-conversation analysis
```
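The linear flow above can be sketched as a simple ordered route list. This helper is purely illustrative: the route paths come from the list above, but the `FLOW` array and `nextRoute` function are hypothetical and not part of `App.jsx`.

```javascript
// Hypothetical sketch of the linear user flow. Route paths match the
// list above; the helper itself is illustrative, not real app code.
const FLOW = [
  "/access",
  "/",
  "/debate-intro",
  "/select-rvd",
  "/toggle",
  "/current-position",
  "/play",
  "/results",
];

// Return the next route in the flow, or null once the end is reached
// (or the current route is not part of the flow).
function nextRoute(current) {
  const i = FLOW.indexOf(current);
  if (i === -1 || i === FLOW.length - 1) return null;
  return FLOW[i + 1];
}

console.log(nextRoute("/toggle")); // → "/current-position"
```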
- Node.js
- A MongoDB Atlas account (or local MongoDB instance)
- API keys for at least one LLM provider (see below)
```bash
git clone https://github.com/BillyhfPsyc/CONSIDER.git
cd CONSIDER
```

```bash
cd backend
npm install
```

Copy the environment variable template:

```bash
cp config.env.example config.env
```

Then fill in `config.env`:
```env
# MongoDB
ATLAS_URI=mongodb+srv://<user>:<password>@cluster.mongodb.net/Consider

# LLM provider for chat ('openai' or 'together')
LLM_PROVIDER=openai

# OpenAI (required if LLM_PROVIDER=openai)
OPENAI_API_KEY=sk-...

# Together AI (required if LLM_PROVIDER=together)
TOGETHER_API_KEY=...

# OpenRouter (required for analysis — always used regardless of LLM_PROVIDER)
OPENROUTER_API_KEY=...

# Server
PORT=3001

# CORS — set to your frontend's origin
FRONTEND_ORIGIN=http://localhost:5173
```

Start the backend:

```bash
npm start
```

The server will run on http://localhost:3001 by default.
```bash
cd frontend
npm install
```

Create a `.env.local` file:

```env
VITE_API_URL=http://localhost:3001
VITE_SITE_PASSWORD=your-chosen-password
```

Start the frontend:

```bash
npm run dev
```

The app will be available at http://localhost:5173.
| Method | Endpoint | Description |
|---|---|---|
| POST | `/chat` | Send a message in either clarification or disagreement context |
| POST | `/generate-profile` | Generate a fictional AI opponent profile based on the user's position |

| Method | Endpoint | Description |
|---|---|---|
| POST | `/analyze-conversation` | Run full post-conversation analysis on a completed discussion |

| Method | Endpoint | Description |
|---|---|---|
| GET | `/health` | Health check (used by Render keepalive) |
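A request to `/chat` carries the conversation as JSON. The field names below are illustrative only, not the actual schema; see `chatRoutes.js` for the fields the backend really expects.

```javascript
// Hypothetical /chat request payload. Field names ("mode", "messages")
// are illustrative; consult chatRoutes.js for the real schema.
const payload = {
  mode: "disagreement", // or "clarification"
  messages: [
    { role: "user", content: "I think X because Y." },
  ],
};

// The frontend would POST this as a JSON body (e.g. via api.js).
const body = JSON.stringify(payload);
console.log(JSON.parse(body).mode); // → "disagreement"
```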
CONSIDER is designed to work with multiple LLM providers:
Chat (discussion + clarification):
- `openai` — uses `gpt-4o-mini`
- `together` — uses `meta-llama/Llama-3.3-70B-Instruct-Turbo`

Switch between them with the `LLM_PROVIDER` environment variable.
Analysis: Always routed via OpenRouter. Uses:
- `anthropic/claude-sonnet-4-5` for philosophical analysis
- `openai/gpt-4o` for structured data extraction
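The provider switch for chat can be sketched as a small lookup. The model names come from the list above; the `chatModel` helper itself is hypothetical, not code from the backend.

```javascript
// Map LLM_PROVIDER to the chat model named above. The analysis models
// are fixed and always routed via OpenRouter, regardless of this setting.
const CHAT_MODELS = {
  openai: "gpt-4o-mini",
  together: "meta-llama/Llama-3.3-70B-Instruct-Turbo",
};

// Hypothetical helper: LLM_PROVIDER is optional and defaults to 'openai'.
function chatModel(provider = process.env.LLM_PROVIDER || "openai") {
  const model = CHAT_MODELS[provider];
  if (!model) throw new Error(`Unknown LLM_PROVIDER: ${provider}`);
  return model;
}

console.log(chatModel("together")); // → "meta-llama/Llama-3.3-70B-Instruct-Turbo"
```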
| Variable | Required | Description |
|---|---|---|
| `ATLAS_URI` | Yes | MongoDB connection string |
| `OPENAI_API_KEY` | If `LLM_PROVIDER=openai` | OpenAI API key |
| `TOGETHER_API_KEY` | If `LLM_PROVIDER=together` | Together AI API key |
| `OPENROUTER_API_KEY` | Yes | OpenRouter key (for analysis) |
| `LLM_PROVIDER` | No | `openai` (default) or `together` |
| `PORT` | No | Server port (default: 3001) |
| `FRONTEND_ORIGIN` | Yes | Frontend URL for CORS |
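The requirement rules in the table above can be expressed as a small startup check. This is a hypothetical sketch, not code from `server.js`: `missingVars` and its return shape are illustrative.

```javascript
// Hypothetical startup check mirroring the table above, not part of server.js.
// ATLAS_URI, OPENROUTER_API_KEY and FRONTEND_ORIGIN are always required;
// which chat key is needed depends on the selected provider.
function missingVars(env) {
  const required = ["ATLAS_URI", "OPENROUTER_API_KEY", "FRONTEND_ORIGIN"];
  const provider = env.LLM_PROVIDER || "openai"; // 'openai' is the default
  required.push(provider === "together" ? "TOGETHER_API_KEY" : "OPENAI_API_KEY");
  return required.filter((name) => !env[name]);
}

console.log(missingVars({ ATLAS_URI: "mongodb+srv://...", LLM_PROVIDER: "together" }));
// → ["OPENROUTER_API_KEY", "FRONTEND_ORIGIN", "TOGETHER_API_KEY"]
```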
| Variable | Required | Description |
|---|---|---|
| `VITE_API_URL` | No | Backend URL (default: `http://localhost:3001`) |
| `VITE_SITE_PASSWORD` | No | Site access password (default: `test-password`) |
The project is structured for straightforward deployment:
- Frontend — deploy the `frontend/` directory to Vercel. Set `VITE_API_URL` to your deployed backend URL in the Vercel environment settings.
- Backend — deploy the `backend/` directory to Render as a Node.js web service. Add all `config.env` variables as environment variables in the Render dashboard. The `/health` endpoint is used for keepalive pings.
CONSIDER can be accessed at: https://consider-chi.vercel.app
The application is currently password protected. If you would like access, please contact david.lyreskog@psych.ox.ac.uk with a request for the password and a brief description of your intended usage.
© 2026 Design Bioethics Lab, Department of Psychiatry, University of Oxford. Licensed under CC BY-NC 4.0 — free to use and adapt for non-commercial purposes with attribution.