# AI Debate Simulator

A real-time adversarial debate system in which two Large Language Models argue opposing positions on any topic, scored by an impartial AI judge.

MrArgue pits two locally-hosted LLMs against each other in structured debates. Each model is constrained by strict rules that force adversarial reasoning: explicit claims, direct rebuttals, and pointed counter-questions. An AI judge scores each turn based on claim quality, directness, and attack strength.
- Adversarial by design: Prompts enforce falsifiable claims and penalize vague abstractions
- RAG-enhanced: Retrieval-augmented generation provides factual context
- Dual scoring modes: AI judge or algorithmic heuristics
- Entertainment value: AI models roast each other with witty comebacks

## Features

| Category | Feature | Description |
|----------|---------|-------------|
| Debates | Multi-Model Debates | Llama 3.2 (PRO) vs Gemma 2:2b (CON) |
| | Streaming Responses | Server-Sent Events for real-time generation |
| | RAG Context | Embedding-based fact retrieval |
| | Debate History | Persistent storage with replay |
| Scoring | AI Judge | LLM-based analysis with detailed metrics |
| | Algorithmic Judge | Heuristic scoring for instant results |
| | Power Bar | Visual cumulative score display |
| Auth | JWT Authentication | Stateless token-based auth |
| | User Registration | Email/password with Argon2 hashing |
| | Profile Management | Avatar selection, nickname/age |
| | Persistent Auth | "Remember Me" session persistence (7 days) |
| Social | Community Feed | Posts with like functionality |
| | Debate Sharing | Share results link generation |
| UI | Dark Theme | Black + neon green aesthetic |
| | How It Works | Interactive explainer with LaTeX formulas |
## Roadmap

| Priority | Feature | Status |
|----------|---------|--------|
| High | User vs AI Mode | Planned |
| High | Topic Browser | Planned |
| Medium | Social Sharing | Planned |
| Medium | Debate Rematch | Planned |
| Medium | Leaderboard | Planned |
| Low | Voice Input | Planned |
| Low | Multi-language | Planned |
## Tech Stack

### Backend

| Component | Technology | Version | Purpose |
|-----------|------------|---------|---------|
| Runtime | Node.js | 20+ | JavaScript runtime |
| Framework | NestJS | 11.x | API framework |
| ORM | Prisma | 5.x | Database access |
| Database | SQLite | 3.x | Data persistence |
| Auth | Passport + JWT | - | Authentication |
| Caching | cache-manager | 7.x | Response caching |
| NLP | natural | 8.x | Algorithmic scoring |
### Frontend

| Component | Technology | Version | Purpose |
|-----------|------------|---------|---------|
| Framework | Flutter | 3.10+ | Cross-platform UI |
| HTTP | http | 1.6.x | API communication |
| Storage | flutter_secure_storage | 10.x | Token storage |
| Math | flutter_math_fork | 0.7.x | LaTeX rendering |
| Navigation | google_nav_bar | 5.x | Bottom navigation |
### AI Stack

| Component | Technology | Purpose |
|-----------|------------|---------|
| LLM Runtime | Ollama | Local model hosting |
| Proponent Model | Llama 3.2 | Fast, consistent reasoning |
| Opponent Model | Gemma 2:2b | Creative, divergent arguments |
| Embedding Model | nomic-embed-text | Vector embeddings for RAG |
## Scoring

### AI Judge

The AI judge evaluates each turn on three dimensions, with penalties for vagueness:

```
Persuasiveness = (C_claim + C_answer + C_attack) / 3 - V_penalty
```

| Metric | Range | Criteria |
|--------|-------|----------|
| C_claim | 0-100 | Falsifiable claim with evidence |
| C_answer | 0-100 | Direct response to the opponent's question |
| C_attack | 0-100 | Specific flaw identification in the opponent's argument |
| V_penalty | -30 each | Penalty for each undefined abstraction |
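The aggregation above can be sketched in a few lines. This is an illustrative TypeScript version, not the project's actual implementation: the `TurnAnalysis` shape and field names are assumptions, only the formula and the -30-per-term penalty come from the table.

```typescript
// Illustrative aggregation of the judge's per-turn metrics.
// TurnAnalysis and its field names are assumptions, not the real API.
interface TurnAnalysis {
  claim: number;      // C_claim, 0-100: falsifiable claim with evidence
  answer: number;     // C_answer, 0-100: directness of the response
  attack: number;     // C_attack, 0-100: specificity of the counter-attack
  vagueTerms: number; // count of undefined abstractions flagged by the judge
}

// Persuasiveness = (C_claim + C_answer + C_attack) / 3 - V_penalty,
// with a -30 penalty per vague term, clamped to the 0-100 range.
function persuasiveness(a: TurnAnalysis): number {
  const base = (a.claim + a.answer + a.attack) / 3;
  const penalty = 30 * a.vagueTerms;
  return Math.max(0, Math.min(100, base - penalty));
}

// Example: a strong turn that used one undefined abstraction.
console.log(persuasiveness({ claim: 90, answer: 80, attack: 70, vagueTerms: 1 }));
// (90 + 80 + 70) / 3 - 30 = 50
```

Clamping to 0-100 keeps a turn with many vague terms from driving the cumulative power bar negative.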
### Algorithmic Judge

Instant heuristic-based scoring:

```
Score = w1 * L_arg + w2 * K_rel + w3 * C_struct
```

| Factor | Weight | Description |
|--------|--------|-------------|
| L_arg | 0.3 | Argument length (optimal: 50-150 words) |
| K_rel | 0.4 | Keyword relevance to the topic |
| C_struct | 0.3 | Structural compliance (headers present) |
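A minimal sketch of how such a heuristic judge could be assembled. Only the weights, the 50-150 word window, and the three factors come from the table above; the exact scaling, the keyword matching, and the `CLAIM:`/`ANSWER:`/`ATTACK:` header names are assumptions for illustration.

```typescript
// Hypothetical algorithmic judge; weights and factors from the table,
// everything else (scaling, header names) is an assumption.
const W_LENGTH = 0.3, W_KEYWORDS = 0.4, W_STRUCTURE = 0.3;

// L_arg: full credit inside the optimal 50-150 word window,
// linearly decreasing credit outside it.
function lengthScore(text: string): number {
  const words = text.trim().split(/\s+/).length;
  if (words >= 50 && words <= 150) return 100;
  return words < 50 ? (words / 50) * 100 : Math.max(0, 100 - (words - 150));
}

// K_rel: share of topic keywords that appear in the argument.
function keywordScore(text: string, keywords: string[]): number {
  const lower = text.toLowerCase();
  const hits = keywords.filter((k) => lower.includes(k.toLowerCase())).length;
  return keywords.length === 0 ? 0 : (hits / keywords.length) * 100;
}

// C_struct: reward the structural headers the debate prompt requires
// (header names here are hypothetical).
function structureScore(text: string): number {
  const headers = ["CLAIM:", "ANSWER:", "ATTACK:"];
  const present = headers.filter((h) => text.includes(h)).length;
  return (present / headers.length) * 100;
}

function algorithmicScore(text: string, keywords: string[]): number {
  return (
    W_LENGTH * lengthScore(text) +
    W_KEYWORDS * keywordScore(text, keywords) +
    W_STRUCTURE * structureScore(text)
  );
}
```

Because every factor is a cheap string operation, this path returns a score without any LLM call, which is what makes the "instant results" mode possible.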
### Power Bar

The power bar displays the cumulative persuasiveness ratio:

```
R_pro = SUM(P_pro) / (SUM(P_pro) + SUM(P_con)) * 100%
```

### RAG Retrieval

Context documents are ranked by cosine similarity to the query embedding:

```
similarity(q, d) = (q . d) / (||q|| * ||d||)
```
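Both formulas are straightforward to implement; here is a small TypeScript sketch (function names are illustrative, not the project's API). The 50% fallback for an empty debate is an assumption.

```typescript
// Power bar: PRO's share of cumulative persuasiveness, as a percentage.
// Returns 50 (a neutral bar) before any turn is scored - an assumption.
function powerBarRatio(proScores: number[], conScores: number[]): number {
  const sum = (xs: number[]) => xs.reduce((a, b) => a + b, 0);
  const total = sum(proScores) + sum(conScores);
  return total === 0 ? 50 : (sum(proScores) / total) * 100;
}

// Cosine similarity between a query embedding and a document embedding,
// used to rank stored facts for RAG context retrieval.
function cosineSimilarity(q: number[], d: number[]): number {
  let dot = 0, qNorm = 0, dNorm = 0;
  for (let i = 0; i < q.length; i++) {
    dot += q[i] * d[i];
    qNorm += q[i] * q[i];
    dNorm += d[i] * d[i];
  }
  return dot / (Math.sqrt(qNorm) * Math.sqrt(dNorm));
}
```

In practice the embeddings would come from `nomic-embed-text` via Ollama, and the top-k documents by similarity would be prepended to the debater's prompt.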
## Database Schema

### User

| Column | Type | Constraints | Description |
|--------|------|-------------|-------------|
| id | UUID | PRIMARY KEY | Unique identifier |
| email | VARCHAR | UNIQUE, NOT NULL | Login email |
| password | VARCHAR | NOT NULL | Argon2 hash |
| avatarUrl | VARCHAR | NULLABLE | Profile image URL |
| nickname | VARCHAR | NULLABLE | Display name |
| age | INTEGER | NULLABLE | User age |
| createdAt | TIMESTAMP | DEFAULT NOW | Registration time |
### Debate

| Column | Type | Constraints | Description |
|--------|------|-------------|-------------|
| id | UUID | PRIMARY KEY | Unique identifier |
| topic | VARCHAR | NOT NULL | Debate topic |
| status | ENUM | NOT NULL | ACTIVE, FINISHED |
| userId | UUID | FOREIGN KEY | Owner reference |
| createdAt | TIMESTAMP | DEFAULT NOW | Start time |
### Turn

| Column | Type | Constraints | Description |
|--------|------|-------------|-------------|
| id | UUID | PRIMARY KEY | Unique identifier |
| debateId | UUID | FOREIGN KEY | Parent debate |
| speaker | ENUM | NOT NULL | MODEL_A, MODEL_B |
| content | TEXT | NOT NULL | Turn content |
| modelName | VARCHAR | NOT NULL | LLM model used |
| analysis | JSON | NULLABLE | Scoring data |
| timestamp | TIMESTAMP | DEFAULT NOW | Turn time |
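As a sketch, the Turn table above might map to a Prisma model like the following. The field names come from the table; the attribute details (`@default(uuid())`, the relation wiring) are assumptions about how the actual `schema.prisma` is written.

```prisma
// Hypothetical schema.prisma fragment reconstructed from the table above.
model Turn {
  id        String   @id @default(uuid())
  debateId  String
  debate    Debate   @relation(fields: [debateId], references: [id])
  speaker   Speaker            // MODEL_A or MODEL_B
  content   String
  modelName String
  analysis  Json?              // scoring data, null until judged
  timestamp DateTime @default(now())
}

enum Speaker {
  MODEL_A
  MODEL_B
}
```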
## API Reference

### Auth

| Method | Endpoint | Body | Response |
|--------|----------|------|----------|
| POST | `/auth/register` | `{email, password}` | `{token, user}` |
| POST | `/auth/login` | `{email, password}` | `{token, user}` |
| GET | `/auth/verify` | - | `{user}` |
| POST | `/auth/onboarding` | `{nickname, age}` | `{user}` |
| PUT | `/auth/profile` | `{avatarUrl}` | `{user}` |
### Debates

| Method | Endpoint | Body | Response |
|--------|----------|------|----------|
| GET | `/debate` | - | `[Debate]` |
| GET | `/debate/:id` | - | `Debate` |
| POST | `/debate/start` | `{topic}` | `Debate` |
| POST | `/debate/:id/turn` | `{scoringMode}` | `Debate` |
| GET | `/debate/:id/turn/stream` | - | SSE Stream |
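The streaming endpoint emits Server-Sent Events, so a client has to split the byte stream into `data:` lines. A minimal sketch of that parsing step (the payload format and the surrounding fetch code are assumptions, not the project's actual client):

```typescript
// Minimal SSE parser for consuming /debate/:id/turn/stream.
// Extracts the payload of every "data: ..." line in a decoded chunk.
function parseSseData(chunk: string): string[] {
  return chunk
    .split("\n")
    .filter((line) => line.startsWith("data: "))
    .map((line) => line.slice("data: ".length));
}

// A client would feed decoded network chunks through the parser, e.g.
// (hypothetical usage - API base URL and auth handling are assumptions):
//   const res = await fetch(`${apiBase}/debate/${id}/turn/stream`, {
//     headers: { Authorization: `Bearer ${token}` },
//   });
//   ...decode res.body with TextDecoder and call parseSseData per chunk.

console.log(parseSseData("data: Hello\n\ndata: world\n\n"));
// ["Hello", "world"]
```

A production parser would also buffer partial lines across chunk boundaries; this sketch assumes each chunk ends on a line break.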
### Feed

| Method | Endpoint | Body | Response |
|--------|----------|------|----------|
| GET | `/feed` | - | `[Post]` |
| POST | `/feed` | `{content, debateId?}` | `Post` |
| POST | `/feed/:id/like` | - | `{liked}` |
## Getting Started

### Prerequisites

| Requirement | Version | Purpose |
|-------------|---------|---------|
| Node.js | 20+ | Backend runtime |
| npm | 10+ | Package manager |
| Flutter | 3.10+ | Frontend framework |
| Ollama | Latest | LLM runtime |
### Clone the Repository

```bash
git clone https://github.com/TrendySloth1001/argumentbot.git
cd argumentbot
```

### Backend

```bash
cd backend

# Install dependencies
npm install

# Configure environment
cp .env.example .env
# Edit .env with your settings

# Initialize database
npx prisma generate
npx prisma migrate dev

# Start development server
npm run dev
```

### Frontend

```bash
cd frontend

# Install dependencies
flutter pub get

# Configure the API URL by editing lib/core/config/api_config.dart

# Run the application
flutter run
```

### Ollama

```bash
# Install Ollama from https://ollama.ai

# Pull required models
ollama pull llama3.2
ollama pull gemma2:2b
ollama pull nomic-embed-text

# Verify installation
ollama list
```
## Docker Deployment

```bash
cd backend
docker build -t argumentbot-backend:latest .
docker run -d \
  --name argumentbot-api \
  -p 3000:3000 \
  -e DATABASE_URL="file:./dev.db" \
  -e JWT_SECRET="your-secret-key" \
  -e OLLAMA_API_URL="http://host.docker.internal:11434" \
  argumentbot-backend:latest
```

Or with Docker Compose:

```bash
cd backend
docker-compose up -d
```
## Environment Variables

| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| DATABASE_URL | Yes | - | SQLite connection string |
| JWT_SECRET | Yes | - | JWT signing secret |
| OLLAMA_API_URL | No | http://localhost:11434 | Ollama API endpoint |
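A filled-in `.env` might look like the following (values are placeholders; generate your own long random JWT secret):

```env
# Sample .env matching the variables above
DATABASE_URL="file:./dev.db"
JWT_SECRET="change-me-to-a-long-random-string"
OLLAMA_API_URL="http://localhost:11434"
```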

## Contributing

1. Fork the repository
2. Create a feature branch: `git checkout -b feature/new-feature`
3. Commit your changes: `git commit -m 'Add new feature'`
4. Push to the branch: `git push origin feature/new-feature`
5. Submit a Pull Request
## License

This project is licensed under the MIT License. See LICENSE for details.