A real-time discussion platform that orchestrates conversations between different Large Language Models (LLMs), enabling AI-to-AI debates with human intervention capabilities.
- 🤖 Multi-LLM Support: GPT-5.1, GPT-4, Claude Sonnet 4.5, AWS Bedrock models, and more
- ☁️ Multiple Providers: OpenAI, Anthropic, and AWS Bedrock
- 💬 Real-time Streaming: Token-by-token conversation updates via Server-Sent Events
- 🎭 Role-based Discussion: Primary LLM presents ideas, Critic LLM evaluates
- 👤 Human Intervention: Join discussions at any time with your own messages
- 📋 Copy to Markdown: Export entire conversations with timestamps
- 🎨 Clean UI: Modern, responsive interface with expandable configurations
- ⚡ Performance Optimized: Token batching prevents browser freezing during streaming
- React 18 + TypeScript
- Vite
- Tailwind CSS
- Zustand (state management)
- react-markdown
- Node.js 20 + Express + TypeScript
- OpenAI SDK
- Anthropic SDK
- AWS SDK for Bedrock Runtime
- Server-Sent Events (SSE)
- Node.js 20.x or higher
- OpenAI API key
- Anthropic API key
- AWS credentials (optional, only for Bedrock models)
1. Clone the repository

   ```bash
   git clone https://github.com/shaharia-lab/multi-llm-discussion.git
   cd multi-llm-discussion
   ```

2. Set up environment variables

   ```bash
   cp .env.example .env
   # Edit .env and add your API keys
   ```

3. Start with Docker Compose

   ```bash
   docker-compose up
   ```

4. Open your browser and navigate to http://localhost:3000
1. Clone the repository

   ```bash
   git clone https://github.com/shaharia-lab/multi-llm-discussion.git
   cd multi-llm-discussion
   ```

2. Install dependencies

   ```bash
   pnpm install
   ```

3. Set up environment variables

   ```bash
   cp .env.example .env
   # Edit .env with your API keys:
   # OPENAI_API_KEY=your_openai_key
   # ANTHROPIC_API_KEY=your_anthropic_key
   ```

4. Start development servers

   ```bash
   pnpm dev
   ```

5. Open your browser and navigate to http://localhost:3000
Create a `.env` file in the root directory:

```bash
# Required
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...

# Optional - Only needed for AWS Bedrock models
AWS_ACCESS_KEY_ID=AKIA...
AWS_SECRET_ACCESS_KEY=...
AWS_REGION=eu-west-1

# Server Configuration
PORT=3001
```

- GPT-5.1 (`gpt-5.1-2025-11-13`)
- GPT-4
- GPT-3.5 Turbo
- Claude Sonnet 4.5 (`claude-sonnet-4-5-20250929`)
- Claude 3 Opus
- Claude Sonnet 4.5 (Bedrock) (`eu.anthropic.claude-sonnet-4-5-20250929-v1:0`)
- Claude Opus 4 (Bedrock) (`eu.anthropic.claude-opus-4-20250514-v1:0`)
- Configure Discussion: Set a topic and choose Primary/Critic LLM models with custom system prompts
- Start Discussion: The Primary LLM presents ideas on the topic
- Critic Responds: The Critic LLM evaluates and critiques the Primary's response
- Back and Forth: LLMs continue the discussion automatically
- Human Intervention: Jump in anytime by typing your message
- Export: Copy the entire conversation as markdown for later use (see the sketch below)
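
For illustration, the export can be produced by walking the message list and emitting one markdown block per message. The `Message` shape below is an assumption for the sketch, not the actual type in `frontend/src/types.ts`:

```typescript
// Hypothetical message shape; the real interface lives in frontend/src/types.ts.
interface Message {
  author: string;    // e.g. "GPT-4", "Claude Sonnet 4.5", or "You"
  content: string;
  timestamp: Date;
}

// Render a conversation as a markdown transcript with timestamps.
function toMarkdown(topic: string, messages: Message[]): string {
  const header = `# Discussion: ${topic}\n`;
  const body = messages
    .map((m) => `**${m.author}** (${m.timestamp.toISOString()}):\n\n${m.content}`)
    .join("\n\n---\n\n");
  return `${header}\n${body}\n`;
}
```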
1. Enter a Topic: Type your discussion topic in the large expandable textarea

2. Configure Primary LLM (Optional): Click to expand and customize
   - Select model (GPT-5.1, GPT-4, etc.)
   - Customize system prompt

3. Configure Critic LLM (Optional): Click to expand and customize
   - Select model (Claude Sonnet 4.5, etc.)
   - Customize system prompt

4. Click "Start Discussion": The discussion begins automatically

5. Watch the Discussion: Messages stream in real-time with color-coded bubbles:
   - 🟢 Green: OpenAI models (GPT)
   - 🟠 Orange: Anthropic models (Claude) and Bedrock models
   - 🟡 Yellow: Your messages

6. Intervene: Type a message in the input field at the bottom to join the conversation
   - Your message will be sent to the Primary LLM
   - The Primary will respond to you
   - The Critic will then evaluate the Primary's response
   - The discussion continues automatically

7. Stop Discussion: Click the "Stop Discussion" button to end the conversation
```
Primary LLM  →  Presents idea on topic
      ↓
Critic LLM   →  Evaluates and critiques
      ↓
Primary LLM  →  Responds to critique
      ↓
Critic LLM   →  Further evaluation
      ↓
(Loop continues until stopped)
```
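
The loop above can be sketched in a few lines of TypeScript. This is an illustrative reconstruction, not the actual `discussionController.ts` code; `generateReply` is a hypothetical stand-in for the provider adapters:

```typescript
type Role = "primary" | "critic";

// Hypothetical stand-in for the real adapters (openai.ts / anthropic.ts / bedrock.ts).
async function generateReply(role: Role, transcript: string[]): Promise<string> {
  return `${role} reply to: ${transcript[transcript.length - 1]}`;
}

// Alternate Primary and Critic turns until the discussion is stopped.
async function runDiscussion(
  topic: string,
  isStopped: () => boolean,
): Promise<string[]> {
  const transcript: string[] = [`Topic: ${topic}`];
  let turn: Role = "primary"; // the Primary LLM always opens
  while (!isStopped()) {
    const reply = await generateReply(turn, transcript);
    transcript.push(`${turn}: ${reply}`);
    turn = turn === "primary" ? "critic" : "primary"; // hand over to the other role
  }
  return transcript;
}
```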
```
multi-llm-discussion/
├── backend/
│   ├── src/
│   │   ├── adapters/
│   │   │   ├── openai.ts            # OpenAI API integration
│   │   │   ├── anthropic.ts         # Anthropic API integration
│   │   │   └── bedrock.ts           # AWS Bedrock API integration
│   │   ├── discussionController.ts  # Discussion orchestration
│   │   ├── streamManager.ts         # SSE stream handling
│   │   ├── types.ts                 # TypeScript interfaces
│   │   └── index.ts                 # Express server
│   ├── package.json
│   └── tsconfig.json
├── frontend/
│   ├── src/
│   │   ├── components/
│   │   │   ├── ConfigurationForm.tsx
│   │   │   ├── DiscussionView.tsx
│   │   │   ├── MessageBubble.tsx
│   │   │   └── InterventionInput.tsx
│   │   ├── App.tsx                  # Main application
│   │   ├── store.ts                 # Zustand state management
│   │   ├── types.ts                 # TypeScript interfaces
│   │   └── main.tsx                 # Entry point
│   ├── package.json
│   ├── vite.config.ts
│   └── tailwind.config.js
├── .env.example
├── package.json
├── pnpm-workspace.yaml
└── README.md
```
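
SSE delivery lives in `backend/src/streamManager.ts`. As a rough illustration of the pattern it implements (the route and payloads here are assumptions, not the project's actual API):

```typescript
import express from "express";

const app = express();

// Illustrative SSE endpoint; the real route and payloads are defined in
// backend/src/streamManager.ts and may differ.
app.get("/api/discussions/:id/stream", (req, res) => {
  res.setHeader("Content-Type", "text/event-stream");
  res.setHeader("Cache-Control", "no-cache");
  res.setHeader("Connection", "keep-alive");
  res.flushHeaders();

  // Emit a named SSE event with a JSON payload.
  const send = (event: string, data: unknown) => {
    res.write(`event: ${event}\ndata: ${JSON.stringify(data)}\n\n`);
  };

  send("message_start", { role: "primary" });
  // ...tokens from the LLM adapters would be forwarded here as they arrive...

  req.on("close", () => res.end()); // stop writing once the client disconnects
});

app.listen(3001);
```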
Start a new discussion
Request:
```json
{
  "topic": "Discussion topic",
  "participants": [
    {
      "id": "uuid",
      "modelId": "gpt-4",
      "provider": "openai",
      "displayName": "GPT-4",
      "systemPrompt": "...",
      "role": "primary"
    },
    {
      "id": "uuid",
      "modelId": "eu.anthropic.claude-sonnet-4-5-20250929-v1:0",
      "provider": "bedrock",
      "displayName": "Claude Sonnet (Bedrock)",
      "systemPrompt": "...",
      "role": "critic"
    }
  ]
}
```

Response:
```json
{
  "discussionId": "uuid"
}
```

Server-Sent Events endpoint for streaming messages

Events:

- `token`: Individual token from the LLM response
- `complete`: Message generation complete
- `message_start`: New message started
- `error`: An error occurred
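
The exact routes are not listed here, so the paths in this client sketch are assumptions (check `backend/src/index.ts` for the real ones). It illustrates the overall flow: start a discussion, subscribe to its event stream, then send a human message via the intervention endpoint documented next:

```typescript
async function startAndFollow(config: object): Promise<void> {
  // 1. Start a discussion and read back its id (path is an assumption).
  const res = await fetch("/api/discussions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(config),
  });
  const { discussionId } = (await res.json()) as { discussionId: string };

  // 2. Subscribe to the SSE stream and handle the documented event types.
  const stream = new EventSource(`/api/discussions/${discussionId}/stream`);
  stream.addEventListener("token", (e) => {
    // Payload shape is assumed; only the event names are documented above.
    const { token } = JSON.parse((e as MessageEvent).data);
    console.log(token);
  });
  stream.addEventListener("complete", () => console.log("[message complete]"));
  stream.addEventListener("error", () => stream.close());

  // 3. Join the conversation with a human message.
  await fetch(`/api/discussions/${discussionId}/intervene`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ content: "Your message" }),
  });
}
```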
Send a human message to the discussion

Request:

```json
{
  "content": "Your message"
}
```

Stop the discussion
Response:

```json
{
  "status": "stopped"
}
```

```bash
# Build both frontend and backend
pnpm build

# Start production server
pnpm start
```

The frontend will be built to `frontend/dist` and the backend to `backend/dist`.
- Ensure your `.env` file is in the root directory
- Verify your API keys are valid and have sufficient credits
- Check that there are no extra spaces or quotes around the keys
If port 3001 or 3000 is already in use:
```bash
# Change the backend port in .env
PORT=3002

# Or kill the process using the port
lsof -ti:3001 | xargs kill -9
```

- Make sure the backend is running before starting the frontend
- Check browser console for connection errors
- Ensure no firewall is blocking the connection
```bash
# Clean and reinstall dependencies
rm -rf node_modules backend/node_modules frontend/node_modules
pnpm install
```

- Ensure your AWS IAM user has the following permissions (example policy below):
  - `bedrock:InvokeModel`
  - `bedrock:InvokeModelWithResponseStream`
- Verify the Bedrock models are enabled in your AWS account (EU West 1 region)
- Check that AWS credentials are properly set in your `.env` file
- Ensure `AWS_REGION` matches where your Bedrock models are available
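
For reference, a minimal IAM policy granting just those two actions might look like the following (scope `Resource` more tightly in real deployments):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": "*"
    }
  ]
}
```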
- Frontend: React 18 + TypeScript + Vite + Tailwind CSS
- Backend: Node.js + Express + TypeScript (ES Modules)
- State Management: Zustand
- Real-time Communication: Server-Sent Events (SSE)
- Streaming: Token-by-token with 50ms batching for performance (sketched below)
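
As a sketch of that batching strategy (an illustration, not the actual `frontend/src/store.ts` code): tokens are buffered as they arrive and flushed into the Zustand store every 50ms, so the UI re-renders once per flush rather than once per token:

```typescript
import { create } from "zustand";

// Minimal store: a single streaming message that grows in batches.
interface DiscussionState {
  streamText: string;
  appendChunk: (chunk: string) => void;
}

export const useDiscussionStore = create<DiscussionState>((set) => ({
  streamText: "",
  appendChunk: (chunk) => set((s) => ({ streamText: s.streamText + chunk })),
}));

// Buffer incoming tokens; commit them to the store at most every 50ms.
let buffer = "";

export function onToken(token: string): void {
  buffer += token;
}

setInterval(() => {
  if (buffer.length > 0) {
    useDiscussionStore.getState().appendChunk(buffer); // one re-render per flush
    buffer = "";
  }
}, 50);
```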
Contributions are welcome! Please feel free to submit a Pull Request.
- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add some amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- Built with OpenAI API
- Built with Anthropic API
- Built with AWS Bedrock
- UI components styled with Tailwind CSS
For issues, questions, or suggestions, please open an issue.
Made with ❤️ by the Shaharia Lab team