Multi-LLM Discussion System

A real-time discussion platform that orchestrates conversations between different Large Language Models (LLMs), enabling AI-to-AI debates with human intervention capabilities.


Features

  • πŸ€– Multi-LLM Support: GPT-5.1, GPT-4, Claude Sonnet 4.5, AWS Bedrock models, and more
  • ☁️ Multiple Providers: OpenAI, Anthropic, and AWS Bedrock
  • πŸ’¬ Real-time Streaming: Token-by-token conversation updates via Server-Sent Events
  • 🎭 Role-based Discussion: Primary LLM presents ideas, Critic LLM evaluates
  • πŸ‘€ Human Intervention: Join discussions at any time with your own messages
  • πŸ“‹ Copy to Markdown: Export entire conversations with timestamps
  • 🎨 Clean UI: Modern, responsive interface with expandable configurations
  • ⚑ Performance Optimized: Token batching prevents browser freezing during streaming

Tech Stack

Frontend

  • React 18 + TypeScript
  • Vite
  • Tailwind CSS
  • Zustand (state management)
  • react-markdown

Backend

  • Node.js 20 + Express + TypeScript
  • OpenAI SDK
  • Anthropic SDK
  • AWS SDK for Bedrock Runtime
  • Server-Sent Events (SSE)

Quick Start

Prerequisites

  • Node.js 20.x or higher
  • OpenAI API key
  • Anthropic API key
  • AWS credentials (optional, only for Bedrock models)

Option 1: Using Docker (Recommended)

  1. Clone the repository

    git clone https://github.com/shaharia-lab/multi-llm-discussion.git
    cd multi-llm-discussion
  2. Set up environment variables

    cp .env.example .env
    # Edit .env and add your API keys
  3. Start with Docker Compose

    docker-compose up
  4. Open your browser and navigate to http://localhost:3000

Option 2: Local Development

  1. Clone the repository

    git clone https://github.com/shaharia-lab/multi-llm-discussion.git
    cd multi-llm-discussion
  2. Install dependencies

    pnpm install
  3. Set up environment variables

    cp .env.example .env
    # Edit .env with your API keys:
    # OPENAI_API_KEY=your_openai_key
    # ANTHROPIC_API_KEY=your_anthropic_key
  4. Start development servers

    pnpm dev
  5. Open your browser and navigate to http://localhost:3000

Environment Variables

Create a .env file in the root directory:

# Required
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...

# Optional - Only needed for AWS Bedrock models
AWS_ACCESS_KEY_ID=AKIA...
AWS_SECRET_ACCESS_KEY=...
AWS_REGION=eu-west-1

# Server Configuration
PORT=3001
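
For reference, here is a minimal sketch of how the backend might validate these variables at startup (the requireEnv helper is illustrative, not part of the codebase):

// Hypothetical startup check: fail fast if a required key is missing.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

const openaiKey = requireEnv('OPENAI_API_KEY');
const anthropicKey = requireEnv('ANTHROPIC_API_KEY');
// AWS credentials are optional and only needed for Bedrock models.
const awsRegion = process.env.AWS_REGION ?? 'eu-west-1';
const port = Number(process.env.PORT ?? 3001);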

Supported Models

OpenAI

  • GPT-5.1 (gpt-5.1-2025-11-13)
  • GPT-4
  • GPT-3.5 Turbo

Anthropic

  • Claude Sonnet 4.5 (claude-sonnet-4-5-20250929)
  • Claude 3 Opus

AWS Bedrock

  • Claude Sonnet 4.5 (Bedrock) (eu.anthropic.claude-sonnet-4-5-20250929-v1:0)
  • Claude Opus 4 (Bedrock) (eu.anthropic.claude-opus-4-20250514-v1:0)

How It Works

  1. Configure Discussion: Set a topic and choose Primary/Critic LLM models with custom system prompts
  2. Start Discussion: The Primary LLM presents ideas on the topic
  3. Critic Responds: The Critic LLM evaluates and critiques the Primary's response
  4. Back and Forth: LLMs continue the discussion automatically
  5. Human Intervention: Jump in anytime by typing your message
  6. Export: Copy the entire conversation as markdown for later use

Usage Guide

Starting a Discussion

  1. Enter a Topic: Type your discussion topic in the large expandable textarea

  2. Configure Primary LLM (Optional): Click to expand and customize

    • Select model (GPT-5.1, GPT-4, etc.)
    • Customize system prompt
  3. Configure Critic LLM (Optional): Click to expand and customize

    • Select model (Claude Sonnet 4.5, etc.)
    • Customize system prompt
  4. Click "Start Discussion": The discussion will begin automatically

During a Discussion

  • Watch the Discussion: Messages will stream in real-time with color-coded bubbles:

    • 🟒 Green: OpenAI models (GPT)
    • 🟠 Orange: Anthropic models (Claude) and Bedrock models
    • 🟑 Yellow: Your messages
  • Intervene: Type a message in the input field at the bottom to join the conversation

    • Your message will be sent to the Primary LLM
    • The Primary will respond to you
    • The Critic will then evaluate the Primary's response
    • The discussion continues automatically
  • Stop Discussion: Click the "Stop Discussion" button to end the conversation

Discussion Flow

Primary LLM β†’ Presents idea on topic
    ↓
Critic LLM β†’ Evaluates and critiques
    ↓
Primary LLM β†’ Responds to critique
    ↓
Critic LLM β†’ Further evaluation
    ↓
(Loop continues until stopped)
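
In code terms, the orchestration amounts to alternating turns until a stop is requested. A minimal sketch, assuming a hypothetical Participant interface rather than the project's actual types:

interface Participant {
  role: 'primary' | 'critic';
  generateReply(history: string[]): Promise<string>;
}

// Illustrative loop: alternate primary and critic until stopped.
async function runDiscussion(
  primary: Participant,
  critic: Participant,
  topic: string,
  isStopped: () => boolean
): Promise<string[]> {
  const history: string[] = [topic];
  let speaker: Participant = primary;
  while (!isStopped()) {
    const reply = await speaker.generateReply(history);
    history.push(reply);
    speaker = speaker === primary ? critic : primary; // hand the turn over
  }
  return history;
}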

Project Structure

multi-llm-discussion/
β”œβ”€β”€ backend/
β”‚   β”œβ”€β”€ src/
β”‚   β”‚   β”œβ”€β”€ adapters/
β”‚   β”‚   β”‚   β”œβ”€β”€ openai.ts          # OpenAI API integration
β”‚   β”‚   β”‚   β”œβ”€β”€ anthropic.ts       # Anthropic API integration
β”‚   β”‚   β”‚   └── bedrock.ts         # AWS Bedrock API integration
β”‚   β”‚   β”œβ”€β”€ discussionController.ts # Discussion orchestration
β”‚   β”‚   β”œβ”€β”€ streamManager.ts        # SSE stream handling
β”‚   β”‚   β”œβ”€β”€ types.ts                # TypeScript interfaces
β”‚   β”‚   └── index.ts                # Express server
β”‚   β”œβ”€β”€ package.json
β”‚   └── tsconfig.json
β”œβ”€β”€ frontend/
β”‚   β”œβ”€β”€ src/
β”‚   β”‚   β”œβ”€β”€ components/
β”‚   β”‚   β”‚   β”œβ”€β”€ ConfigurationForm.tsx
β”‚   β”‚   β”‚   β”œβ”€β”€ DiscussionView.tsx
β”‚   β”‚   β”‚   β”œβ”€β”€ MessageBubble.tsx
β”‚   β”‚   β”‚   └── InterventionInput.tsx
β”‚   β”‚   β”œβ”€β”€ App.tsx                 # Main application
β”‚   β”‚   β”œβ”€β”€ store.ts                # Zustand state management
β”‚   β”‚   β”œβ”€β”€ types.ts                # TypeScript interfaces
β”‚   β”‚   └── main.tsx                # Entry point
β”‚   β”œβ”€β”€ package.json
β”‚   β”œβ”€β”€ vite.config.ts
β”‚   └── tailwind.config.js
β”œβ”€β”€ .env.example
β”œβ”€β”€ package.json
β”œβ”€β”€ pnpm-workspace.yaml
└── README.md

API Endpoints

POST /api/discussions/start

Start a new discussion

Request:

{
  "topic": "Discussion topic",
  "participants": [
    {
      "id": "uuid",
      "modelId": "gpt-4",
      "provider": "openai",
      "displayName": "GPT-4",
      "systemPrompt": "...",
      "role": "primary"
    },
    {
      "id": "uuid",
      "modelId": "eu.anthropic.claude-sonnet-4-5-20250929-v1:0",
      "provider": "bedrock",
      "displayName": "Claude Sonnet (Bedrock)",
      "systemPrompt": "...",
      "role": "critic"
    }
  ]
}

Response:

{
  "discussionId": "uuid"
}
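
For example, a discussion can be started from a script with a plain fetch call (the field values below are placeholders; this assumes the backend listens on port 3001 as configured above):

// Run in an ES module context (top-level await) on Node.js 20+.
const res = await fetch('http://localhost:3001/api/discussions/start', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    topic: 'Should tests be written before code?',
    participants: [
      {
        id: crypto.randomUUID(),
        modelId: 'gpt-4',
        provider: 'openai',
        displayName: 'GPT-4',
        systemPrompt: 'You are the primary speaker.',
        role: 'primary',
      },
      {
        id: crypto.randomUUID(),
        modelId: 'claude-sonnet-4-5-20250929',
        provider: 'anthropic',
        displayName: 'Claude Sonnet 4.5',
        systemPrompt: 'You are the critic.',
        role: 'critic',
      },
    ],
  }),
});
const { discussionId } = await res.json();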

GET /api/discussions/:id/stream

Server-Sent Events endpoint for streaming messages

Events, in the order they typically arrive (a browser-side consumption sketch follows below):

  • message_start: New message started
  • token: Individual token from LLM response
  • complete: Message generation complete
  • error: Error occurred
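
A minimal browser-side consumer, using the discussionId returned by the start endpoint (the exact payload carried in each event's data field is an assumption; inspect the real events in your browser's dev tools):

// Subscribe to the SSE stream for a given discussion.
let current = '';
const source = new EventSource(`/api/discussions/${discussionId}/stream`);

source.addEventListener('message_start', () => {
  current = ''; // a new message is beginning
});
source.addEventListener('token', (e) => {
  current += (e as MessageEvent).data; // append each streamed token
});
source.addEventListener('complete', () => {
  console.log('message complete:', current);
});
source.addEventListener('error', () => {
  source.close(); // stop listening on stream errors
});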

POST /api/discussions/:id/intervention

Send a human message to the discussion

Request:

{
  "content": "Your message"
}

POST /api/discussions/:id/stop

Stop the discussion

Response:

{
  "status": "stopped"
}
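
Both of these endpoints can be called directly in the same way as the start endpoint; a short sketch:

// Send a human message into the running discussion.
await fetch(`http://localhost:3001/api/discussions/${discussionId}/intervention`, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ content: 'Please consider the cost angle too.' }),
});

// End the discussion.
await fetch(`http://localhost:3001/api/discussions/${discussionId}/stop`, {
  method: 'POST',
});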

Building for Production

# Build both frontend and backend
pnpm build

# Start production server
pnpm start

The frontend will be built to frontend/dist and the backend to backend/dist.

Troubleshooting

API Key Issues

  • Ensure your .env file is in the root directory
  • Verify your API keys are valid and have sufficient credits
  • Check that there are no extra spaces or quotes around the keys

Port Already in Use

If port 3000 (frontend) or 3001 (backend) is already in use:

# Change the backend port in .env
PORT=3002

# Or kill the process using the port
lsof -ti:3001 | xargs kill -9

SSE Connection Issues

  • Make sure the backend is running before starting the frontend
  • Check browser console for connection errors
  • Ensure no firewall is blocking the connection

TypeScript Errors

# Clean and reinstall dependencies
rm -rf node_modules backend/node_modules frontend/node_modules
pnpm install

AWS Bedrock Issues

  • Ensure your AWS IAM user has the following permissions (a minimal policy sketch follows this list):
    • bedrock:InvokeModel
    • bedrock:InvokeModelWithResponseStream
  • Verify the Bedrock models are enabled in your AWS account (EU West 1 region)
  • Check that AWS credentials are properly set in your .env file
  • Ensure the AWS_REGION matches where your Bedrock models are available
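
A minimal IAM policy granting just the two permissions above might look like the following (the wildcard resource is an assumption; scope it down for production use):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": "*"
    }
  ]
}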

Architecture

  • Frontend: React 18 + TypeScript + Vite + Tailwind CSS
  • Backend: Node.js + Express + TypeScript (ES Modules)
  • State Management: Zustand
  • Real-time Communication: Server-Sent Events (SSE)
  • Streaming: Token-by-token with 50ms batching for performance
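
The batching mentioned above is what keeps the UI responsive: rather than re-rendering on every token, incoming tokens are buffered and flushed on a timer. A minimal sketch of the idea (the createTokenBatcher helper is illustrative, not the project's actual code):

// Buffer incoming tokens and flush them to the UI at most every 50ms.
function createTokenBatcher(flush: (batch: string) => void, intervalMs = 50) {
  let buffer = '';
  const timer = setInterval(() => {
    if (buffer.length > 0) {
      flush(buffer); // one UI update per interval instead of one per token
      buffer = '';
    }
  }, intervalMs);
  return {
    push(token: string) { buffer += token; },
    stop() { clearInterval(timer); },
  };
}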

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add some amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

License

This project is licensed under the MIT License - see the LICENSE file for details.

Support

For issues, questions, or suggestions, please open an issue.


Made with ❀️ by the Shaharia Lab team
