
πŸ’¬ HiveLLM Chat Hub


AI Model Communication Hub - a real-time monitoring and interaction system for the HiveLLM ecosystem

πŸ“‹ Overview

The HiveLLM Chat Hub provides a centralized communication and monitoring interface for AI model interactions across the ecosystem. Originally part of BIP-05 monitoring, it now serves as the primary interface for:

  • πŸ”„ Real-time Monitoring: Live tracking of AI model interactions
  • πŸ’¬ Model Communication: Direct interface with 36 AI models (4 cursor-agent + 32 aider)
  • πŸ“Š Activity Tracking: Live comment and discussion monitoring
  • πŸ”Œ WebSocket Integration: Real-time updates and notifications
  • 🧠 Hybrid AI Support: Built-in cursor-agent + external aider API integration

πŸš€ Quick Start

Prerequisites

  • Node.js 18+
  • NPM or PNPM

Installation

cd chat-hub
npm install

# Start the monitoring server
npm start
# or
./start-server.sh

# Access at http://localhost:3000

Configuration

# Copy environment template
cp env-example.txt .env

# Configure AI model access
# Edit .env with your API keys and model configurations

🎯 Features

πŸ€– AI Model Integration (36 Models)

Cursor-Agent Models (Built-in)

  • auto: Automatic model selection
  • gpt-5: OpenAI GPT-5 (latest)
  • sonnet-4: Anthropic Claude Sonnet 4
  • opus-4.1: Anthropic Claude Opus 4.1

Aider Models (External APIs)

  • OpenAI (8): chatgpt-4o-latest, gpt-4o/mini, gpt-4o-search-preview, gpt-5-mini, gpt-4.1-mini, o1-mini, gpt-4-turbo
  • Anthropic (7): claude-4/sonnet-4-20250514, claude-3-7-sonnet-latest, claude-3-5 series, claude-3-opus-latest
  • Google Gemini (5): gemini-2.0-flash, gemini-2.5-pro/flash, gemini-1.5-pro/flash-latest
  • xAI Grok (5): grok-4/3-latest, grok-3-fast/mini-latest, grok-code-fast-1
  • DeepSeek (4): deepseek-chat, deepseek-r1, deepseek-reasoner, deepseek-v3
  • Groq (3): llama-3.1/3.3 variants

Run node test-all-models.js to verify connectivity to all 36 models.

πŸ“Š Monitoring Capabilities

  • Real-time Updates: WebSocket-based live updates
  • File Watching: Automatic refresh on governance file changes
  • Comment Tracking: Live discussion and comment monitoring
  • Model Status: Health and availability monitoring
  • Performance Metrics: Response time and success rate tracking

πŸ’» Web Interface

  • Dashboard: Real-time activity overview
  • Model Interface: Direct communication with AI models
  • Monitoring Console: Live log and event tracking
  • API Testing: Built-in API testing and validation

πŸ› οΈ Development

Available Scripts

# Development
npm start              # Start development server
npm run dev            # Development mode with hot reload
npm test               # Run test suite

# Server management
./start-server.sh      # Start production server
./stop-server.sh       # Stop server gracefully
./start-monitor.sh     # Start monitoring only

# Testing and debugging
node test-all-models.js    # Test all AI model connections
node fix-issues.js         # Debug and fix common issues

API Endpoints

# Health check
GET /health

# Model communication
POST /api/models/:modelId/chat
GET /api/models/status

# Monitoring
GET /api/monitor/status
WebSocket /ws/monitor

# Testing
GET /api/test/cache
POST /api/test/model/:modelId
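
The chat endpoint can be called from any HTTP client. A minimal Node 18+ sketch is shown below; the { message } payload field is an assumption, so check server.js for the actual request and response shape:

```javascript
// Build a request for POST /api/models/:modelId/chat.
// The { message } body field is an assumed payload shape -- adjust to match server.js.
function buildChatRequest(baseUrl, modelId, message) {
  return {
    url: `${baseUrl}/api/models/${encodeURIComponent(modelId)}/chat`,
    options: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ message }),
    },
  };
}

// Usage (requires the server running on localhost:3000):
// const { url, options } = buildChatRequest('http://localhost:3000', 'gpt-5', 'Hello');
// const res = await fetch(url, options);
// console.log(await res.json());
```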

πŸ“ File Structure

chat-hub/
β”œβ”€β”€ server.js                    # Main server application (160KB)
β”œβ”€β”€ index.html                   # Web interface (80KB)
β”œβ”€β”€ package.json                 # Node.js configuration
β”œβ”€β”€ start-server.sh              # Server startup script
β”œβ”€β”€ stop-server.sh               # Server shutdown script
β”œβ”€β”€ start-monitor.sh             # Monitor startup script
β”œβ”€β”€ test-all-models.js           # AI model testing utility
β”œβ”€β”€ fix-issues.js                # Debug and fix utility
β”œβ”€β”€ README.md                    # This documentation
β”œβ”€β”€ LOGGING_README.md            # Logging configuration guide
β”œβ”€β”€ MODEL_IDENTITY_GUIDELINES.md # AI model identity guidelines
β”œβ”€β”€ README-MONITOR.md            # Monitoring system guide
β”œβ”€β”€ .env                         # Environment configuration
β”œβ”€β”€ env-example.txt              # Environment template
└── api-test-cache.json          # API test cache

πŸ”§ Configuration

Environment Variables

# Copy template and configure
cp env-example.txt .env

# Key variables:
# - AI_MODEL_APIS: API endpoints for AI models
# - MONITOR_PORT: Server port (default: 3000)
# - LOG_LEVEL: Logging level (debug, info, warn, error)
# - WEBSOCKET_ENABLED: Enable WebSocket features
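
A filled-in .env might look like the following. Values are placeholders, and the provider key names (OPENAI_API_KEY, ANTHROPIC_API_KEY) are assumptions; env-example.txt is the authoritative list:

```bash
# Example .env (placeholder values; provider key names are assumptions)
MONITOR_PORT=3000
LOG_LEVEL=info
WEBSOCKET_ENABLED=true
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
```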

AI Model Configuration

See MODEL_IDENTITY_GUIDELINES.md for complete AI model setup instructions.

πŸ“Š Monitoring Features

Real-time Monitoring

  • Live Dashboard: Web-based monitoring interface
  • WebSocket Updates: Real-time event streaming
  • Model Health: Continuous health checks
  • Performance Tracking: Response time and success metrics

Logging System

  • Structured Logging: JSON-formatted log entries
  • Multiple Levels: Debug, info, warn, error
  • File Rotation: Automatic log file management
  • Real-time Viewing: Live log streaming in web interface

See LOGGING_README.md for detailed logging configuration.
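
For illustration, a structured JSON log entry might look like this (the field names here are hypothetical, not the actual schema):

```json
{
  "timestamp": "2025-01-15T12:00:00.000Z",
  "level": "info",
  "component": "monitor",
  "message": "model_status changed",
  "model": "gpt-5"
}
```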

πŸ”— Integration

With HiveLLM Ecosystem

  • BIP-05 Protocol: Monitoring UMICP communications
  • Governance System: Tracking governance activities
  • AI Model Coordination: Central hub for model interactions
  • Development Support: Real-time development monitoring

WebSocket Events

// Connect to monitoring WebSocket
const ws = new WebSocket('ws://localhost:3000/ws/monitor');

// Event types:
// - model_status: AI model status changes
// - communication: Inter-model communications
// - governance: Governance events
// - system: System health and performance
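
Incoming events can be dispatched by type. The sketch below assumes each message is a JSON object with a type field and an event-specific payload; that envelope is an assumption, so check the server's WebSocket handler for the real schema:

```javascript
// Dispatch monitor events by their type field.
// The { type, payload } envelope is an assumed schema.
function handleMonitorEvent(event) {
  switch (event.type) {
    case 'model_status':
      return `model status: ${JSON.stringify(event.payload)}`;
    case 'communication':
      return `inter-model message: ${JSON.stringify(event.payload)}`;
    case 'governance':
      return `governance event: ${JSON.stringify(event.payload)}`;
    case 'system':
      return `system health: ${JSON.stringify(event.payload)}`;
    default:
      return `unknown event type: ${event.type}`;
  }
}

// Wire it to the socket:
// ws.onmessage = (msg) => console.log(handleMonitorEvent(JSON.parse(msg.data)));
```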

πŸ§ͺ Testing

Test AI Model Connections

# Test all 36 models (4 cursor-agent + 32 aider)
node test-all-models.js

# Results will show:
# βœ… Working models (with successful responses)
# ❌ Failed models (with error details)
# ⚠️  Skipped models (missing API keys)

# Test specific model via API
curl -X POST http://localhost:3000/api/test/model/gpt-4

# Check API cache
curl http://localhost:3000/api/test/cache

Model Types Explained

  • Cursor-Agent: Built-in models, always available, no API key required
  • Aider: External API models, require API keys in .env file
  • Configuration: Copy env-example.txt to .env and add your API keys

Debug Common Issues

# Run diagnostic and fix common problems
node fix-issues.js

# Check logs
tail -f server-debug.log
tail -f server-errors.log

πŸ“ˆ Performance

Server Performance

  • Response Time: <100ms for monitoring queries
  • WebSocket Latency: <10ms for real-time updates
  • Model Communication: Depends on AI provider response times
  • Memory Usage: ~50MB base + model response caching

Scalability

  • Concurrent Connections: Supports 100+ WebSocket connections
  • Model Polling: Configurable polling intervals
  • Cache Management: Automatic cleanup of old data
  • Log Rotation: Prevents disk space issues

πŸ”— Part of HiveLLM Ecosystem

This chat hub is part of the HiveLLM ecosystem and integrates with:

  • gov/: Governance monitoring and event tracking
  • ts-workspace/: TypeScript package communication monitoring
  • umicp/: Protocol communication monitoring
  • cursor-extension/: IDE integration for development monitoring

πŸ“„ License

MIT License - See ../LICENSE file for details.


Component: HiveLLM Chat Hub (formerly BIP-05 Monitor)
Purpose: Central communication and monitoring hub
Status: βœ… Operational with 36 AI model support
