# HiveLLM Chat Hub

AI Model Communication Hub: a real-time monitoring and interaction system for the HiveLLM ecosystem.
The HiveLLM Chat Hub provides a centralized communication and monitoring interface for AI model interactions across the ecosystem. Originally part of BIP-05 monitoring, it now serves as the primary interface for:
- Real-time Monitoring: Live tracking of AI model interactions
- Model Communication: Direct interface with 36 AI models (4 cursor-agent + 32 aider)
- Activity Tracking: Live comment and discussion monitoring
- WebSocket Integration: Real-time updates and notifications
- Hybrid AI Support: Built-in cursor-agent plus external aider API integration
Requirements:

- Node.js 18+
- npm or pnpm
```bash
cd chat-hub
npm install

# Start the monitoring server
npm start
# or
./start-server.sh

# Access at http://localhost:3000
```
```bash
# Copy environment template
cp env-example.txt .env

# Configure AI model access
# Edit .env with your API keys and model configurations
```
Cursor-agent models (built-in):

- `auto`: Automatic model selection
- `gpt-5`: OpenAI GPT-5 (latest)
- `sonnet-4`: Anthropic Claude Sonnet 4
- `opus-4.1`: Anthropic Claude Opus 4.1
Aider models (external APIs):

- OpenAI (8): chatgpt-4o-latest, gpt-4o/mini, gpt-4o-search-preview, gpt-5-mini, gpt-4.1-mini, o1-mini, gpt-4-turbo
- Anthropic (7): claude-4/sonnet-4-20250514, claude-3-7-sonnet-latest, claude-3-5 series, claude-3-opus-latest
- Google Gemini (5): gemini-2.0-flash, gemini-2.5-pro/flash, gemini-1.5-pro/flash-latest
- xAI Grok (5): grok-4/3-latest, grok-3-fast/mini-latest, grok-code-fast-1
- DeepSeek (4): deepseek-chat, deepseek-r1, deepseek-reasoner, deepseek-v3
- Groq (3): llama-3.1/3.3 variants
Run `node test-all-models.js` to test connectivity to all 36 models.
- Real-time Updates: WebSocket-based live updates
- File Watching: Automatic refresh on governance file changes
- Comment Tracking: Live discussion and comment monitoring
- Model Status: Health and availability monitoring
- Performance Metrics: Response time and success rate tracking
- Dashboard: Real-time activity overview
- Model Interface: Direct communication with AI models
- Monitoring Console: Live log and event tracking
- API Testing: Built-in API testing and validation
```bash
# Development
npm start                  # Start development server
npm run dev                # Development mode with hot reload
npm test                   # Run test suite

# Server management
./start-server.sh          # Start production server
./stop-server.sh           # Stop server gracefully
./start-monitor.sh         # Start monitoring only

# Testing and debugging
node test-all-models.js    # Test all AI model connections
node fix-issues.js         # Debug and fix common issues
```
```text
# Health check
GET /health

# Model communication
POST /api/models/:modelId/chat
GET /api/models/status

# Monitoring
GET /api/monitor/status
WebSocket /ws/monitor

# Testing
GET /api/test/cache
POST /api/test/model/:modelId
```
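On Node 18+ (which ships a global `fetch`), the chat endpoint above can be exercised with a small helper. This is a sketch: the `message` request field and the JSON response shape are assumptions, not taken from `server.js`.

```javascript
// Sketch of calling the chat-hub REST API from Node 18+ (global fetch).
// Assumption: the server accepts { message } and replies with JSON.
const BASE = process.env.HUB_URL || 'http://localhost:3000';

// Build the chat endpoint path for a given model id.
function chatPath(modelId) {
  return `/api/models/${encodeURIComponent(modelId)}/chat`;
}

// POST a message to one model and return the parsed JSON reply.
async function askModel(modelId, message) {
  const res = await fetch(`${BASE}${chatPath(modelId)}`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message }),
  });
  if (!res.ok) throw new Error(`HTTP ${res.status} from ${chatPath(modelId)}`);
  return res.json();
}

// Example (requires a running hub):
// askModel('gpt-4', 'ping').then(console.log).catch(console.error);
```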
```text
chat-hub/
├── server.js                     # Main server application (160KB)
├── index.html                    # Web interface (80KB)
├── package.json                  # Node.js configuration
├── start-server.sh               # Server startup script
├── stop-server.sh                # Server shutdown script
├── start-monitor.sh              # Monitor startup script
├── test-all-models.js            # AI model testing utility
├── fix-issues.js                 # Debug and fix utility
├── README.md                     # This documentation
├── LOGGING_README.md             # Logging configuration guide
├── MODEL_IDENTITY_GUIDELINES.md  # AI model identity guidelines
├── README-MONITOR.md             # Monitoring system guide
├── .env                          # Environment configuration
├── env-example.txt               # Environment template
└── api-test-cache.json           # API test cache
```
```bash
# Copy template and configure
cp env-example.txt .env

# Key variables:
# - AI_MODEL_APIS: API endpoints for AI models
# - MONITOR_PORT: Server port (default: 3000)
# - LOG_LEVEL: Logging level (debug, info, warn, error)
# - WEBSOCKET_ENABLED: Enable WebSocket features
```
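A minimal sketch of reading these variables at startup might look as follows; the defaults mirror the values documented above, and the helper name is hypothetical.

```javascript
// Read monitor settings from the environment, falling back to the
// defaults documented in env-example.txt.
function loadConfig(env = process.env) {
  return {
    port: Number(env.MONITOR_PORT) || 3000,              // MONITOR_PORT (default 3000)
    logLevel: env.LOG_LEVEL || 'info',                   // debug | info | warn | error
    websocketEnabled: env.WEBSOCKET_ENABLED !== 'false', // on unless explicitly disabled
  };
}
```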
See MODEL_IDENTITY_GUIDELINES.md for complete AI model setup instructions.
- Live Dashboard: Web-based monitoring interface
- WebSocket Updates: Real-time event streaming
- Model Health: Continuous health checks
- Performance Tracking: Response time and success metrics
- Structured Logging: JSON-formatted log entries
- Multiple Levels: Debug, info, warn, error
- File Rotation: Automatic log file management
- Real-time Viewing: Live log streaming in web interface
See LOGGING_README.md for detailed logging configuration.
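As an illustration of the structured, JSON-formatted entries described above, a minimal log helper could look like this; the field names are assumptions, and the hub's real schema is documented in LOGGING_README.md.

```javascript
// Produce one JSON-formatted log line with a timestamp, level, and message.
// Field names here are illustrative, not the hub's actual schema.
function logEntry(level, message, extra = {}) {
  return JSON.stringify({
    timestamp: new Date().toISOString(),
    level,    // debug | info | warn | error
    message,
    ...extra, // e.g. { model: 'gpt-4', latencyMs: 42 }
  });
}

// logEntry('info', 'model responded', { model: 'gpt-4' })
```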
- BIP-05 Protocol: Monitoring UMICP communications
- Governance System: Tracking governance activities
- AI Model Coordination: Central hub for model interactions
- Development Support: Real-time development monitoring
```javascript
// Connect to the monitoring WebSocket
const ws = new WebSocket('ws://localhost:3000/ws/monitor');

// Event types:
// - model_status: AI model status changes
// - communication: inter-model communications
// - governance: governance events
// - system: system health and performance
```
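Handling those events might look like the following dispatcher; the `{ type, payload }` envelope is an assumption about the message format, not taken from the server code.

```javascript
// Turn one raw WebSocket message into a human-readable line.
// Assumption: messages arrive as JSON with { type, payload } fields.
function routeMonitorEvent(raw) {
  const msg = JSON.parse(raw);
  switch (msg.type) {
    case 'model_status':
      return `model ${msg.payload.model} is ${msg.payload.status}`;
    case 'communication':
      return `message from ${msg.payload.from} to ${msg.payload.to}`;
    case 'governance':
      return `governance event: ${msg.payload.event}`;
    case 'system':
      return `system: ${msg.payload.metric}`;
    default:
      return `unknown event type: ${msg.type}`;
  }
}

// Wire it to the socket:
// ws.onmessage = (ev) => console.log(routeMonitorEvent(ev.data));
```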
```bash
# Test all 36 models (4 cursor-agent + 32 aider)
node test-all-models.js

# Results will show:
# ✅ Working models (with successful responses)
# ❌ Failed models (with error details)
# ⚠️ Skipped models (missing API keys)
```
```bash
# Test a specific model via the API
curl -X POST http://localhost:3000/api/test/model/gpt-4

# Check the API cache
curl http://localhost:3000/api/test/cache
```
- Cursor-Agent: Built-in models, always available, no API key required
- Aider: External API models, which require API keys in the `.env` file
- Configuration: Copy `env-example.txt` to `.env` and add your API keys
```bash
# Run diagnostics and fix common problems
node fix-issues.js

# Check logs
tail -f server-debug.log
tail -f server-errors.log
```
- Response Time: <100ms for monitoring queries
- WebSocket Latency: <10ms for real-time updates
- Model Communication: Depends on AI provider response times
- Memory Usage: ~50MB base + model response caching
- Concurrent Connections: Supports 100+ WebSocket connections
- Model Polling: Configurable polling intervals
- Cache Management: Automatic cleanup of old data
- Log Rotation: Prevents disk space issues
This chat hub is part of the HiveLLM ecosystem and integrates with:
- gov/: Governance monitoring and event tracking
- ts-workspace/: TypeScript package communication monitoring
- umicp/: Protocol communication monitoring
- cursor-extension/: IDE integration for development monitoring
MIT License - See ../LICENSE file for details.
Component: HiveLLM Chat Hub (formerly BIP-05 Monitor)
Purpose: Central communication and monitoring hub
Status: ✅ Operational with 36 AI model support