LLM Playground - Multi-Provider AI Chat

A Streamlit application that lets you use multiple AI providers from a single interface:

  • Ollama (local models)
  • OpenAI (GPT-4, GPT-3.5, etc.)
  • AWS Bedrock (Claude, Titan, Llama, etc.)
  • Perplexity (Sonar, Llama)
  • Grok (X.AI)
  • Gemini (Google AI)

The application automatically detects configured providers and displays only accessible models. All interactions are tracked with Langfuse for comprehensive usage analytics and monitoring.

Features

  • Automatic detection of configured providers
  • Unified interface for all models
  • Real-time streaming for all APIs
  • Robust error handling
  • Persistent conversation history
  • Informative sidebar showing active providers
  • Langfuse integration for usage analytics and monitoring
  • Containerized Langfuse for easy deployment and data privacy

Quick Start

Option 1: Automated Setup (Recommended)

# Clone and setup everything automatically
git clone <repository-url>
cd llm-playground
./setup-langfuse.sh

Option 2: Manual Setup with Docker Compose

  1. Copy the environment configuration:
cp .env.example .env
  2. Update .env with your API keys:
# Edit the .env file with your actual API keys
# Leave the Langfuse keys at their defaults - you will generate real ones in step 4
  3. Start all services:
# Production mode
docker-compose up -d

# Or development mode with live reload
make dev-compose
  4. Configure Langfuse:

    • Open http://localhost:3000
    • Log in with admin@localhost / admin123
    • Go to Settings → API Keys
    • Generate new keys and update .env
    • Restart: docker-compose restart ai-chat
  5. Access the application at http://localhost:8501

Configuration

Environment Variables

Copy .env.example to .env and configure your API keys:

# AI Provider Configuration
OLLAMA_BASE_URL=http://host.docker.internal:11434
OPENAI_API_KEY=your_openai_api_key_here
AWS_ACCESS_KEY_ID=your_aws_access_key_here
AWS_SECRET_ACCESS_KEY=your_aws_secret_key_here
AWS_REGION=us-east-1
PERPLEXITY_API_KEY=your_perplexity_api_key_here
GROK_API_KEY=your_grok_api_key_here
GEMINI_API_KEY=your_gemini_api_key_here

# Langfuse Configuration (generated from UI)
LANGFUSE_ENABLED=true
LANGFUSE_SECRET_KEY=your_langfuse_secret_key_here
LANGFUSE_PUBLIC_KEY=your_langfuse_public_key_here
LANGFUSE_HOST=http://langfuse:3000

# Langfuse Database (defaults are fine; change if desired)
POSTGRES_DB=langfuse
POSTGRES_USER=langfuse
POSTGRES_PASSWORD=langfuse_password
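
Provider detection keys off these variables: a provider is offered in the UI only when its key is set. A minimal sketch of that pattern (the mapping and function name here are illustrative, not the exact code in main.py):

import os

# Map each provider to the environment variable that enables it.
# (Illustrative: the actual main.py may organize this differently.)
PROVIDER_ENV_VARS = {
    "OpenAI": "OPENAI_API_KEY",
    "AWS Bedrock": "AWS_ACCESS_KEY_ID",
    "Perplexity": "PERPLEXITY_API_KEY",
    "Grok": "GROK_API_KEY",
    "Gemini": "GEMINI_API_KEY",
}

def detect_providers() -> list[str]:
    """Return the providers whose API key is set and non-empty."""
    return [
        name for name, var in PROVIDER_ENV_VARS.items()
        if os.getenv(var, "").strip()
    ]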

Docker Compose Commands

Production Deployment

# Start all services
make up                    # or docker-compose up -d

# View logs
make logs-compose         # All services
make logs-app            # App only
make logs-langfuse       # Langfuse only

# Stop services
make down                # or docker-compose down

# Clean up completely
make clean-compose       # Remove everything

Development Mode

# Start development environment with live reload
make dev-compose

# Start in background
make dev-compose-detached

# View development logs
make dev-logs

# Stop development environment
make dev-down

Makefile Commands

# Display all available commands
make help

# Docker Compose operations
make up                  # Start all services
make down               # Stop all services
make up-build           # Build and start services
make logs-compose       # View all logs
make restart-compose    # Restart all services
make clean-compose      # Clean up everything

# Development
make dev-compose        # Start development environment
make dev-down          # Stop development environment
make dev-clean         # Clean development environment

# API Testing
make test-all-apis     # Test all configured APIs
make test-openai       # Test OpenAI
make test-aws          # Test AWS Bedrock
make test-perplexity   # Test Perplexity
make test-grok         # Test Grok
make test-gemini       # Test Gemini
make test-langfuse     # Test Langfuse

# Langfuse
make langfuse-url      # Display Langfuse URL and credentials
make setup-langfuse    # Show setup instructions

# Ollama (if running locally)
make ollama-status     # Check Ollama status
make ollama-models     # List Ollama models

Architecture

The application runs as three containers:

  1. PostgreSQL - Database for Langfuse
  2. Langfuse - Analytics and monitoring dashboard
  3. AI Chat App - Main Streamlit application
┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│   AI Chat App   │───▶│    Langfuse     │───▶│   PostgreSQL    │
│   (Streamlit)   │    │   (Analytics)   │    │   (Database)    │
│   Port: 8501    │    │   Port: 3000    │    │   Port: 5432    │
└─────────────────┘    └─────────────────┘    └─────────────────┘
         │
         ▼
  External AI APIs
  (OpenAI, AWS, etc.)
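
For orientation, here is a stripped-down sketch of that topology as a compose file. The shipped docker-compose.yml is the source of truth; image versions, volumes, Langfuse's own settings, and healthchecks are omitted:

# Sketch only - see docker-compose.yml for the real configuration.
services:
  postgres:
    image: postgres
    environment:
      POSTGRES_DB: langfuse
      POSTGRES_USER: langfuse
      POSTGRES_PASSWORD: langfuse_password

  langfuse:
    image: langfuse/langfuse
    depends_on:
      - postgres
    ports:
      - "3000:3000"

  ai-chat:
    build: .
    env_file: .env
    depends_on:
      - langfuse
    ports:
      - "8501:8501"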

Langfuse Monitoring

When enabled, Langfuse tracks:

  • 📊 Usage statistics per provider and model
  • 💰 Cost estimation and token usage
  • ⏱️ Response times and performance metrics
  • 🔍 Conversation flows and user interactions
  • 🚨 Error tracking and debugging information
  • 👥 Session management and user identification
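
As an illustration of how a single completion can be recorded, here is a minimal sketch against the v2 Langfuse Python SDK (the call_model helper is a placeholder, not the app's actual code):

from langfuse import Langfuse

langfuse = Langfuse()  # reads LANGFUSE_SECRET_KEY / LANGFUSE_PUBLIC_KEY / LANGFUSE_HOST

def call_model(provider: str, model: str, prompt: str) -> str:
    # Placeholder: the real app streams from the selected provider here.
    return f"[{provider}/{model}] (response)"

def tracked_completion(provider: str, model: str, prompt: str) -> str:
    # One trace per chat turn, one generation per model call.
    trace = langfuse.trace(name="chat", metadata={"provider": provider})
    generation = trace.generation(name="completion", model=model, input=prompt)
    output = call_model(provider, model, prompt)
    generation.end(output=output)  # records output and latency for the dashboard
    return output

if __name__ == "__main__":
    print(tracked_completion("openai", "gpt-4", "Hello!"))
    langfuse.flush()  # send buffered events before the process exits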

Accessing Langfuse Dashboard

  1. URL: http://localhost:3000
  2. Default login: admin@localhost / admin123
  3. Generate API keys: Settings → API Keys
  4. Update environment: Add keys to .env and restart

Troubleshooting

Common Issues

Problem                   Solution
------------------------  ---------------------------------------------------
No models available       Check API keys: make test-all-apis
Ollama connection error   Verify Ollama is running: make ollama-status
Langfuse not tracking     Check the configuration: make test-langfuse
Services won't start      Check the logs: make logs-compose
Permission denied         Make the script executable: chmod +x setup-langfuse.sh

Service Health Checks

# Check all services status
docker-compose ps

# Check specific service logs
docker-compose logs langfuse
docker-compose logs ai-chat
docker-compose logs postgres

# Restart specific service
docker-compose restart ai-chat

Network Issues

If you have network connectivity issues:

  1. Check Docker network: docker network ls
  2. Verify service communication: docker-compose exec ai-chat ping langfuse
  3. Reset network: make clean-compose then make up

Development

File Structure

├── main.py                    # Main Streamlit application
├── requirements.txt           # Python dependencies
├── Dockerfile                # Docker configuration
├── docker-compose.yml        # Production compose file
├── docker-compose.dev.yml    # Development compose file
├── Makefile                  # Build and deployment commands
├── setup-langfuse.sh         # Automated setup script
├── .env.example              # Environment template
├── .env                      # Your environment (not in git)
├── .dockerignore             # Docker ignore rules
└── README.md                 # This file

Adding New AI Providers

  1. Add client initialization in main.py
  2. Create a generator function with Langfuse tracking (see the sketch after this list)
  3. Update the get_available_models() function
  4. Add the new environment variables to the Docker configs
  5. Update the test commands in the Makefile
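
As a sketch of step 2, a streaming generator for a hypothetical OpenAI-compatible endpoint might look like the following (Perplexity and Grok both expose OpenAI-compatible APIs; the environment variable, base URL, and function name below are illustrative). Wrap it with the tracing pattern shown under Langfuse Monitoring to complete the step:

import os
from openai import OpenAI  # openai>=1.0

def generate_example_provider(messages: list[dict], model: str):
    """Yield response text chunks from a hypothetical OpenAI-compatible provider."""
    client = OpenAI(
        api_key=os.environ["EXAMPLE_PROVIDER_API_KEY"],   # hypothetical variable
        base_url="https://api.example-provider.com/v1",   # hypothetical endpoint
    )
    stream = client.chat.completions.create(
        model=model, messages=messages, stream=True
    )
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:  # some chunks carry no text (e.g. the final one)
            yield delta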

Local Development (without Docker)

# Install dependencies
pip install -r requirements.txt

# Set environment variables
export LANGFUSE_HOST=http://localhost:3000
# ... other variables ...

# Run application
streamlit run main.py

Security Considerations

Production Deployment

For production use:

  1. Change default passwords:

    POSTGRES_PASSWORD=secure_password_here
    NEXTAUTH_SECRET=secure_nextauth_secret_here
    SALT=secure_salt_here
  2. Use environment-specific configs:

    • Separate .env files for different environments
    • Use Docker secrets for sensitive data (see the sketch after this list)
    • Enable HTTPS for external access
  3. Network security:

    • Use internal networks for service communication
    • Expose only necessary ports
    • Consider using a reverse proxy
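
For instance, the Postgres password can be supplied through a file-backed Docker secret instead of a plain environment variable; the official postgres image reads *_FILE variants of its settings. A minimal sketch (the secret file path is illustrative):

# Sketch: supply the Postgres password via a Docker secret instead of .env.
services:
  postgres:
    image: postgres
    secrets:
      - postgres_password
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/postgres_password

secrets:
  postgres_password:
    file: ./secrets/postgres_password.txt   # illustrative path; keep out of git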

Data Privacy

  • Langfuse runs locally - your data never leaves your infrastructure
  • Conversation data stored in local PostgreSQL database
  • API keys managed through environment variables
  • No telemetry sent to external services (disabled by default)

License

This project is open source. Please check the LICENSE file for details.

Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Test with make test-all-apis
  5. Submit a pull request
