A Streamlit application that lets you chat with multiple AI providers from a single interface:
- Ollama (local models)
- OpenAI (GPT-4, GPT-3.5, etc.)
- AWS Bedrock (Claude, Titan, Llama, etc.)
- Perplexity (Sonar, Llama)
- Grok (X.AI)
- Gemini (Google AI)
The application automatically detects configured providers and displays only accessible models. All interactions are tracked with Langfuse for comprehensive usage analytics and monitoring.
- ✅ Automatic detection of configured providers (sketched below)
- ✅ Unified interface for all models
- ✅ Real-time streaming for all APIs
- ✅ Robust error handling
- ✅ Persistent conversation history
- ✅ Informative sidebar showing active providers
- ✅ Langfuse integration for usage analytics and monitoring
- ✅ Containerized Langfuse for easy deployment and data privacy
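Provider detection boils down to checking which credentials are present in the environment. Below is a minimal sketch of that idea, not the app's actual implementation; the helper name and the exact variable checks are illustrative:

```python
import os

# Illustrative mapping: a provider counts as "configured" only if all of its
# required environment variables are set (names match the .env template below).
REQUIRED_ENV = {
    "OpenAI": ["OPENAI_API_KEY"],
    "AWS Bedrock": ["AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY"],
    "Perplexity": ["PERPLEXITY_API_KEY"],
    "Grok": ["GROK_API_KEY"],
    "Gemini": ["GEMINI_API_KEY"],
    "Ollama": ["OLLAMA_BASE_URL"],
}

def detect_configured_providers() -> list[str]:
    """Return the providers whose required environment variables are all set."""
    return [
        provider
        for provider, keys in REQUIRED_ENV.items()
        if all(os.getenv(key) for key in keys)
    ]

if __name__ == "__main__":
    print("Configured providers:", detect_configured_providers())
```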
# Clone and set up everything automatically
git clone <repository-url>
cd llm-playground
./setup-langfuse.sh
- Copy the environment configuration:
cp .env.example .env
- Update `.env` with your API keys:
# Edit the .env file with your actual API keys
# Leave the Langfuse keys at their defaults - they will be generated later
- Start all services:
# Production mode
docker-compose up -d
# Or development mode with live reload
make dev-compose
- Configure Langfuse:
  - Open http://localhost:3000
  - Log in with admin@localhost / admin123
  - Go to Settings → API Keys
  - Generate new keys and update `.env`
  - Restart: `docker-compose restart ai-chat`
- Access the application (a quick reachability check is sketched below):
  - AI Chat: http://localhost:8501
  - Langfuse Dashboard: http://localhost:3000
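If you want to confirm both services are reachable before opening a browser, a quick check along these lines works (it assumes the default ports above and uses only the Python standard library):

```python
from urllib.request import urlopen
from urllib.error import URLError

# Default endpoints from the setup above
SERVICES = {
    "AI Chat": "http://localhost:8501",
    "Langfuse": "http://localhost:3000",
}

for name, url in SERVICES.items():
    try:
        with urlopen(url, timeout=5) as response:
            print(f"{name}: reachable (HTTP {response.status})")
    except URLError as exc:
        print(f"{name}: not reachable ({exc.reason})")
```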
Copy `.env.example` to `.env` and configure your API keys:
# AI Provider Configuration
OLLAMA_BASE_URL=http://host.docker.internal:11434
OPENAI_API_KEY=your_openai_api_key_here
AWS_ACCESS_KEY_ID=your_aws_access_key_here
AWS_SECRET_ACCESS_KEY=your_aws_secret_key_here
AWS_REGION=us-east-1
PERPLEXITY_API_KEY=your_perplexity_api_key_here
GROK_API_KEY=your_grok_api_key_here
GEMINI_API_KEY=your_gemini_api_key_here
# Langfuse Configuration (generated from UI)
LANGFUSE_ENABLED=true
LANGFUSE_SECRET_KEY=your_langfuse_secret_key_here
LANGFUSE_PUBLIC_KEY=your_langfuse_public_key_here
LANGFUSE_HOST=http://langfuse:3000
# Langfuse Database (optional to change)
POSTGRES_DB=langfuse
POSTGRES_USER=langfuse
POSTGRES_PASSWORD=langfuse_password
# Start all services
make up # or docker-compose up -d
# View logs
make logs-compose # All services
make logs-app # App only
make logs-langfuse # Langfuse only
# Stop services
make down # or docker-compose down
# Clean up completely
make clean-compose # Remove everything
# Start development environment with live reload
make dev-compose
# Start in background
make dev-compose-detached
# View development logs
make dev-logs
# Stop development environment
make dev-down
# Display all available commands
make help
# Docker Compose operations
make up # Start all services
make down # Stop all services
make up-build # Build and start services
make logs-compose # View all logs
make restart-compose # Restart all services
make clean-compose # Clean up everything
# Development
make dev-compose # Start development environment
make dev-down # Stop development environment
make dev-clean # Clean development environment
# API Testing
make test-all-apis # Test all configured APIs
make test-openai # Test OpenAI
make test-aws # Test AWS Bedrock
make test-perplexity # Test Perplexity
make test-grok # Test Grok
make test-gemini # Test Gemini
make test-langfuse # Test Langfuse
# Langfuse
make langfuse-url # Display Langfuse URL and credentials
make setup-langfuse # Show setup instructions
# Ollama (if running locally)
make ollama-status # Check Ollama status
make ollama-models # List Ollama models
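Independently of the Makefile targets, you can also query Ollama's REST API directly to see which models are pulled locally; a minimal sketch, assuming Ollama is running on its default port 11434:

```python
import json
from urllib.request import urlopen

# Ollama's default local endpoint; GET /api/tags lists the locally pulled models.
OLLAMA_BASE_URL = "http://localhost:11434"

with urlopen(f"{OLLAMA_BASE_URL}/api/tags", timeout=5) as response:
    data = json.load(response)

for model in data.get("models", []):
    print(model["name"])
```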
The application runs with three containers:
- PostgreSQL - Database for Langfuse
- Langfuse - Analytics and monitoring dashboard
- AI Chat App - Main Streamlit application
┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│   AI Chat App   │───▶│    Langfuse     │───▶│   PostgreSQL    │
│   (Streamlit)   │    │   (Analytics)   │    │   (Database)    │
│   Port: 8501    │    │   Port: 3000    │    │   Port: 5432    │
└─────────────────┘    └─────────────────┘    └─────────────────┘
         │
         ▼
  External AI APIs
(OpenAI, AWS, etc.)
When enabled, Langfuse tracks the following (a tracing sketch follows this list):
- 📊 Usage statistics per provider and model
- 💰 Cost estimation and token usage
- ⏱️ Response times and performance metrics
- 🔍 Conversation flows and user interactions
- 🚨 Error tracking and debugging information
- 👥 Session management and user identification
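Roughly, each provider call is wrapped in a Langfuse trace with one generation. The sketch below is a simplified illustration, not the app's exact code; it assumes the v2-style Langfuse Python SDK (trace/generation client objects), and `call_provider` is a hypothetical stand-in for the real provider call:

```python
import os
from langfuse import Langfuse  # assumes the v2-style Langfuse Python SDK

# Client configured from the same variables used in .env
langfuse = Langfuse(
    public_key=os.environ["LANGFUSE_PUBLIC_KEY"],
    secret_key=os.environ["LANGFUSE_SECRET_KEY"],
    host=os.getenv("LANGFUSE_HOST", "http://localhost:3000"),
)

def call_provider(provider: str, model: str, messages: list[dict]) -> str:
    """Hypothetical stand-in for the real provider call (OpenAI, Bedrock, etc.)."""
    return "stubbed reply"

def traced_completion(provider: str, model: str, messages: list[dict]) -> str:
    """Record one provider call as a Langfuse trace containing a single generation."""
    trace = langfuse.trace(name="chat", metadata={"provider": provider})
    generation = trace.generation(name=f"{provider}-completion", model=model, input=messages)
    try:
        reply = call_provider(provider, model, messages)
        generation.end(output=reply)
        return reply
    except Exception as exc:
        generation.end(level="ERROR", status_message=str(exc))
        raise
    finally:
        langfuse.flush()  # ensure events reach the dashboard promptly
```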
- URL: http://localhost:3000
- Default login: admin@localhost / admin123
- Generate API keys: Settings → API Keys
- Update environment: add the keys to `.env` and restart
| Problem | Solution |
|---|---|
| No models available | Check API keys: `make test-all-apis` |
| Ollama connection error | Verify Ollama is running: `make ollama-status` |
| Langfuse not tracking | Check configuration: `make test-langfuse` |
| Services won't start | Check logs: `make logs-compose` |
| Permission denied | Make the script executable: `chmod +x setup-langfuse.sh` |
# Check all services status
docker-compose ps
# Check specific service logs
docker-compose logs langfuse
docker-compose logs ai-chat
docker-compose logs postgres
# Restart specific service
docker-compose restart ai-chat
If you have network connectivity issues:
- Check the Docker networks: `docker network ls`
- Verify service communication: `docker-compose exec ai-chat ping langfuse`
- Reset the network: `make clean-compose`, then `make up`
├── main.py # Main Streamlit application
├── requirements.txt # Python dependencies
├── Dockerfile # Docker configuration
├── docker-compose.yml # Production compose file
├── docker-compose.dev.yml # Development compose file
├── Makefile # Build and deployment commands
├── setup-langfuse.sh # Automated setup script
├── .env.example # Environment template
├── .env # Your environment (not in git)
├── .dockerignore # Docker ignore rules
└── README.md # This file
- Add client initialization in `main.py`
- Create a generator function with Langfuse tracking (see the sketch below)
- Update the `get_available_models()` function
- Add environment variables to the Docker configs
- Update the test commands in the Makefile
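As an illustration of the first three items, here is roughly what an OpenAI-compatible provider could look like. Everything named "Acme" (the environment variable, base URL, and model) is hypothetical, and only `get_available_models()` comes from the existing code; the return shape of `acme_models()` is also just an example:

```python
import os
from openai import OpenAI  # assumes the openai package is already in requirements.txt

# 1. Client initialization for a hypothetical OpenAI-compatible provider ("Acme")
ACME_API_KEY = os.getenv("ACME_API_KEY")  # hypothetical environment variable
acme_client = (
    OpenAI(api_key=ACME_API_KEY, base_url="https://api.acme.example/v1")
    if ACME_API_KEY
    else None
)

# 2. Streaming generator function (Langfuse tracking omitted here for brevity;
#    see the tracing sketch earlier in this README)
def generate_acme(model: str, messages: list[dict]):
    stream = acme_client.chat.completions.create(
        model=model, messages=messages, stream=True
    )
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:
            yield delta

# 3. Only expose the models when the provider is configured, e.g. from
#    get_available_models()
def acme_models() -> dict:
    return {"Acme: acme-chat": ("acme", "acme-chat")} if acme_client else {}
```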
# Install dependencies
pip install -r requirements.txt
# Set environment variables
export LANGFUSE_HOST=http://localhost:3000
# ... other variables ...
# Run application
streamlit run main.py
For production use:
- Change the default passwords:
POSTGRES_PASSWORD=secure_password_here
NEXTAUTH_SECRET=secure_nextauth_secret_here
SALT=secure_salt_here
- Use environment-specific configs:
  - Separate `.env` files for different environments
  - Use Docker secrets for sensitive data
  - Enable HTTPS for external access
- Network security:
  - Use internal networks for service communication
  - Expose only the necessary ports
  - Consider using a reverse proxy
- Langfuse runs locally - your data never leaves your infrastructure
- Conversation data stored in local PostgreSQL database
- API keys managed through environment variables
- No telemetry sent to external services (disabled by default)
This project is open source. Please check the LICENSE file for details.
- Fork the repository
- Create a feature branch
- Make your changes
- Test with `make test-all-apis`
- Submit a pull request
- Issues: GitHub Issues
- Documentation: This README
- Langfuse docs: https://langfuse.com/docs