A comprehensive Streamlit application that serves as an intelligent chatbot for FleetPulse package update monitoring. The application integrates multiple GenAI APIs and leverages the Model Context Protocol (MCP) to help you analyze package update history and audit your fleet's update patterns.
FleetPulse is a lightweight dashboard for monitoring Linux package updates across your fleet. It receives update reports from Ansible playbooks and provides read-only analysis of historical update data.
- Automatic Expert Selection: No manual selection needed - the system automatically determines the best expert based on your query
- Context-Aware: Considers conversation history and patterns for better routing decisions
- 95%+ Accuracy: Routing accuracy measured at 95%+ across the Linux admin, Ansible, Updates, and FleetPulse domains in the included test suite
- Transparent Decisions: Shows confidence levels and reasoning behind expert selection
- Override Options: Manual expert selection when needed
- OpenAI GPT-4: Industry-leading language model
- Anthropic Claude: Advanced reasoning and safety
- Google Gemini: Multimodal AI capabilities
- Azure OpenAI: Enterprise-grade deployment
- Ollama: Local/private model hosting
- Linux System Admin: Package management, troubleshooting, system configuration
- Ansible Automation: Playbooks, roles, infrastructure as code
- Package Update Analyst: Update history analysis, compliance reporting, trend analysis
- FleetPulse Operations: Basic fleet monitoring, update history analysis, package information queries
- Microsoft Semantic Kernel Integration: Unified AI orchestration across providers
- Model Context Protocol (MCP): Direct integration with FleetPulse backend for read-only update history analysis
- Package Update Monitoring: View and analyze package update reports from Ansible playbooks
- Conversation Management: Persistent chat history with SQLite
- Tool Integration: Automated analysis of package update reports
- Docker Deployment: Complete containerization with docker-compose
- Python 3.11+
- Docker and Docker Compose (for containerized deployment)
- At least one configured AI provider (OpenAI, Anthropic, Google, Azure, or Ollama)
- Access to FleetPulse backend API (for package update history analysis)
1. Clone the repository:

   ```bash
   git clone https://github.com/wesback/fleetpulse-chat.git
   cd fleetpulse-chat
   ```

2. Create a virtual environment:

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   ```

3. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

4. Configure environment variables:

   ```bash
   cp .env.example .env
   # Edit .env with your configuration
   ```

5. Run the application:

   ```bash
   streamlit run app.py
   ```
Using docker-compose (recommended):

```bash
# Clone and configure
git clone https://github.com/wesback/fleetpulse-chat.git
cd fleetpulse-chat

# Create .env file if not present
cp .env.example .env  # If .env.example does not exist, create it based on the README variables
# Edit .env with your settings

# Start services
docker-compose up -d
```

Using Docker directly:

```bash
# Build image
docker build -t fleetpulse-chat .

# Run container
docker run -p 8501:8501 \
  -e OPENAI_API_KEY=your_key \
  -e FLEETPULSE_API_URL=http://localhost:8000 \
  fleetpulse-chat
```
```bash
# AI Provider Selection
GENAI_PROVIDER=openai  # openai, anthropic, google, azure, ollama

# AI Provider API Keys
OPENAI_API_KEY=sk-your-openai-key
ANTHROPIC_API_KEY=sk-ant-api03-your-anthropic-key
GOOGLE_API_KEY=your-google-api-key
GOOGLE_MODEL=gemini-1.5-flash  # Optional: gemini-1.5-flash, gemini-1.5-pro, gemini-1.0-pro
AZURE_OPENAI_KEY=your-azure-key
AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com/
AZURE_DEPLOYMENT_NAME=gpt-4  # Optional: your Azure OpenAI deployment name

# FleetPulse Integration
FLEETPULSE_API_URL=http://localhost:8000
FLEETPULSE_MCP_SERVER=./fleetpulse-mcp

# Application Settings
STREAMLIT_SERVER_PORT=8501
LOG_LEVEL=INFO
ENABLE_DEBUG=false
SECRET_KEY=your-secret-key

# Database
DATABASE_URL=sqlite:///fleetpulse_chat.db

# Local AI (Ollama)
OLLAMA_BASE_URL=http://localhost:11434
```
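The repository's `config/settings.py` isn't shown here, but reading these variables typically looks like the following sketch. The `load_settings` helper and its defaults are illustrative, not the actual implementation:

```python
import os

def load_settings(env=os.environ):
    """Read FleetPulse chat settings from environment variables,
    falling back to the defaults shown in the README."""
    return {
        "provider": env.get("GENAI_PROVIDER", "openai"),
        "fleetpulse_api_url": env.get("FLEETPULSE_API_URL", "http://localhost:8000"),
        "database_url": env.get("DATABASE_URL", "sqlite:///fleetpulse_chat.db"),
        "log_level": env.get("LOG_LEVEL", "INFO"),
        "enable_debug": env.get("ENABLE_DEBUG", "false").lower() == "true",
        "ollama_base_url": env.get("OLLAMA_BASE_URL", "http://localhost:11434"),
    }

# Example: override the provider without touching the real environment
settings = load_settings({"GENAI_PROVIDER": "ollama"})
```

Passing a plain dict instead of `os.environ` makes the configuration easy to unit-test.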
- OpenAI: Get an API key from https://platform.openai.com/api-keys and set `OPENAI_API_KEY=sk-your-key`
- Anthropic: Get an API key from https://console.anthropic.com/ and set `ANTHROPIC_API_KEY=sk-ant-api03-your-key`
- Google: Get an API key from https://makersuite.google.com/app/apikey and set `GOOGLE_API_KEY=your-key`
- Azure OpenAI: Set up an Azure OpenAI resource, then set `AZURE_OPENAI_KEY=your-key` and `AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com/`
- Ollama: Install Ollama (https://ollama.ai/), start the service with `ollama serve`, pull a model with `ollama pull llama2`, and set `OLLAMA_BASE_URL=http://localhost:11434`
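To sanity-check a local Ollama install outside the chatbot, its HTTP API can be called directly. This sketch uses only the standard library; `/api/generate` is Ollama's completion endpoint, and the actual network call requires `ollama serve` to be running:

```python
import json
import urllib.request

def build_generate_payload(prompt, model="llama2"):
    """Payload for Ollama's /api/generate endpoint (non-streaming)."""
    return {"model": model, "prompt": prompt, "stream": False}

def ollama_generate(prompt, model="llama2", base_url="http://localhost:11434"):
    """Send one completion request to a local Ollama server."""
    data = json.dumps(build_generate_payload(prompt, model)).encode()
    req = urllib.request.Request(
        f"{base_url}/api/generate",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:  # requires `ollama serve` running
        return json.loads(resp.read())["response"]
```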
1. Access the application at http://localhost:8501
2. Select an AI provider from the dropdown
3. Choose an expert mode based on your needs:
   - General Assistant: Multi-purpose FleetPulse helper
   - Linux System Admin: Package management and system operations
   - Ansible Automation: Playbook development and automation
   - Package Update Manager: Fleet update coordination
   - FleetPulse Operations: Platform-specific operations
4. Start chatting with questions like:
   - "What packages were updated on server-01 last week?"
   - "Show me the update history for nginx across all hosts"
   - "Which hosts have been updated most recently?"
   - "Generate a compliance report for security updates"
The Linux System Admin expert is specialized for:
- Package management across distributions (apt, yum, dnf, pacman)
- System monitoring and troubleshooting
- Security hardening and best practices
- Performance optimization
The Ansible Automation expert specializes in:
- Playbook development and best practices
- Inventory management
- Role creation and Galaxy usage
- CI/CD integration
The Package Update Analyst is focused on:
- Package update history analysis and trends
- Compliance and security patch reporting
- Update pattern analysis across the fleet
- Audit trail generation and review
The FleetPulse Operations expert is specialized for:
- FleetPulse API operations and update report analysis
- Package update history queries and insights
- Update trend analysis and reporting
- Troubleshooting platform issues
The chatbot automatically detects when to use these FleetPulse tools:
- list_hosts(): List all hosts with basic metadata
- get_host_details(hostname): Detailed host information and update history
- get_update_reports(hostname, days): Package update reports with filtering
- list_packages(search): Search packages across the fleet
- get_package_details(package_name): Detailed package information
- get_fleet_statistics(): Aggregate statistics and activity metrics
- health_check(): Backend and MCP server health status
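Under the hood these tools translate into HTTP calls against the FleetPulse backend. The exact routes aren't documented in this README, so the `/api/reports` path and parameter names below are hypothetical placeholders that show the shape of such a query builder:

```python
from urllib.parse import urlencode

FLEETPULSE_API_URL = "http://localhost:8000"  # matches FLEETPULSE_API_URL above

def update_reports_url(hostname=None, days=None, base_url=FLEETPULSE_API_URL):
    """Build the query URL for get_update_reports(hostname, days).
    The /api/reports path and parameter names are illustrative;
    check the FleetPulse backend for the real routes."""
    params = {}
    if hostname:
        params["hostname"] = hostname
    if days:
        params["days"] = days
    query = f"?{urlencode(params)}" if params else ""
    return f"{base_url}/api/reports{query}"
```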
Toggle the interactive dashboard to view:
- Package update history and trends
- Host update activity charts
- Update frequency visualization
- Compliance and audit summaries
- Package distribution analysis
The FleetPulse chatbot features an advanced expert routing system that automatically selects the most appropriate expert based on your query content. No manual selection needed!
- Keyword Analysis: Detects domain-specific terms and commands
- Pattern Recognition: Recognizes code snippets, API calls, and command syntax
- Context Awareness: Considers conversation history for continuity
- Confidence Scoring: Provides transparency into routing decisions
- "How do I check disk space on my servers?" → Linux System Admin (85% confidence). Keywords: disk, servers, check
- "Write an Ansible playbook to install nginx" → Ansible Automation Expert (92% confidence). Keywords: ansible, playbook, install
- "Schedule security patches for my fleet" → Package Update Manager (88% confidence). Keywords: schedule, security, patches, fleet
- "Get the status of all hosts in FleetPulse" → FleetPulse Operations (95% confidence). Keywords: status, hosts, fleetpulse
- Low Confidence Handling: Shows alternative experts when confidence is low
- Context Continuity: Maintains expert selection across related questions
- Manual Override: Option to manually select different expert
- Routing Insights: Detailed analysis of routing decisions (optional)
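A minimal illustration of keyword-based routing as described above. The keyword sets and scoring are invented for this sketch and are not the project's actual routing logic:

```python
import re

# Illustrative keyword sets, one per expert domain
EXPERT_KEYWORDS = {
    "Linux System Admin": {"disk", "apt", "yum", "dnf", "systemd", "servers", "kernel"},
    "Ansible Automation": {"ansible", "playbook", "role", "inventory", "galaxy"},
    "Package Update Manager": {"update", "patch", "patches", "schedule", "security"},
    "FleetPulse Operations": {"fleetpulse", "hosts", "status", "report", "fleet"},
}

def route_query(query):
    """Score each expert by keyword overlap and return (expert, confidence)."""
    words = set(re.findall(r"[a-z0-9]+", query.lower()))
    scores = {expert: len(words & kws) for expert, kws in EXPERT_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    total = sum(scores.values()) or 1
    return best, scores[best] / total

expert, confidence = route_query("Write an Ansible playbook to install nginx")
```

A production router would add pattern recognition and conversation context on top of this kind of scoring.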
Run the routing tests to see performance:

```bash
python examples/test_expert_routing.py
```

Expected accuracy: 95%+ across all expert domains.

Try the interactive routing demo:

```bash
python examples/interactive_routing_demo.py
```
This chatbot integrates with FleetPulse to provide intelligent analysis of your fleet's package update data:
- Package Update History: Records of what packages were updated, when, and on which hosts
- Host Information: Basic metadata about servers in your fleet (OS, last update, etc.)
- Package Search: Find packages across your fleet and see update history
- Simple Analytics: Basic statistics about update activity
- ❌ Real-time monitoring: It's a historical record, not live system monitoring
- ❌ Update scheduling: It tracks updates but doesn't schedule or execute them
- ❌ System metrics: No CPU, memory, or performance monitoring
- ❌ Security scanning: No vulnerability assessment or security analysis
- ❌ Package management: No installation, removal, or dependency management
- health_check: Check if the FleetPulse backend is accessible
- list_hosts: Get the list of hosts with basic metadata
- get_host_details: Get detailed information about a specific host
- get_update_reports: Retrieve package update reports with filtering
- get_host_reports: Get update reports for a specific host
- list_packages: List packages across the fleet with search
- get_package_details: Get detailed package information
- get_fleet_statistics: Basic aggregate statistics
- search: Search across hosts, packages, and reports
```
fleetpulse-chatbot/
├── app.py                   # Main Streamlit application
├── config/
│   ├── __init__.py          # Configuration management
│   ├── settings.py          # Settings with environment variables
│   └── prompts.py           # System prompt definitions
├── core/
│   ├── genai_manager.py     # Multi-provider AI coordination
│   ├── mcp_client.py        # Model Context Protocol integration
│   └── conversation.py      # Chat history management
├── ui/
│   ├── components.py        # Custom Streamlit components
│   └── dashboard.py         # Fleet dashboard integration
├── utils/
│   ├── validators.py        # Input validation utilities
│   └── helpers.py           # Helper functions
└── tests/                   # Test suite
```
- User Input → Streamlit UI
- Message Processing → GenAI Manager
- Tool Detection → MCP Client
- FleetPulse Integration → Backend API
- AI Response → Selected Provider (OpenAI/Anthropic/etc.)
- Response Display → Streamlit UI
- Conversation Storage → SQLite Database
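The flow above can be sketched as a small pipeline. Every function name here is a hypothetical stand-in for the real modules (`genai_manager`, `mcp_client`, `conversation`):

```python
def handle_message(user_input, detect_tools, call_provider, store):
    """One pass through the request flow: tools, provider, persistence."""
    tool_results = detect_tools(user_input)          # MCP client: optional FleetPulse data
    reply = call_provider(user_input, tool_results)  # GenAI manager: selected provider
    store(user_input, reply)                         # conversation: persist to SQLite
    return reply

# Wiring with trivial stand-ins to show the shape of the pipeline:
history = []
reply = handle_message(
    "list hosts",
    detect_tools=lambda q: ["host-a", "host-b"] if "hosts" in q else [],
    call_provider=lambda q, tools: f"Found {len(tools)} hosts",
    store=lambda q, r: history.append((q, r)),
)
```

Injecting the three stages as callables keeps each one independently testable.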
Run the test suite:
```bash
# Unit tests
pytest tests/

# With coverage
pytest --cov=. tests/

# Specific test categories
pytest tests/test_genai.py -v
pytest tests/test_mcp.py -v
pytest tests/test_ui.py -v
```
- API Keys: Never commit API keys to version control
- Environment Variables: Use secure secret management in production
- Input Validation: All user inputs are sanitized
- Rate Limiting: Built-in protection against API abuse
- Docker Secrets: Use Docker secrets for sensitive data in production
```bash
# Use Docker secrets
echo "your-openai-key" | docker secret create openai_api_key -
```

```yaml
# Update docker-compose.yml to use secrets
services:
  fleetpulse-chat:
    secrets:
      - openai_api_key
    environment:
      - OPENAI_API_KEY_FILE=/run/secrets/openai_api_key
```
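On the application side, the `*_FILE` convention is usually resolved with a small helper like this (a sketch; the real code may differ):

```python
import os

def read_secret(name, env=os.environ):
    """Resolve a secret either from NAME directly or from NAME_FILE,
    the Docker-secrets pattern shown above."""
    file_path = env.get(f"{name}_FILE")
    if file_path and os.path.exists(file_path):
        with open(file_path) as fh:
            return fh.read().strip()
    return env.get(name)
```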
- Application health endpoint: http://localhost:8501/_stcore/health
- Component status available in sidebar
- Docker health checks included
Logs are available at different levels:
```bash
# Set log level
export LOG_LEVEL=DEBUG

# View logs
docker-compose logs -f fleetpulse-chat
```
- Fork the repository
- Create a feature branch: `git checkout -b feature-name`
- Make changes following the coding standards
- Add tests for new functionality
- Run the tests: `pytest`
- Submit a pull request
- Follow PEP 8 for Python code
- Use type hints for all functions
- Add docstrings for all modules, classes, and functions
- Implement comprehensive error handling
- Write tests for new features
This project is licensed under the MIT License - see the LICENSE file for details.
No AI providers available:
- Verify at least one API key is configured
- Check API key format and validity
- Test connectivity to AI provider APIs
FleetPulse connection failed:
- Verify `FLEETPULSE_API_URL` is correct
- Check that the FleetPulse backend is running
- Verify network connectivity
Database errors:
- Ensure write permissions for database file
- Check disk space for SQLite database
- Verify database URL format
Docker issues:
- Check Docker and docker-compose versions
- Verify port availability (8501)
- Review container logs: `docker-compose logs`
- Check the Issues page
- Review logs for detailed error messages
- Ensure all requirements are met
- Test with minimal configuration first
- Advanced conversation threading
- Custom tool development framework
- Integration with more monitoring systems
- Mobile-responsive UI improvements
- Advanced analytics and reporting
- Multi-tenant support
- Plugin architecture
- Voice interface integration
For questions and support:
- GitHub Issues: https://github.com/wesback/fleetpulse-chat/issues
- Documentation: This README and inline code documentation
- Examples: See the `/examples` directory for usage scenarios