🚀 Production-ready workflow automation system powered by LLMs
A Docker-based project for creating and executing automated workflows with Large Language Models. Built with Python and Docker, it includes a Mock API for testing without real API keys.
- 🐳 Docker-based - Containerized for easy deployment
- 🧪 Mock API - Test without real API keys using the built-in OpenRouter simulator
- ⚡ 5 Built-in Actions - Print, Chat, HTTP, File Read/Write
- 📊 Monitoring Ready - Prometheus metrics support
- 🔒 Production Ready - Multi-stage builds, security hardening, health checks
- 🎯 Easy to Use - YAML-based workflow definition
- 🔌 Extensible - Plugin system for custom actions
```bash
# Clone the repository
git clone https://github.com/YOUR_USERNAME/llms-os.git
cd llms-os

# Build the complete project from YAML configuration
python3 build_project.py
```

This generates the complete project in the `llms-os-project/` directory.
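Conceptually, the build step parses the YAML source configuration and writes out the project tree. A minimal sketch of that idea, with an invented miniature schema (the real schema lives in `llms-os-docker-project-enhanced.yaml` and is richer than this):

```python
import pathlib
import tempfile

# Invented stand-in for the parsed YAML configuration: a mapping of
# relative paths to file contents.
parsed = {"files": {"workflows/test_basic.yaml": "metadata:\n  title: demo\n"}}

# Write every configured file under a fresh output directory.
root = pathlib.Path(tempfile.mkdtemp()) / "llms-os-project"
for rel, content in parsed["files"].items():
    dest = root / rel
    dest.parent.mkdir(parents=True, exist_ok=True)
    dest.write_text(content)

print(sorted(p.name for p in root.rglob("*.yaml")))  # ['test_basic.yaml']
```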
```bash
# Navigate to the generated project
cd llms-os-project

# Start the Mock API server
docker-compose up -d mock-api

# Wait for it to be ready
sleep 3

# Run a test workflow using the convenience script
./run-workflow.sh

# Or run manually
docker run --rm \
  --network llms-os-project_llms-network \
  -e OPENROUTER_API_URL=http://llms-mock-api:8000/api/v1 \
  -e OPENROUTER_API_KEY=sk-simulated-key \
  -v $(pwd)/workflows:/app/workflows \
  llms-os:latest workflows/test_basic.yaml
```

You should see output like:
```
🚀 Starting basic test workflow...
Environment: INFO
Health check status: 200
AI Response: I'm a mock AI assistant helping you test your workflow.
✅ Basic test completed successfully!
```
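The `OPENROUTER_API_URL` and `OPENROUTER_API_KEY` variables passed to `docker run` are how the engine is pointed at either the mock or the real API. A sketch of how such configuration might be read inside the container (the `endpoint` helper is illustrative, not the project's actual code):

```python
import os

# Defaults mirror the mock-API values from the quick-start command above.
os.environ.setdefault("OPENROUTER_API_URL", "http://llms-mock-api:8000/api/v1")
os.environ.setdefault("OPENROUTER_API_KEY", "sk-simulated-key")

def endpoint(path: str) -> str:
    """Join the configured base URL with an API path."""
    base = os.environ["OPENROUTER_API_URL"].rstrip("/")
    return f"{base}/{path.lstrip('/')}"

print(endpoint("chat/completions"))
```

Because only the environment changes, the same image runs against the simulator or the real OpenRouter endpoint.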
- `GETTING_STARTED.md` - Complete setup guide
- `llms-os-project/README.md` - Full project documentation (generated)
- `llms-os-project/USAGE.md` - Quick reference guide (generated)
```
llms-os/                                  # This repository
├── build_project.py                      # Build script
├── llms-os-docker-project-enhanced.yaml  # Source configuration
├── GETTING_STARTED.md                    # Setup guide
└── llms-os-project/                      # Generated project
    ├── docker-compose.yml                # Service orchestration
    ├── Makefile                          # Build automation
    ├── run-workflow.sh                   # Quick run script
    ├── llms-os/                          # Main application
    │   ├── Dockerfile                    # Production image (Alpine, 168MB)
    │   ├── requirements.txt
    │   └── src/LLMs_OS/
    │       ├── core.py                   # Workflow engine
    │       ├── cli.py                    # CLI interface
    │       ├── registry.py               # Action registry
    │       └── actions/                  # Built-in actions
    ├── mock-api/                         # Mock OpenRouter API
    │   ├── Dockerfile                    # API image (146MB)
    │   └── app.py                        # Flask-based mock server
    └── workflows/                        # Your YAML workflows
        ├── test_basic.yaml
        └── test_advanced.yaml
```
| Action | Description | Example |
|---|---|---|
| `print_message` | Display formatted messages | `message: "Hello!"` `style: success` |
| `chat_completion` | Call LLM API (OpenRouter compatible) | `model: "gpt-3.5-turbo"` |
| `http_request` | Make HTTP requests | `url: "..."` `method: GET` |
| `file_read` | Read file content | `path: "data.txt"` |
| `file_write` | Write to files | `path: "output.txt"` |
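Each task in a workflow names one of these actions: the engine looks the name up, passes the remaining keys as parameters, and stores the result under `save_as`. A simplified sketch of that dispatch (handler names and signatures are illustrative, not the project's actual internals):

```python
def print_message(message, style="info"):
    # Stand-in for the real print_message action.
    return f"[{style}] {message}"

HANDLERS = {"print_message": print_message}

def run_task(task, context):
    # "action" selects the handler; "save_as" names where to keep the result.
    params = {k: v for k, v in task.items() if k not in ("action", "save_as")}
    result = HANDLERS[task["action"]](**params)
    if "save_as" in task:
        context[task["save_as"]] = result
    return result

ctx = {}
print(run_task({"action": "print_message", "message": "Hello!",
                "style": "success", "save_as": "greeting"}, ctx))
# [success] Hello!
```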
Create `workflows/my_workflow.yaml`:
```yaml
metadata:
  title: "My First Workflow"
  version: "1.0.0"

tasks:
  - action: print_message
    message: "🚀 Starting my workflow!"
    style: success

  - action: http_request
    url: "http://llms-mock-api:8000/api/v1/models"
    method: GET
    save_as: models

  - action: chat_completion
    model: "openai/gpt-3.5-turbo"
    messages:
      - role: user
        content: "Write a haiku about Docker"
    save_as: poem

  - action: print_message
    message: "{{ poem.content }}"
    style: success
```

Run it:

```bash
./run-workflow.sh workflows/my_workflow.yaml
```

- Docker 20.10+
- Docker Compose 2.0+
- Python 3.10+ (for build script)
- 2GB RAM minimum
- 5GB disk space
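In the workflow above, the `{{ poem.content }}` placeholder refers to the result saved earlier with `save_as`. One plausible way such placeholders get resolved; the real engine may use a proper template library such as Jinja2, so this regex version is only a sketch:

```python
import re

def render(template: str, context: dict) -> str:
    """Replace {{ dotted.path }} placeholders with values from context."""
    def lookup(match):
        value = context
        for part in match.group(1).strip().split("."):
            value = value[part]
        return str(value)
    return re.sub(r"\{\{(.*?)\}\}", lookup, template)

# A saved chat_completion result, keyed by its save_as name.
results = {"poem": {"content": "Containers drift by"}}
print(render("{{ poem.content }}", results))  # Containers drift by
```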
After building, you'll have:
- `llms-os:latest` (168MB) - Alpine-based, multi-stage build
- `llms-os-mock-api:latest` (146MB) - Flask-based mock API
```bash
# Build the project structure
python3 build_project.py

# Navigate to the project
cd llms-os-project

# Build Docker images
docker-compose build

# Start services
docker-compose up -d
```

```bash
cd llms-os-project

# Run the test suite
docker-compose run --rm llms-os pytest /app/tests/ -v

# Run a specific workflow
./run-workflow.sh workflows/test_basic.yaml
```

- Create a new action in `llms-os-project/llms-os/src/LLMs_OS/actions/`
- Register it with the `@register('action_name')` decorator
- Rebuild the Docker image: `docker-compose build llms-os`
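A self-contained sketch of what such a custom action could look like. In the real project you would import `register` from `LLMs_OS.registry`; the stub below only imitates it, and the decorator's exact semantics are an assumption based on this README:

```python
# Stub imitating the project's registry; in the real code base you would
# instead write: from LLMs_OS.registry import register
ACTIONS = {}

def register(name):
    def decorator(fn):
        ACTIONS[name] = fn
        return fn
    return decorator

# A hypothetical custom action that counts words in its input.
@register("word_count")
def word_count(text: str, **kwargs) -> dict:
    return {"count": len(text.split())}

print(ACTIONS["word_count"]("four words right here"))  # {'count': 4}
```

After rebuilding the image, the new action name becomes usable in a workflow's `action:` field, just like the built-ins.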
```bash
# Get an API key from https://openrouter.ai/
docker run --rm \
  -e OPENROUTER_API_URL=https://openrouter.ai/api/v1 \
  -e OPENROUTER_API_KEY=your-actual-api-key \
  -v $(pwd)/workflows:/app/workflows \
  llms-os:latest workflows/your_workflow.yaml
```

Start Prometheus + Grafana monitoring:

```bash
cd llms-os-project
docker-compose -f docker-compose.yml -f docker-compose.monitoring.yml up -d

# Access dashboards:
# Prometheus: http://localhost:9090
# Grafana: http://localhost:3000 (admin/admin)
```

To restart the Mock API:

```bash
cd llms-os-project
docker-compose down
docker-compose build mock-api
docker-compose up -d mock-api
```

Rebuild the image:

```bash
docker-compose build llms-os
```

If ports conflict, edit `docker-compose.yml` and change the port mappings.
- Configuration: `llms-os-docker-project-enhanced.yaml` - Source of truth
- Build Script: `build_project.py` - Generates the project from YAML
- Examples: `llms-os-project/workflows/` - Sample workflows
- Fork the repository
- Create a feature branch
- Make your changes
- Test thoroughly
- Submit a pull request
MIT License - see LICENSE file for details
- Documentation: See `GETTING_STARTED.md`
- Issues: Open an issue on GitHub
- Discussions: GitHub Discussions
- API testing and automation
- LLM workflow orchestration
- Data processing pipelines
- Content generation automation
- DevOps task automation
- Custom AI-powered tools
- Kubernetes deployment configs
- More built-in actions
- Web UI for workflow management
- Workflow scheduling
- Result persistence layer
- Authentication & authorization
Built with ❤️ using Python, Docker, and LLMs

Star ⭐ this repo if you find it useful!