Natural language Linux orchestrator: describe what you want → LLM generates commands → validated for safety → executed in Docker containers.
Next.js Frontend → FastAPI Backend → Rust CLI (llmos-exec) → Docker
- Frontend (`frontend/`): Next.js app with an xterm.js terminal and WebSocket live streaming
- Backend (`backend/`): FastAPI with SQLite, a job queue, and LLM integration (OpenAI)
- Rust CLI (`llmos-exec/`): streams NDJSON from Docker containers (stdout/stderr plus exit code)
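To make the NDJSON contract concrete, here is a minimal Python sketch of consuming such a stream. The field names (`type`, `data`, `code`) are assumptions for illustration, not the CLI's documented schema:

```python
import json

# Hypothetical NDJSON stream as llmos-exec might emit it; the field
# names ("type", "data", "code") are assumptions for illustration.
sample = "\n".join([
    '{"type": "stdout", "data": "hello\\n"}',
    '{"type": "stderr", "data": "warn: slow container\\n"}',
    '{"type": "exit", "code": 0}',
])

def parse_events(ndjson_text: str) -> list[dict]:
    """Parse one JSON object per non-empty line (NDJSON)."""
    return [json.loads(line) for line in ndjson_text.splitlines() if line.strip()]

events = parse_events(sample)
exit_code = next(e["code"] for e in events if e["type"] == "exit")
print(exit_code)  # prints 0
```

NDJSON keeps the stream incrementally parseable: each line is a complete JSON document, so a consumer can react to output events before the process exits.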
Supported distros: Ubuntu, Debian, Fedora, Arch, Alpine, CentOS/Rocky, NixOS
Prerequisites: Docker, Node.js 18+, Python 3.11+, Rust toolchain
Backend:

```bash
cd backend
cp .env.example .env   # add your OPENAI_API_KEY
pip install -r requirements.txt
python start_uvicorn.py
```

Frontend:

```bash
cd frontend
npm install
npm run dev
```

Rust CLI:

```bash
cd llmos-exec
cargo build --release
```

Environment variables:

| Variable | Description | Default |
|---|---|---|
| `OPENAI_API_KEY` | OpenAI API key | (required) |
| `OPENAI_MODEL` | Model to use | `gpt-4o-mini` |
| `LLM_PROVIDER` | LLM provider | `openai` |
| `DATABASE_URL` | SQLite path | `sqlite:///./llmos.db` |
| `NEXT_PUBLIC_API_URL` | Backend URL for the frontend | `http://localhost:8000` |
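For reference, a minimal backend `.env` might look like the following (the API key value is a placeholder; `NEXT_PUBLIC_API_URL` is consumed by the frontend, which in Next.js conventionally reads it from `frontend/.env.local`):

```shell
# backend/.env -- placeholder values for illustration
OPENAI_API_KEY=sk-your-key-here
OPENAI_MODEL=gpt-4o-mini
LLM_PROVIDER=openai
DATABASE_URL=sqlite:///./llmos.db
```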
API endpoints:

- `POST /generate`: generate commands from natural language
- `POST /validate`: validate commands for safety
- `POST /execute`: execute commands (SSE streaming)
- `POST /enqueue`: queue execution for async processing
- `GET /history`: execution history
- `GET /executions/{id}/status`: job status
- `WS /ws/execute-live`: WebSocket live terminal
- `WS /ws/execute/{id}`: WebSocket stream for an existing job
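A small stdlib-only Python client sketch for the generate-then-execute flow. The request and response field names (`prompt`, `commands`) are assumptions, as is the exact SSE framing; check the backend's schemas before relying on them:

```python
import json
import urllib.request

API = "http://localhost:8000"  # NEXT_PUBLIC_API_URL default

def generate_commands(prompt: str) -> list[str]:
    # Field names ("prompt", "commands") are assumptions for illustration.
    req = urllib.request.Request(
        f"{API}/generate",
        data=json.dumps({"prompt": prompt}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read())["commands"]

def parse_sse_line(line: str):
    """Extract the JSON payload from a 'data:' SSE line; None otherwise.

    /execute streams Server-Sent Events, so a client reads the response
    line by line and parses each 'data:' frame as JSON.
    """
    if line.startswith("data:"):
        return json.loads(line[len("data:"):].strip())
    return None
```

A WebSocket client for `/ws/execute-live` would follow the same pattern, but over a `websockets`-style library rather than plain HTTP.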
The Rust CLI accepts resource limits via the JSON payload:
```json
{
  "distro": "ubuntu",
  "commands": ["echo hello"],
  "memory_limit": "512m",
  "cpu_limit": "1.0",
  "container_name": "my-session",
  "keep_alive": true
}
```
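A Python sketch of driving the CLI with that payload. It assumes the binary reads the JSON payload on stdin and emits NDJSON on stdout; if the CLI instead takes a file path or flags, adjust accordingly:

```python
import json
import subprocess

# The payload fields mirror the example above; how it is delivered to
# the binary (stdin here) is an assumption for illustration.
payload = {
    "distro": "ubuntu",
    "commands": ["echo hello"],
    "memory_limit": "512m",
    "cpu_limit": "1.0",
    "container_name": "my-session",
    "keep_alive": True,
}

def run_llmos_exec(payload: dict):
    """Yield NDJSON events from a hypothetical stdin-fed llmos-exec run."""
    proc = subprocess.Popen(
        ["./llmos-exec/target/release/llmos-exec"],
        stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
    )
    proc.stdin.write(json.dumps(payload))
    proc.stdin.close()
    for line in proc.stdout:
        if line.strip():
            yield json.loads(line)
    proc.wait()
```

`memory_limit` and `cpu_limit` map naturally onto Docker's `--memory` and `--cpus` run options, and `keep_alive` suggests the container is reused across invocations under the same `container_name`.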