OpenTower Linux Ops is a CLI-first Linux operations assistant. It accepts natural-language requests, routes them into a fixed multi-stage workflow, applies safety checks before execution, and returns structured, human-readable results.
This repository keeps the runtime surface intentionally narrow and audited. It focuses on a small set of Linux inspection and user-management tasks, model-assisted recovery for in-scope paraphrases, and structured handling for requests that fall outside the shipped workflow set or cross safety boundaries.
Current workflow surface:
- `disk-inspection`: `disk_usage`, `disk_usage_with_logs`
- `file-search`: `filename_search`, `content_search`, `inspect_permissions`, `tail_log` (via read-only fallback), `recent_error_scan` (via read-only fallback); `delete_path` and `chmod_recursive` remain guarded by the security layer
- `process-port-inspection`: `port_lookup`, `top_memory`, `service_status`, `top_cpu` (via read-only fallback), `load_average` (via read-only fallback), `uptime_summary` (via read-only fallback)
- `user-management`: `list_users`, `inspect_user`, `create_user`, `add_user_to_group`, `delete_user`, `batch_delete_users`
The fixed agent chain is:
`intent-parser` -> `security-guard` -> `command-planner` -> `result-analyst`
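As an illustrative sketch only, the fixed chain can be read as a four-stage pipeline. The stage names mirror the README; every function body below is a hypothetical stand-in, not OpenTower's actual code:

```python
# Hypothetical sketch of the fixed agent chain; all function bodies are
# illustrative stand-ins, not OpenTower internals.

def parse_intent(request: str) -> dict:
    # intent-parser: map the raw request onto a workflow/operation pair
    return {"workflow": "disk-inspection", "operation": "disk_usage", "raw": request}

def security_guard(intent: dict) -> dict:
    # security-guard: block or confirmation-gate risky operations
    risky = {"delete_path", "chmod_recursive"}
    return {"blocked": intent["operation"] in risky}

def plan_commands(intent: dict) -> list[str]:
    # command-planner: turn an approved intent into concrete commands
    return ["df -h"] if intent["operation"] == "disk_usage" else []

def analyze_result(commands: list[str]) -> dict:
    # result-analyst: summarize the planned commands for the user
    return {"status": "ok", "planned": commands}

def run_chain(request: str) -> dict:
    intent = parse_intent(request)
    if security_guard(intent)["blocked"]:
        return {"status": "blocked"}
    return analyze_result(plan_commands(intent))
```

The key property the sketch shows is ordering: the security guard sits between intent parsing and command planning, so blocked intents never reach command generation.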
OpenTower keeps the runtime surface intentionally narrow and explicit.
Today it focuses on:
- Linux inspection and troubleshooting requests that map onto the shipped workflow catalog
- confirmation-gated user and permission operations
- structured routing, safety checks, command planning, and readable result summaries
- structured handling when a request does not map onto the current workflow set
Every request goes through a three-stage routing chain:
- `local_rule`: deterministic parsing for the shipped Linux ops workflows.
- `llm_normalizer`: maps in-scope paraphrases back onto already-implemented operations.
- `fallback_research`: read-only recovery for a small set of low-risk inspection tasks.
Every dispatch result exposes:
`resolution_status`, `resolution_source`, and `resolution_reason`
Current resolution sources are:
`local_rule`, `llm_normalizer`, and `fallback_research`
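A minimal sketch of the fall-through routing and its resolution metadata, assuming hypothetical matching logic (the stage and field names come from the README; the string checks are placeholders):

```python
# Hypothetical illustration of the three-stage routing chain; the stage
# bodies are placeholders, only the fall-through shape and the resolution
# fields follow the README.

def local_rule(text: str):
    # deterministic parsing for shipped workflows
    return "disk_usage" if "disk usage" in text else None

def llm_normalizer(text: str):
    # maps in-scope paraphrases onto already-implemented operations
    return "disk_usage" if "how full" in text else None

def fallback_research(text: str):
    # read-only recovery for low-risk inspection tasks
    return "load_average" if "load" in text else None

def route(text: str) -> dict:
    for source, stage in [("local_rule", local_rule),
                          ("llm_normalizer", llm_normalizer),
                          ("fallback_research", fallback_research)]:
        operation = stage(text)
        if operation is not None:
            return {"resolution_status": "resolved",
                    "resolution_source": source,
                    "resolution_reason": f"matched {operation}"}
    return {"resolution_status": "unresolved",
            "resolution_source": None,
            "resolution_reason": "no stage matched"}
```

Each stage only runs if every earlier stage declined, which is why `resolution_source` is enough to tell how far a request had to fall through.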
- High-risk writes are either blocked or forced through explicit confirmation.
- `create_user` and `add_user_to_group` now go through the confirmation flow instead of executing immediately.
- Critical destructive requests, such as deleting core system paths, are blocked before command generation.
- Fallback behavior is read-only by design.
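The safety policy above can be sketched as a three-way gate. The operation classifications mirror the README; the gate function and its sets are hypothetical, not OpenTower's actual rules:

```python
# Hedged sketch of the safety policy: blocked outright, confirmation-gated,
# or allowed. The specific sets below are illustrative assumptions.

BLOCKED_TARGETS = {"delete_path:/", "delete_path:/etc"}   # critical destructive requests
NEEDS_CONFIRMATION = {"create_user", "add_user_to_group",
                      "delete_user", "chmod_recursive"}

def gate(operation: str, target: str = "", confirmed: bool = False) -> str:
    if f"{operation}:{target}" in BLOCKED_TARGETS:
        return "blocked"              # refused before any command is generated
    if operation in NEEDS_CONFIRMATION and not confirmed:
        return "needs_confirmation"   # replayed with persisted context once confirmed
    return "allowed"                  # read-only inspection passes straight through
```

The point of the sketch is that blocking happens unconditionally, while confirmation gating is stateful: the same operation is re-submitted with `confirmed=True` after the user approves it.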
The current architecture also centralizes operation metadata and confirmation replay context. The normalizer, fallback path, and confirmation resolution now share the same operation catalog and persisted execution context, which reduces drift across routing and replay stages.
User-facing commands:
`workflow`, `dispatch`, `console`, `provider-status`, and `auth`
Natural-language input is the default entrypoint. These are equivalent ways to use the tool:
    python -m opentower_cli "show disk usage"
    python -m opentower_cli "find nginx config files"
    python -m opentower_cli "check sshd service status"
    python -m opentower_cli "show cpu usage"
    python -m opentower_cli dispatch --objective "show disk usage" --execute

With no arguments, the CLI starts the interactive console:

    python -m opentower_cli

Slash-prefixed commands are also accepted:

    python -m opentower_cli /workflow
    python -m opentower_cli /provider-status
    python -m opentower_cli /auth

Requires Python 3.11+.
Install in editable mode:
    python -m pip install -e .[dev]

Create a local provider profile:
    cp auth.example.json auth.json

On PowerShell:

    Copy-Item auth.example.json auth.json

Then edit `auth.json` and set the provider, model, API base URL, and API key you want to use.
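A profile might look like the following. The field names here are an assumption for illustration only; `auth.example.json` in the repository is the authoritative template:

```json
{
  "provider": "openai-compatible",
  "model": "gpt-4o-mini",
  "base_url": "https://api.example.com/v1",
  "api_key": "sk-..."
}
```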
Supported providers:
`anthropic`, `openai-compatible`, and `ollama`
For `openai-compatible` endpoints, OpenTower can auto-select a chat-capable model when `model` is omitted and the provider exposes `/models`.
Useful checks:
    python -m opentower_cli auth
    python -m opentower_cli provider-status

Read-only inspection:
    python -m opentower_cli "show disk usage"
    python -m opentower_cli "search for database in /etc"
    python -m opentower_cli "check sshd service status"
    python -m opentower_cli "show load average"
    python -m opentower_cli "tail the latest syslog log"

Confirmation-gated requests:
    python -m opentower_cli "create user dev01"
    python -m opentower_cli "add user dev01 to docker group"
    python -m opentower_cli "chmod 777 /tmp/demo"

Current local verification for the 2026-04-26 snapshot:
- `python -m pytest -q` -> 95 passed
- `python scripts/run_nl_eval.py --fixture-profile extended` -> 2009/2009 passed
- `python scripts/run_nl_eval.py --fixture-profile core` -> 416/416 passed
- `python scripts/run_nl_eval.py --fixture-profile model` -> 228/228 passed
- `python scripts/run_nl_eval.py --fixture-profile model --with-model --limit 20` -> 20/20 passed
- `python scripts/run_wsl_smoke.py` -> 10/10 passed
- a local rounded WSL real-execution report derived from a completed 543-case run reached 499/500 passed
The NL evaluation corpus is layered into:
`extended`, `core`, and `model`
`--with-model` keeps the same replay harness but enables the configured normalizer and fallback model chain.
- Commit `auth.example.json`, not `auth.json`.
- Runtime outputs under `production/` are local artifacts unless you intentionally want to version them.
- Evaluation corpus expansion is PR-able as test coverage. New runtime behavior should still land through reviewed parser, normalizer, fallback, planner, or safety changes.
- Chinese documentation lives in `README_CN.md`.
- The judge-facing overview lives in `比赛版设计说明文档.md`.