```
npx sentinelayer-cli@latest <project-name>
```

Scaffolds Sentinelayer spec/prompt/guide artifacts and bootstraps `SENTINELAYER_TOKEN` without manual copy/paste, with optional BYOK mode.

CLI binaries:

- `sentinelayer-cli` (primary)
- `create-sentinelayer` (compatibility alias)
- `sentinel` (legacy alias)
- `sl` (short alias)
- runs an interactive project interview
- opens browser auth at Sentinelayer `/cli-auth`
- receives the approved auth session in the terminal
- supports explicit BYOK mode (skips Sentinelayer browser auth/token bootstrap)
- optionally opens GitHub auth (`gh auth login -w`) and lets you arrow-select a repo
- optionally clones the selected repo into the current folder for in-place feature work
- generates spec + build guide + execution prompt + omar workflow + todo + handoff prompt
- issues a bootstrap `SENTINELAYER_TOKEN` when managed auth mode is used
- writes the token to a local `.env` when managed auth mode is used
- optionally injects the token into a GitHub Actions secret via `gh secret set` in managed auth mode
- ensures the target workspace is a git repo (`git init` + `origin` when needed)
Initial production scope is intentionally narrow and hardened:
- Omar baseline gate workflows and deterministic local gate checks
- Jules Tanaka deep frontend audits (`sl audit frontend --stream`)
- Reproducible review/audit artifacts and runtime telemetry
Primary commands in this shipping lane:
- `sl auth login --api-url https://api.sentinelayer.com`
- `sl scan init --path . --non-interactive`
- `sl omargate deep --path .`
- `sl audit frontend --path ./my-react-app --stream`
- `sl review --diff`
- `sl watch run-events --run-id <run-id>`

Windows PowerShell note: `sl` is a built-in alias for `Set-Location`. Use `sentinelayer-cli` (or the short alias `slc`) instead.
- Trigger: `npx sentinelayer-cli@latest my-agent-app`
- Interview prompts (project goal, provider, coding agent, auth mode, depth, audience, project type, optional repo connect).
- If repo connect is enabled:
  - choose repo source: current repo, GitHub picker, or manual `owner/repo`
  - optional browser GitHub authorization
  - optional clone into local workspace for existing-codebase feature work
- Browser auth opens automatically in managed auth mode.
- Token + artifacts are generated.
- CLI prints handoff and next command: `npm run sentinel:start`

Use non-interactive mode to run full scaffolding in automation:
```
SENTINELAYER_CLI_INTERVIEW_JSON='{"projectName":"demo-app","projectDescription":"Build an autonomous secure code review orchestrator.","aiProvider":"openai","codingAgent":"codex","authMode":"sentinelayer","generationMode":"detailed","audienceLevel":"developer","projectType":"greenfield","techStack":["TypeScript","Node.js"],"features":["auth","scan"],"connectRepo":false,"injectSecret":false}' \
npx sentinelayer-cli@latest demo-app --non-interactive --skip-browser-open
```

Inputs for non-interactive mode:
- `SENTINELAYER_CLI_INTERVIEW_JSON` (JSON string) or `--interview-file <path-to-json>`
- interview JSON supports `authMode: "sentinelayer" | "byok"` (default: `sentinelayer`)
- `--non-interactive` is required to disable prompts
- `--skip-browser-open` avoids launching a local browser in headless runs
- `--help` / `-h` prints CLI usage
- `--version` / `-v` prints CLI version
- `SENTINELAYER_GITHUB_CLONE_BASE_URL` overrides the clone base (default `https://github.com`)
Generated artifacts:

- `docs/spec.md`
- `docs/build-guide.md`
- `prompts/execution-prompt.md`
- `.github/workflows/omar-gate.yml`
- `tasks/todo.md`
- `AGENT_HANDOFF_PROMPT.md` (read order + Omar loop + local command matrix + workflow tuning options)
- coding-agent config file for the selected agent when supported (examples: `CLAUDE.md`, `.cursorrules`, `.github/copilot-instructions.md`)
- `package.json` (adds `sentinel:start`, `sentinel:omargate`, `sentinel:omargate:json`, `sentinel:audit`, `sentinel:audit:json`, `sentinel:persona:*`, `sentinel:apply` when missing)
- `.env` with `SENTINELAYER_TOKEN` (or API-provided secret name) in managed auth mode
When "Advanced options?" is enabled:

- Auth mode (`sentinelayer` or `byok`)
- Connect a GitHub repo and inject Actions secret?
- How should we choose the repo? (current / GitHub picker / manual)
- GitHub picker reads all accessible repos via paginated `gh api`
- Clone this repo locally and build directly into it now?
- Inject `SENTINELAYER_TOKEN` into GitHub Actions secrets now? (managed auth mode only)
- Final review step lets you proceed, restart the interview, or cancel cleanly
The CLI validates repo format and secret-name format before injection.
When "Clone this repo locally and build directly into it now?" is enabled:

- the CLI clones `<owner>/<repo>` into `./<repo-name>` unless the current folder already matches that repo
- it writes generated docs/prompts/tasks/workflow into that cloned repo
- it extracts a deterministic repo summary and includes it in generation context
- if the repo is empty, scaffolding still proceeds deterministically
- if the target folder already contains a different non-empty repo, the CLI fails fast with a clear error
- if the target folder is a git repo without a detectable GitHub `origin`, the CLI refuses to continue
- browser auth JWT is used in-memory only
- in managed auth mode, the CLI stores only the bootstrap token in `.env`
- in managed auth mode, GitHub secret injection uses stdin (`gh secret set ...`) and never writes the token to command history
- in managed auth mode, secret injection is verified with `gh secret list --repo <owner/repo>`
- the API fallback secret name is pinned to `SENTINELAYER_TOKEN` if the server response is invalid
- in BYOK mode, no Sentinelayer token is created or injected
For long-running agent/operator workflows, the CLI now supports persistent auth sessions:
- `sl auth login --api-url https://api.sentinelayer.com --skip-browser-open`
- `sl auth status`
- `sl auth logout`
- `sl auth sessions`
- `sl auth revoke --token-id <token-id>`

On Windows PowerShell, run these as `sentinelayer-cli auth ...` or `slc auth ...`.
Behavior:
- login uses browser approval (`/api/v1/auth/cli/sessions/*`)
- after approval, the CLI mints a long-lived API token (`/api/v1/auth/api-tokens`)
- session metadata is stored at `~/.sentinelayer/credentials.json`
- token storage uses the OS keyring only when explicitly enabled (`SENTINELAYER_KEYRING_MODE=keyring`) and `keytar` is installed; file fallback is used otherwise
- near-expiry token rotation is automatic on command use for stored sessions
- env/config tokens still take precedence: `SENTINELAYER_TOKEN`, then `.sentinelayer.yml` `sentinelayerToken`
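The precedence order above (env var, then project config, then stored session) can be sketched in JavaScript. This is an illustrative sketch only, not the CLI's actual source; the `storedSession.apiToken` field name is an assumption.

```javascript
// Illustrative token-resolution sketch (NOT the CLI's source).
// Precedence: SENTINELAYER_TOKEN env var > .sentinelayer.yml
// sentinelayerToken > stored session. Field names are assumptions.
function resolveToken({ env = {}, projectConfig = {}, storedSession = {} }) {
  if (env.SENTINELAYER_TOKEN) {
    return { token: env.SENTINELAYER_TOKEN, source: 'env' };
  }
  if (projectConfig.sentinelayerToken) {
    return { token: projectConfig.sentinelayerToken, source: '.sentinelayer.yml' };
  }
  if (storedSession.apiToken) {
    return { token: storedSession.apiToken, source: 'credentials.json' };
  }
  return null;
}
```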
Opt-in to keyring usage: `SENTINELAYER_KEYRING_MODE=keyring` (requires `npm install keytar`).

Opt-out of keyring usage (overrides any opt-in): `SENTINELAYER_DISABLE_KEYRING=1`.
You can stream runtime run events directly from the CLI:
- `sl watch run-events --run-id <run-id>`
- `sl watch runtime --run-id <run-id>` (alias)
- `sl watch history` (list persisted watch summaries)
Options:
- `--poll-seconds <seconds>` polling interval
- `--max-idle-seconds <seconds>` optional idle timeout
- `--output-dir <path>` artifact root override
- `--json` machine-readable event stream + summary
By default, watch output is persisted to:
- `.sentinelayer/observability/runtime-watch/<run-id>/events-<timestamp>.ndjson`
- `.sentinelayer/observability/runtime-watch/<run-id>/summary-<timestamp>.json`
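Because the persisted event stream is NDJSON (one JSON object per line), it is easy to post-process. A minimal JavaScript sketch, illustrative only — the exact `eventType` field name is an assumption, not confirmed by this document:

```javascript
// Illustrative NDJSON consumer (NOT the CLI's source): one JSON
// object per line, tolerant of blank/trailing lines.
function parseNdjson(text) {
  return text
    .split('\n')
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line));
}

// Count events by type; the eventType field name is an assumption.
function summarize(events) {
  const counts = {};
  for (const e of events) counts[e.eventType] = (counts[e.eventType] ?? 0) + 1;
  return counts;
}
```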
The CLI now includes a low-latency chat command surface:
- `sl chat ask --prompt "Summarize this diff" --dry-run`
- `sl chat ask --prompt "Explain this failure" --provider openai --model gpt-4o`
Each call appends reproducible transcript entries to:
`.sentinelayer/chat/sessions/<session-id>.jsonl`
The default review command now runs a layered deterministic pipeline:
- `sl review` (full workspace mode)
- `sl review --diff` (staged + unstaged + untracked git changes)
- `sl review --staged` (staged changes only)
Each run writes reproducible artifacts to:
- `.sentinelayer/reviews/<run-id>/REVIEW_DETERMINISTIC.md`
- `.sentinelayer/reviews/<run-id>/REVIEW_DETERMINISTIC.json`
- `.sentinelayer/reviews/<run-id>/checks/*.log` (static check output)
For compatibility, lightweight scan mode remains available:
- `sl review scan --mode full|diff|staged`
- `.sentinelayer/reports/review-scan-<mode>-<timestamp>.md`
The review command can now add budget-governed AI reasoning on top of deterministic findings:
- `sl review --ai --provider openai --model gpt-5.3-codex`
- `sl review --ai --ai-dry-run` (no provider call; deterministic synthetic output)
- `sl review --ai --max-cost 1.0 --max-tokens 0 --max-runtime-ms 0 --max-tool-calls 0`
AI artifacts are persisted in the same run folder:
- `.sentinelayer/reviews/<run-id>/REVIEW_AI_PROMPT.txt`
- `.sentinelayer/reviews/<run-id>/REVIEW_AI.md`
- `.sentinelayer/reviews/<run-id>/REVIEW_AI.json`
AI usage, cost, and stop-class telemetry are appended to:
- `.sentinelayer/cost-history.json`
- `.sentinelayer/observability/run-events.jsonl`
Every review run now emits reconciled findings:
- `.sentinelayer/reviews/<run-id>/REVIEW_REPORT.md`
- `.sentinelayer/reviews/<run-id>/REVIEW_REPORT.json`
Capabilities:
- `sl review show [--run-id <id>]`
- `sl review export --format sarif|json|md|github-annotations`
- `sl review accept <finding-id> --run-id <id>`
- `sl review reject <finding-id> --run-id <id>`
- `sl review defer <finding-id> --run-id <id>`
Reconciliation behavior:
- deduplicates deterministic + AI findings by location/message fingerprint
- preserves highest severity finding in each duplicate cluster
- assigns confidence (`100%` deterministic, model-derived for AI)
- persists HITL decisions in `.sentinelayer/reviews/<run-id>/REVIEW_DECISIONS.json`
Reproducibility commands:
- `sl review replay <run-id>`
- `sl review diff <base-run-id> <candidate-run-id>`
Run metadata and comparison artifacts:
- `.sentinelayer/reviews/<run-id>/REVIEW_RUN_CONTEXT.json`
- `.sentinelayer/reviews/<run-id>/REVIEW_COMPARISON_<base>_vs_<candidate>.json`
The CLI now includes an audit swarm orchestrator with a built-in 13-agent registry:
- `sl audit --dry-run`
- `sl audit --agents security,architecture,testing --max-parallel 3`
- `sl audit registry`
- `sl audit security`
- `sl audit architecture`
- `sl audit testing`
- `sl audit performance`
- `sl audit compliance`
- `sl audit documentation`
- `sl audit package --run-id <id>` (or omit `--run-id` to package the latest run)
- `sl audit replay <run-id>`
- `sl audit diff <base-run-id> <candidate-run-id>`
- `sl audit local` (legacy compatibility path for `/audit`)
Artifacts are written to:
- `.sentinelayer/audits/<run-id>/AUDIT_REPORT.md`
- `.sentinelayer/audits/<run-id>/AUDIT_REPORT.json`
- `.sentinelayer/audits/<run-id>/agents/<agent-id>.json`
- `.sentinelayer/audits/<run-id>/agents/SECURITY_AGENT_REPORT.md` (security specialist)
- `.sentinelayer/audits/<run-id>/agents/ARCHITECTURE_AGENT_REPORT.md` (architecture specialist)
- `.sentinelayer/audits/<run-id>/agents/TESTING_AGENT_REPORT.md` (testing specialist)
- `.sentinelayer/audits/<run-id>/agents/PERFORMANCE_AGENT_REPORT.md` (performance specialist)
- `.sentinelayer/audits/<run-id>/agents/COMPLIANCE_AGENT_REPORT.md` (compliance specialist)
- `.sentinelayer/audits/<run-id>/agents/DOCUMENTATION_AGENT_REPORT.md` (documentation specialist)
- `.sentinelayer/audits/<run-id>/DD_PACKAGE_MANIFEST.json`
- `.sentinelayer/audits/<run-id>/DD_FINDINGS_INDEX.json`
- `.sentinelayer/audits/<run-id>/DD_EXEC_SUMMARY.md`
- `.sentinelayer/audits/<run-id>/AUDIT_COMPARISON_<base>_vs_<candidate>.json`
The CLI now includes OMAR-led swarm planning commands for governed long-running runs:
- `sl swarm registry`
- `sl swarm plan --path . --scenario error_event_remediation --agents security,testing,reliability --json`
`swarm plan` outputs deterministic orchestration artifacts (assignments, budgets, and phase graph):
- `.sentinelayer/swarms/<run-id>/SWARM_PLAN.json`
- `.sentinelayer/swarms/<run-id>/SWARM_PLAN.md`
Global budgets can be set per run:
- `--max-cost-usd`
- `--max-output-tokens`
- `--max-runtime-ms`
- `--max-tool-calls`
- `--warning-threshold-percent`
The swarm runtime loop can now be executed directly from CLI:
- `sl swarm run --path . --agents security,testing --json` (default mock runtime, dry-run)
- `sl swarm run --plan-file .sentinelayer/swarms/<plan-run-id>/SWARM_PLAN.json --engine playwright --execute --start-url https://example.com`
Runtime artifacts are persisted under:
- `.sentinelayer/swarms/<runtime-run-id>/runtime/SWARM_RUNTIME.json`
- `.sentinelayer/swarms/<runtime-run-id>/runtime/SWARM_RUNTIME.md`
- `.sentinelayer/swarms/<runtime-run-id>/runtime/events.ndjson`
Optional Playwright actions can be provided via playbook JSON:
`--playbook-file <path>` where the file contract is `{ "actions": [ ... ] }`
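For reference, a minimal playbook file might look like the sketch below. Only the top-level `{ "actions": [ ... ] }` shape comes from the contract above; the individual action fields and values are assumptions for illustration.

```json
{
  "actions": [
    { "type": "goto", "url": "https://example.com/login" },
    { "type": "fill", "selector": "#email", "text": "qa@example.com" },
    { "type": "click", "selector": "button[type=submit]" },
    { "type": "screenshot", "path": "after-login.png" }
  ]
}
```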
Swarm runtime now supports a deterministic scenario DSL (`.sls`):
- `sl swarm scenario init nightly-smoke --path .`
- `sl swarm scenario validate --file .sentinelayer/scenarios/nightly-smoke.sls`
- `sl swarm run --scenario-file .sentinelayer/scenarios/nightly-smoke.sls --json`
DSL commands:
- `scenario "<id>"`
- `start_url "<url>"`
- `tag "<value>"`
- `action goto "<url>"`
- `action click "<selector>"`
- `action fill "<selector>" "<text>"`
- `action wait <ms>`
- `action screenshot "<relative-path>"`
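Putting those commands together, a scenario file might look like this (illustrative values only; the command vocabulary is the one listed above):

```
scenario "nightly-smoke"
start_url "https://example.com"
tag "smoke"
action goto "https://example.com/login"
action fill "#email" "qa@example.com"
action click "button[type=submit]"
action wait 500
action screenshot "login.png"
```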
The CLI now supports runtime swarm dashboard snapshots and watch streaming:
- `sl swarm dashboard --run-id <runtime-run-id>`
- `sl swarm dashboard --watch --run-id <runtime-run-id> --poll-seconds 2 --max-idle-seconds 20`
Machine-readable output:
- `sl swarm dashboard --json`
- `sl swarm dashboard --watch --json`
Dashboard data includes per-agent status rows, usage counters, stop class, and recent timeline events.
You can package runtime artifacts into a deterministic execution report bundle:
- `sl swarm report --run-id <runtime-run-id>`
- `sl swarm report --json`
Report artifacts:
- `.sentinelayer/swarms/<runtime-run-id>/runtime/SWARM_EXECUTION_REPORT.json`
- `.sentinelayer/swarms/<runtime-run-id>/runtime/SWARM_EXECUTION_REPORT.md`
The report links runtime usage, stop class, per-agent status summary, recent events, and plan/runtime artifact paths.
The CLI now includes a governed pen-test swarm entrypoint:
- `sl swarm create --scenario pen-test --pen-test-scenario auth-bypass --target https://app.customer.local --target-id <target-id>`
- `sl swarm create --scenario input-validation --target https://app.customer.local --target-id <target-id> --execute`
Built-in pen-test scenarios:
- `auth-bypass`
- `rate-limit-probe`
- `input-validation`
- `privilege-escalation`
Policy enforcement is strict:
- target must exist in the local AIdenID target registry and be `VERIFIED`
- target must not be frozen/inactive
- target host must match `--target`
- scenario, methods, and paths must stay within target policy (`allowedScenarios`, `allowedMethods`, `allowedPaths`)
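The policy gate above could be sketched as a pure check function. Illustrative only, not the CLI's source; the target-record field names (`status`, `frozen`, `active`, `host`) are assumptions, while `allowedScenarios`/`allowedMethods`/`allowedPaths` come from the policy contract above.

```javascript
// Illustrative pen-test policy gate (NOT the CLI's source). A run is
// rejected unless the target record is verified, active, host-matched,
// and the request stays inside the target policy.
function checkPenTestPolicy(target, request) {
  const errors = [];
  if (target.status !== 'VERIFIED') errors.push('target not VERIFIED');
  if (target.frozen || !target.active) errors.push('target frozen or inactive');
  if (new URL(request.targetUrl).host !== target.host) errors.push('host mismatch');
  if (!target.policy.allowedScenarios.includes(request.scenario)) {
    errors.push(`scenario ${request.scenario} not allowed`);
  }
  if (request.method && !target.policy.allowedMethods.includes(request.method)) {
    errors.push(`method ${request.method} not allowed`);
  }
  if (request.path && !target.policy.allowedPaths.some((p) => request.path.startsWith(p))) {
    errors.push(`path ${request.path} not allowed`);
  }
  return { allowed: errors.length === 0, errors };
}
```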
Pen-test artifacts:
- `.sentinelayer/swarms/<pentest-run-id>/pentest/REQUEST_PLAN.json`
- `.sentinelayer/swarms/<pentest-run-id>/pentest/audit.jsonl` (full request/response headers + body)
- `.sentinelayer/swarms/<pentest-run-id>/pentest/PENTEST_REPORT.json`
- `.sentinelayer/swarms/<pentest-run-id>/pentest/PENTEST_REPORT.md`
PENTEST_REPORT findings are keyed to OWASP categories and surface P0-P3 severity summary + blocking status.
Identity security controls now include:
- zero-trust swarm identity manifest per run (`IDENTITY_ISOLATION.json`)
- cryptographic audit chain on pen-test request logs (`previousEntryHash` + `entryHash` + `entryHmac`)
- crash-safe cleanup contract artifact (`CLEANUP_CONTRACT.json`) for post-run squash scheduling
- legal-hold guardrails on revoke/revoke-children commands
New identity lifecycle commands:
- `sl ai identity audit --stale --json`
- `sl ai identity legal-hold status <identity-id> --json`
- `sl ai identity kill-all --tags <tag1,tag2> [--execute] --json`
`kill-all --execute` blocks legal-hold identities and marks eligible tagged identities as `SQUASHED` in the local registry with campaign metadata.
The CLI now includes an OMAR daemon lane for deterministic error intake and routed queue generation:
- `sl daemon error record --service sentinelayer-api --endpoint /v1/runtime/runs --error-code RUNTIME_TIMEOUT --severity P1 --message "runtime timeout"`
- `sl daemon error worker --max-events 200 --json`
- `sl daemon error queue --json`
Daemon artifacts:
- `.sentinelayer/observability/error-daemon/admin-error-stream.ndjson` (append-only intake stream)
- `.sentinelayer/observability/error-daemon/queue.json` (deduped routed queue work items)
- `.sentinelayer/observability/error-daemon/worker-state.json` (stream cursor + aggregate stats)
- `.sentinelayer/observability/error-daemon/intake/intake-*.json` (per-event intake snapshots)
- `.sentinelayer/observability/error-daemon/runs/error-daemon-run-*.json` (worker tick execution evidence)
Queue routing behavior:
- events are fingerprinted from service, endpoint, error code, stack fingerprint, and commit sha
- matching open fingerprints are deduped with `occurrenceCount` increments and severity escalation
- worker cursor tracks the processed stream offset for deterministic resumability across ticks
Daemon assignment controls now support explicit claim/heartbeat/release/reassign flow with lease tracking:
- `sl daemon assign claim <work-item-id> --agent maya.markov@sentinelayer.local --lease-ttl-seconds 1800 --stage triage --run-id run_001 --jira-issue-key SL-101`
- `sl daemon assign heartbeat <work-item-id> --agent maya.markov@sentinelayer.local --stage analysis --run-id run_002`
- `sl daemon assign reassign <work-item-id> --from-agent maya.markov@sentinelayer.local --to-agent mark.rao@sentinelayer.local --stage fix`
- `sl daemon assign release <work-item-id> --agent mark.rao@sentinelayer.local --status DONE --reason "fix merged"`
- `sl daemon assign list --status DONE --agent mark.rao@sentinelayer.local --json`
Ledger artifacts:
- `.sentinelayer/observability/error-daemon/assignment-ledger.json` (current assignment state)
- `.sentinelayer/observability/error-daemon/assignment-events.ndjson` (claim/heartbeat/reassign/release event history)
Tracked assignment fields include:
`workItemId`, `assignedAgentIdentity`, `leasedAt`, `leaseTtlSeconds`, `leaseExpiresAt`, `status`, `stage`, `runId`, `jiraIssueKey`, `budgetSnapshot`
Daemon Jira lifecycle commands now support ticket create/start/comment/transition traces tied to work items:
- `sl daemon jira open <work-item-id> --issue-key-prefix SL`
- `sl daemon jira start <work-item-id> --plan "1) reproduce 2) patch 3) verify" --actor maya.markov@sentinelayer.local --assignee maya.markov@sentinelayer.local`
- `sl daemon jira comment --work-item-id <work-item-id> --type checkpoint --message "patch applied"`
- `sl daemon jira transition --work-item-id <work-item-id> --to DONE --reason "fix merged"`
- `sl daemon jira list --status DONE --work-item-id <work-item-id> --json`
Lifecycle artifacts:
- `.sentinelayer/observability/error-daemon/jira-lifecycle.json` (issue state, comments, transitions)
- `.sentinelayer/observability/error-daemon/jira-events.ndjson` (append-only lifecycle event feed)
When an assignment exists for the same work item, Jira issue keys are synced into assignment ledger records for deterministic handoff continuity.
Daemon budget governor commands now enforce hard-limit transitions with quarantine grace and deterministic kill path:
- `sl daemon budget check <work-item-id> --usage-json '{"tokensUsed":150}' --budget-json '{"maxTokens":100,"quarantineGraceSeconds":30}'`
- `sl daemon budget status --work-item-id <work-item-id> --json`
Lifecycle states:
`WITHIN_BUDGET`, `WARNING_THRESHOLD`, `HARD_LIMIT_QUARANTINED`, `HARD_LIMIT_SQUASHED`
Governor behavior:
- crossing a hard limit transitions the work item into quarantine (`action=QUARANTINE`, queue/assignment status `BLOCKED`)
- if hard-limit usage persists past `quarantineGraceSeconds`, the governor triggers a deterministic kill (`action=KILL`, queue/assignment status `SQUASHED`)
- warning thresholds (`warningThresholdPercent`) surface near-limit signals without blocking
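The quarantine-then-kill transitions above form a small state machine, sketched below. Illustrative only, not the CLI's source; the default warning threshold of 80% and the `quarantinedAt` bookkeeping field are assumptions, while the lifecycle state and action names come from the text above.

```javascript
// Illustrative budget governor state machine (NOT the CLI's source).
function governorStep(state, usage, budget, nowSeconds) {
  const overHardLimit = budget.maxTokens > 0 && usage.tokensUsed > budget.maxTokens;
  const warnAt = (budget.warningThresholdPercent ?? 80) / 100;

  if (overHardLimit) {
    if (state.lifecycle !== 'HARD_LIMIT_QUARANTINED' && state.lifecycle !== 'HARD_LIMIT_SQUASHED') {
      // First hard-limit crossing: quarantine and start the grace clock.
      return { lifecycle: 'HARD_LIMIT_QUARANTINED', action: 'QUARANTINE', quarantinedAt: nowSeconds };
    }
    if (
      state.lifecycle === 'HARD_LIMIT_QUARANTINED' &&
      nowSeconds - state.quarantinedAt >= (budget.quarantineGraceSeconds ?? 0)
    ) {
      // Usage stayed over the limit past the grace window: deterministic kill.
      return { lifecycle: 'HARD_LIMIT_SQUASHED', action: 'KILL' };
    }
    return state;
  }
  if (budget.maxTokens > 0 && usage.tokensUsed >= budget.maxTokens * warnAt) {
    return { lifecycle: 'WARNING_THRESHOLD', action: 'WARN' };
  }
  return { lifecycle: 'WITHIN_BUDGET', action: 'NONE' };
}
```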
Budget artifacts:
- `.sentinelayer/observability/error-daemon/budget-state.json`
- `.sentinelayer/observability/error-daemon/budget-events.ndjson`
- `.sentinelayer/observability/error-daemon/budget-runs/budget-check-*.json`
Daemon operator control commands now provide unified queue/assignment/jira/budget visibility with explicit stop controls:
- `sl daemon control --json`
- `sl daemon control snapshot --status ASSIGNED,BLOCKED --agent maya.markov@sentinelayer.local --json`
- `sl daemon control stop <work-item-id> --mode QUARANTINE --reason "manual triage hold" --confirm --json`
- `sl daemon control stop <work-item-id> --mode SQUASH --reason "kill switch activated" --confirm --json`
Control-plane snapshot fields include:
- per-work-item budget health color (`GREEN`, `YELLOW`, `RED`)
- session timers (`sessionElapsedSeconds`, `sessionIdleSeconds`)
- assignment + Jira linkage (`assignedAgentIdentity`, `assignmentStatus`, `jiraIssueKey`, `jiraStatus`)
- agent roster aggregates (`activeWorkItemCount`, `blockedCount`, `squashedCount`, longest-session duration)
Operator control artifacts:
- `.sentinelayer/observability/error-daemon/operator-control-state.json`
- `.sentinelayer/observability/error-daemon/operator-events.ndjson`
- `.sentinelayer/observability/error-daemon/operator-snapshots/operator-snapshot-*.json`
Daemon lineage commands now index reproducibility links across queue, assignment, Jira, budget, and operator artifacts:
- `sl daemon lineage build --json`
- `sl daemon lineage list --status ASSIGNED,BLOCKED --json`
- `sl daemon lineage show <work-item-id> --json`
Lineage index fields include:
- work-item links (`agentIdentity`, `assignmentStatus`, `loopRunId`, `jiraIssueKey`, `budgetLifecycleState`)
- artifact pointers (queue/ledger/jira/budget/operator state files + per-work-item run artifacts)
- reproducibility run catalogs (`errorDaemonRuns`, `budgetChecks`, `operatorSnapshots`)
Lineage artifacts:
- `.sentinelayer/observability/error-daemon/lineage/lineage-index.json`
- `.sentinelayer/observability/error-daemon/lineage/lineage-events.ndjson`
Daemon hybrid mapping commands now combine deterministic signal routing with on-demand import-graph expansion and semantic scoring:
- `sl daemon map scope <work-item-id> --max-files 40 --graph-depth 2 --json`
- `sl daemon map list --work-item-id <work-item-id> --json`
- `sl daemon map show <work-item-id> --json`
Hybrid scope map output includes:
- deterministic seed files from endpoint/error/service token matches
- import-graph overlay (`graphDepth`) from seed files
- semantic scoring from endpoint/signal token matches in file content
- ranked scoped file set with per-file reasons (`deterministic_path_match`, `semantic_content_match`, `import_graph_distance`)
Hybrid mapping artifacts:
- `.sentinelayer/observability/error-daemon/mapping/hybrid-map-index.json`
- `.sentinelayer/observability/error-daemon/mapping/hybrid-map-events.ndjson`
- `.sentinelayer/observability/error-daemon/mapping/runs/hybrid-map-*.json`
Daemon reliability commands now support scheduled synthetic checks and maintenance-billboard automation:
- `sl daemon reliability run --region us-east-1 --timezone America/New_York --json`
- `sl daemon reliability run --simulate-failure aidenid_password_reset_flow --json`
- `sl daemon reliability status --json`
- `sl daemon maintenance status|on|off --json`
Lane behavior:
- failures enqueue deterministic daemon error events (`source=reliability_lane`) and execute one worker tick
- failures can auto-enable the maintenance billboard for operator/HITL visibility
- passing runs can automatically clear reliability-opened maintenance state
- manual maintenance controls remain available (`maintenance on|off`) with a reason/actor audit trail
Reliability artifacts:
- `.sentinelayer/observability/error-daemon/reliability/lane-config.json`
- `.sentinelayer/observability/error-daemon/reliability/maintenance-billboard.json`
- `.sentinelayer/observability/error-daemon/reliability/reliability-events.ndjson`
- `.sentinelayer/observability/error-daemon/reliability/runs/reliability-lane-*.json`
The CLI now includes deterministic MCP registry commands:
- `sl mcp schema show`
- `sl mcp schema write`
- `sl mcp registry init-aidenid`
- `sl mcp registry init-aidenid-adapter`
- `sl mcp registry validate --file <path>`
- `sl mcp registry validate-aidenid-adapter --file <path> [--registry-file <path>]`
- `sl mcp server init --id <server-id> --registry-file <path>`
- `sl mcp server validate --file <path>`
- `sl mcp bridge init-vscode --server-id <server-id> --server-config <path>`
Use `init-aidenid` to scaffold an Anthropic-compatible tool schema wrapper for AIdenID provisioning APIs, then customize transport/auth before runtime wiring.

Use `init-aidenid-adapter` to scaffold a deterministic AIdenID provisioning API contract (tool binding -> HTTP path/method -> response field mapping) and cross-check it against the registry with `validate-aidenid-adapter`.
The CLI now includes deterministic plugin/template/policy pack governance commands:
- `sl plugin init --id <plugin-id> --pack-type plugin|template_pack|policy_pack|hybrid --stage pre_scan|scan|post_scan|reporting`
- `sl plugin validate --file <manifest.json>`
- `sl plugin list`
- `sl plugin order [--stage <stage>]` (deterministic load-order resolution + cycle detection)
The CLI now includes policy-pack selection commands:
- `sl policy list`
- `sl policy use strict --scope project`
- `sl policy use compliance-soc2 --scope global`
Built-in packs: `community` (default), `strict`, `compliance-soc2`, `compliance-hipaa`.
Policy selection is stored in config (`defaultPolicyPack`) and applied during `scan init` / `scan validate` / `scan precheck` profile resolution.
The CLI now includes an `sl ai` surface for AIdenID identity provisioning:
- `sl ai provision-email --json` (dry-run artifact generation)
- `sl ai provision-email --execute --api-key <key> --org-id <id> --project-id <id>` (live API call)
- `sl ai identity list --json` (list locally tracked identities)
- `sl ai identity show <identity-id> --json`
- `sl ai identity revoke <identity-id> --execute --api-key <key> --org-id <id> --project-id <id>`
- `sl ai identity create-child <parent-identity-id> --event-budget 25 --execute --api-key <key> --org-id <id> --project-id <id>`
- `sl ai identity lineage <identity-id> --json`
- `sl ai identity revoke-children <parent-identity-id> --execute --api-key <key> --org-id <id> --project-id <id>`
- `sl ai identity domain create|verify|freeze ...` (domain proof + freeze lifecycle controls)
- `sl ai identity target create|verify|show ...` (managed target policy/proof controls)
- `sl ai identity site create <identity-id> --domain-id <domain-id> --execute ...`
- `sl ai identity site list [--identity-id <identity-id>]`
- `sl ai identity events <identity-id> --json` (list inbound events with cursor/limit support)
- `sl ai identity latest <identity-id> --json` (latest event + extraction metadata)
- `sl ai identity wait-for-otp <identity-id> --min-confidence 0.8 --timeout 60 --json`
Identity lifecycle records are persisted to:
`.sentinelayer/aidenid/identity-registry.json`
Credential env fallbacks for live execution:
`AIDENID_API_KEY`, `AIDENID_ORG_ID`, `AIDENID_PROJECT_ID`
Extraction responses include deterministic source metadata (RULES vs LLM) and confidence scores.
- Set local token: `echo "SENTINELAYER_TOKEN=<your-token>" >> .env`
- Inject repo secret: `gh secret set SENTINELAYER_TOKEN --repo <owner/repo>`
- Verify injection: `gh secret list --repo <owner/repo>`
- For manual setup details: https://sentinelayer.com/docs/getting-started/install-workflow

BYOK mode (no Sentinelayer token):
- keep generated `docs/spec.md`, `docs/build-guide.md`, `prompts/execution-prompt.md`, and `tasks/todo.md`
- run your coding agent directly with your provider key (`OPENAI_API_KEY` / `ANTHROPIC_API_KEY` / `GOOGLE_API_KEY`)
- the generated workflow is a BYOK reminder workflow; wire `SENTINELAYER_TOKEN` later to enable the Omar Gate action
- `SENTINELAYER_API_URL` (default: `https://api.sentinelayer.com`)
- `SENTINELAYER_WEB_URL` (default: `https://sentinelayer.com`)
- `SENTINELAYER_DISABLE_KEYRING=1` (force file-based credential storage)
- `AIDENID_API_KEY`, `AIDENID_ORG_ID`, `AIDENID_PROJECT_ID` (used by `sl ai provision-email --execute`)
The CLI supports layered config resolution:
- global: `~/.sentinelayer/config.yml`
- project: `.sentinelayer.yml` at repo root
- env overrides: `SENTINELAYER_API_URL`, `SENTINELAYER_WEB_URL`, `SENTINELAYER_TOKEN`, `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, `GOOGLE_API_KEY`
Commands:
- `sentinelayer-cli config list --scope resolved --json`
- `sentinelayer-cli config get apiUrl --scope resolved`
- `sentinelayer-cli config set defaultModelProvider openai --scope project`
- `sentinelayer-cli config edit --scope project`
Run deterministic mapping and emit `CODEBASE_INGEST.json`:
- `sentinelayer-cli ingest map --path .`
- `sentinelayer-cli ingest map --path . --json`
- `sentinelayer-cli ingest map --path . --output-file artifacts/CODEBASE_INGEST.json`
The ingest artifact includes language/LOC breakdown, framework hints, entry points, risk-surface hints, and a bounded file index to support deterministic handoff context.
Generate a local `SPEC.md` without calling the API:
- `sentinelayer-cli spec list-templates`
- `sentinelayer-cli spec show-template api-service`
- `sentinelayer-cli spec generate --path . --template api-service --description "Build secure autonomous review orchestration"`
- `sentinelayer-cli spec show --path .`
- `sentinelayer-cli spec show --path . --plain`
- `sentinelayer-cli spec regenerate --path . --dry-run --json`
- `sentinelayer-cli spec regenerate --path . --max-diff-lines 120`
- `sentinelayer-cli spec regenerate --path . --dry-run --quiet`
The generator uses deterministic ingest context plus template architecture/security checklists.
Generate a deterministic base spec, then optionally refine it with a provider model:
- `sentinelayer-cli spec generate --path . --template api-service --description "Harden auth and release workflows" --ai`
- `sentinelayer-cli spec generate --path . --ai --provider openai --model gpt-5.3-codex --max-cost 1 --warn-at-percent 80`
`--ai` mode behavior:
- a deterministic `SPEC.md` draft is always generated first
- the AI refinement prompt includes ingest summary + template context + base markdown
- usage is recorded in `.sentinelayer/cost-history.json`
- telemetry usage/stop events are recorded in `.sentinelayer/observability/run-events.jsonl`
- budget governors apply (`--max-cost`, `--max-tokens`, `--max-runtime-ms`, `--max-tool-calls`, `--max-no-progress`)
Generate execution prompts directly from SPEC.md:
- `sentinelayer-cli prompt generate --path . --agent codex`
- `sentinelayer-cli prompt preview --path . --agent claude --max-lines 40`
- `sentinelayer-cli prompt show --path . --agent codex`
- `sentinelayer-cli prompt show --path . --file docs/PROMPT_codex.md --plain`
Supported targets: `claude`, `cursor`, `copilot`, `codex`, `generic`.
Generate and validate a spec-aligned security workflow:
- `sentinelayer-cli scan init --path . --non-interactive`
- `sentinelayer-cli scan init --path . --has-e2e-tests yes --playwright-mode auto`
- `sentinelayer-cli scan validate --path . --json`
`scan init` writes `.github/workflows/omar-gate.yml` and derives:

- `scan_mode` + `severity_gate` from spec risk profile
- `playwright_mode` from spec signals + optional E2E wizard/flags
- `sbom_mode` from supply-chain/dependency signals in spec
`scan validate` checks workflow drift against the current spec profile and exits non-zero when mismatched.
AI-assisted pre-scan triage (budgeted + telemetry-instrumented):
- `sentinelayer-cli scan precheck --path . --provider openai --model gpt-5.3-codex`
- `sentinelayer-cli scan precheck --path . --max-cost 0.5 --warn-at-percent 80 --json`
`scan precheck` writes an AI report to `.sentinelayer/reports/scan-precheck-*.md` (or the configured output root), records usage in `.sentinelayer/cost-history.json`, and emits usage/stop events to `.sentinelayer/observability/run-events.jsonl`.
Generate phase-by-phase implementation guides from `SPEC.md`:
- `sentinelayer-cli guide generate --path .`
- `sentinelayer-cli guide generate --path . --output-file docs/BUILD_GUIDE.md`
- `sentinelayer-cli guide show --path .`
- `sentinelayer-cli guide show --path . --plain`
Export phases as issue-ready payloads:
- `sentinelayer-cli guide export --path . --format jira`
- `sentinelayer-cli guide export --path . --format linear`
- `sentinelayer-cli guide export --path . --format github-issues`
`guide generate` writes `BUILD_GUIDE.md` with per-phase effort estimates, dependencies, implementation tasks, and acceptance criteria. `guide export` transforms phases into tracker-friendly artifacts.
`src/ai/client.js` now provides a reusable contract for future AI-enabled commands:
- provider support: `openai`, `anthropic`, `google`
- provider auto-detection from `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, `GOOGLE_API_KEY`
- model resolution defaults per provider with explicit override support
- retry + exponential backoff on retryable statuses (`429`, `5xx`)
- non-stream and streaming invocation APIs with provider-normalized text output
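The retry contract above can be sketched as a small helper — an illustrative sketch of the general technique, not the actual `src/ai/client.js` source; the attempt count, base delay, and jitter factor are assumptions.

```javascript
// Illustrative retry helper (NOT the actual src/ai/client.js source).
// Retryable statuses (429, 5xx) back off exponentially with jitter;
// other statuses fail fast.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function withRetry(call, { maxAttempts = 3, baseDelayMs = 250 } = {}) {
  let lastError;
  for (let attempt = 0; attempt < maxAttempts; attempt += 1) {
    try {
      return await call();
    } catch (err) {
      lastError = err;
      const retryable = err.status === 429 || (err.status >= 500 && err.status < 600);
      if (!retryable || attempt === maxAttempts - 1) throw err;
      // Exponential backoff with up to 25% jitter: base * 2^attempt.
      await sleep(baseDelayMs * 2 ** attempt * (1 + Math.random() * 0.25));
    }
  }
  throw lastError;
}
```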
The CLI now includes deterministic cost-ledger commands:
- `sentinelayer-cli cost show --path .`
- `sentinelayer-cli cost record --path . --provider openai --model gpt-5.3-codex --input-tokens 1000 --output-tokens 500`
Ledger path:
`.sentinelayer/cost-history.json` (or configured output root)
Budget controls in cost record:
- `--max-cost <usd>` (default `1`)
- `--max-tokens <count>` (default `0`, disabled)
- `--max-runtime-ms <n>` (default `0`, disabled)
- `--max-tool-calls <n>` (default `0`, disabled)
- `--max-no-progress <count>` diminishing-returns guard (default `3`)
- `--warn-at-percent <n>` near-limit warning threshold (default `80`)
Usage counters tracked per invocation/session:
- `--duration-ms <n>`
- `--tool-calls <n>`
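The budget semantics above (hard stop at `--max-cost`, warning at `--warn-at-percent` of the limit) can be sketched as a pure check. The return shape is an illustrative assumption, not the CLI's internal API:

```javascript
// Budget gate sketch: warn near the limit, block on breach.
// Mirrors the --max-cost / --warn-at-percent defaults (1 USD, 80%).
function checkBudget({ spentUsd, maxCostUsd = 1, warnAtPercent = 80 }) {
  if (spentUsd >= maxCostUsd) {
    return { level: 'stop', stopClass: 'MAX_COST_EXCEEDED' };
  }
  if (spentUsd >= maxCostUsd * (warnAtPercent / 100)) {
    return { level: 'warn' };
  }
  return { level: 'ok' };
}
```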
Each `cost record` call now emits observability events to `.sentinelayer/observability/run-events.jsonl`, including normalized usage snapshots and blocking stop-class events when budgets are exceeded.
The CLI now supports a deterministic run-event ledger and stop-class schema:
```
sentinelayer-cli telemetry show --path .
sentinelayer-cli telemetry record --path . --event-type tool_call --tool-calls 1
sentinelayer-cli telemetry record --path . --event-type run_stop --stop-class MAX_RUNTIME_MS_EXCEEDED --reason-codes MAX_RUNTIME_MS_EXCEEDED --blocking
```
Ledger contract:
- file: `.sentinelayer/observability/run-events.jsonl`
- event types: `run_start`, `run_step`, `tool_call`, `usage`, `budget_check`, `run_stop`
- stop classes: `MAX_COST_EXCEEDED`, `MAX_OUTPUT_TOKENS_EXCEEDED`, `DIMINISHING_RETURNS`, `MAX_RUNTIME_MS_EXCEEDED`, `MAX_TOOL_CALLS_EXCEEDED`, `MANUAL_STOP`, `ERROR`, `UNKNOWN`
- Node `>=20.0`
- network access to Sentinelayer API/web
- optional: GitHub CLI (`gh`) authenticated for secret injection
This repo includes `.github/workflows/release.yml`.
Automated version/tag PR flow is handled by `.github/workflows/release-please.yml`.
Primary gate enforcement is Omar-first:
- `.github/workflows/omar-gate.yml` (Omar Gate) for AppSec findings and merge thresholds
- `.github/workflows/quality-gates.yml` (Quality Summary) for deterministic build/test/package checks
- `.github/workflows/attestations.yml` (Attestation Summary) for provenance verification
Prerequisites:
- npm package name is available (`sentinelayer-cli`)
- one publish auth path is configured:
  - repository secret `NPM_TOKEN` with publish access, or
  - npm trusted publishing for this repository/tag workflow
Release options:
- Merge to `main` and let Release Please open/update the release PR and tag.
- Push a tag like `v0.1.1` to publish automatically (or via release-please tag creation).
- Run `Release` manually (`workflow_dispatch`) to validate gates and rollback readiness without publishing.
- Tag-triggered publish resolves auth mode at runtime (`NPM_TOKEN` first, otherwise trusted publishing OIDC).
- If neither auth mode is available, publish fails closed with an explicit workflow error.
Release publish now enforces tarball checksum-manifest validation and attestation verification bound to `.github/workflows/release.yml` before `npm publish`.
Release guardrails now require successful upstream checks on the target commit:
- `Quality Summary`
- `Omar Gate`
- `Attestation Summary`
```
npm run verify
```
This runs:
- CLI syntax check
- unit tests for core offline generators/config/cost tracking
- end-to-end automated scaffolding tests (mock API + mock `gh`)
- coverage enforcement (`>=80%` lines/functions/statements, `>=70%` branches for core modules)
- package tarball dry-run
Additional test commands:
```
npm run test:unit
npm run test:e2e
npm run test:coverage
```
The CLI now supports a command tree, while keeping slash-command compatibility:
- `sentinelayer-cli init <project-name>` runs scaffold/auth generation (legacy top-level invocation still works)
- `sentinelayer-cli omargate deep --path <repo>` runs a local credential/policy scan and writes `.sentinelayer/reports/omargate-deep-*.md` (non-zero exit if P1 findings exist)
- `sentinelayer-cli audit [--agents <ids>] [--max-parallel <n>]` runs orchestrated audit agents and writes `.sentinelayer/audits/<run-id>/AUDIT_REPORT.{md,json}`
- `sentinelayer-cli audit registry` lists built-in/customized audit-agent registry records
- `sentinelayer-cli audit security` runs the security specialist agent and writes a dedicated `SECURITY_AGENT_REPORT.md`
- `sentinelayer-cli audit architecture` runs the architecture specialist agent and writes a dedicated `ARCHITECTURE_AGENT_REPORT.md`
- `sentinelayer-cli audit testing` runs the testing specialist agent and writes a dedicated `TESTING_AGENT_REPORT.md`
- `sentinelayer-cli audit performance` runs the performance specialist agent and writes a dedicated `PERFORMANCE_AGENT_REPORT.md`
- `sentinelayer-cli audit compliance` runs the compliance specialist agent and writes a dedicated `COMPLIANCE_AGENT_REPORT.md`
- `sentinelayer-cli audit documentation` runs the documentation specialist agent and writes a dedicated `DOCUMENTATION_AGENT_REPORT.md`
- `sentinelayer-cli audit package [--run-id <id>]` builds/rebuilds unified DD package artifacts from the requested (or latest) run
- `sentinelayer-cli audit replay <run-id>` reruns the same selected agent set and writes a replay comparison artifact
- `sentinelayer-cli audit diff <base-run-id> <candidate-run-id>` compares two runs and emits reproducibility drift deltas
- `sentinelayer-cli audit local --path <repo>` runs legacy readiness + scan audit and writes `.sentinelayer/reports/audit-*.md`
- `sentinelayer-cli persona orchestrator --mode <builder|reviewer|hardener> --path <repo>` generates mode-specific execution instructions with repo context
- `sentinelayer-cli apply --plan tasks/todo.md --path <repo>` parses plan tasks into a deterministic execution-order preview
- `sentinelayer-cli auth login|status|logout` manages persistent CLI sessions for long-running automation
- `sentinelayer-cli auth sessions|revoke` supports session inventory and explicit token revocation controls
- `sentinelayer-cli watch run-events --run-id <id>` streams runtime events with local artifact persistence
- `sentinelayer-cli daemon error record|worker|queue` ingests admin errors and routes deterministic daemon queue work items
- `sentinelayer-cli daemon assign claim|heartbeat|release|reassign|list` manages shared daemon assignment leases and lifecycle states
- `sentinelayer-cli daemon jira open|start|comment|transition|list` manages Jira lifecycle evidence tied to daemon work items
- `sentinelayer-cli daemon budget check|status` enforces budget warning/quarantine/kill governance with reproducible artifacts
- `sentinelayer-cli daemon control|snapshot|stop` provides operator roster snapshots and explicit confirmed stop controls
- `sentinelayer-cli daemon lineage build|list|show` indexes reproducible work-item artifact lineage across queue/assignment/jira/budget/operator runs
- `sentinelayer-cli daemon map scope|list|show` builds hybrid deterministic+semantic impact scopes with import-graph overlay for daemon work items
- `sentinelayer-cli daemon reliability run|status` and `daemon maintenance status|on|off` operate the midnight synthetic lane and maintenance billboard lifecycle
- `sentinelayer-cli mcp schema|registry|server|bridge ...` manages MCP registry schema, server configs, and VS Code bridge scaffolds
- `sentinelayer-cli plugin init|validate|list|order` manages plugin/template/policy packs and deterministic load-order governance
- `sentinelayer-cli policy list|use <pack-id>` manages active policy pack selection (`community`, `strict`, `compliance-soc2`, `compliance-hipaa`, plugin packs)
- `sentinelayer-cli ai provision-email` scaffolds and optionally executes AIdenID identity provisioning requests
- `sentinelayer-cli ai identity list|show|revoke|create-child|lineage|revoke-children` manages local identity lifecycle and lineage workflows
- `sentinelayer-cli ai identity domain create|verify|freeze` manages domain proof registration and containment controls
- `sentinelayer-cli ai identity target create|verify|show` manages target policy registration and verification controls
- `sentinelayer-cli ai identity site create|list` manages ephemeral callback site provisioning and local lifecycle tracking
- `sentinelayer-cli ai identity events|latest|wait-for-otp` manages extraction/event polling for OTP and verification-link retrieval
- `sentinelayer-cli chat ask` runs low-latency prompt/response chat with transcript persistence
- `sentinelayer-cli review [path] [--diff|--staged]` runs layered deterministic review and writes reproducible artifacts under `.sentinelayer/reviews/<run-id>/`
- `sentinelayer-cli review [path] [--diff|--staged] [--ai]` adds budget-governed AI reasoning over deterministic findings
- `sentinelayer-cli review show|export|accept|reject|defer ...` manages reconciled unified reports and HITL adjudication
- `sentinelayer-cli review replay|diff ...` runs reproducibility replay and run-to-run drift comparisons
- `sentinelayer-cli review scan --mode full|diff|staged` runs a lightweight deterministic scan mode for compatibility
- add `--json` to `omargate`, `audit`, `persona orchestrator`, or `apply` for machine-readable summaries in CI
- add `--output-dir <dir>` to local commands to write reports outside the default `.sentinelayer/reports`
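In CI, the `--json` summaries can gate merges deterministically. A sketch assuming a hypothetical summary shape with a `findings` array of `{ severity }` entries (check the actual `--json` output for the real field names):

```javascript
// Fail the gate if any P1 finding appears in a --json summary.
// The { findings: [{ severity }] } shape is an assumption for illustration.
function shouldFailGate(summaryJson) {
  const summary = JSON.parse(summaryJson);
  return (summary.findings || []).some((f) => f.severity === 'P1');
}
```

A CI step might pipe the command output into a small script built on this check and exit non-zero when it returns `true`.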
Legacy slash commands are still supported:
```
sentinelayer-cli /omargate deep --path .
sentinel /omargate deep --path .
```
Roadmap:
- persona orchestrator command set for specialized review/execution modes
- `Authentication timed out`: rerun and approve the browser session more quickly.
- `GitHub CLI not installed`: install `gh` or use the manual fallback.
- `Invalid repo format`: use the exact `owner/repo` form.
- `Missing token in workflow`: ensure `.github/workflows/omar-gate.yml` maps `sentinelayer_token: ${{ secrets.SENTINELAYER_TOKEN }}`.