Compose gated AI workflows across models. Evidence trails prove they did what you asked.
> **Note:** Coglan is in early development and under active construction. APIs, syntax, and CLI behavior may change between releases. Feedback and contributions are welcome.
Website | Quickstart | Language Reference | Cookbook
Coglan is a cognitive programming language and coding agent environment. The .cog format gives teams structured, resumable, and verifiable workflows that run across CLI, TUI, VS Code, CI/CD, and MCP.
The syntax is compact enough to fit in a single system prompt. Models can author and execute .cog workflows directly. An agent encounters a complex task, writes a .cog file, runs it through the full runtime with gates and evidence, and feeds verified results back into the conversation.
```
load src/**/*.ts as source

scan with model google/gemini-3.1-pro-preview using source:
    Find security issues. Return JSON with file, severity, description.
    -> findings

check findings:
    every finding has file, severity, description
    at least 3 findings
    otherwise retry scan - "Return strict JSON array with required fields."

challenge findings with model models.fast using findings:
    Critique these findings for blind spots and false positives.
    -> reviewed

output reviewed as json
```
Every run produces a complete evidence trail in `~/.coglan/runs/`.
Requires Node.js 20 or later.
```sh
git clone https://github.com/Arboretum-Projects/coglan.git
cd coglan
npm ci && npm run build
npm link
```
```sh
export OPENROUTER_API_KEY=<your-key>

# Interactive agent
coglan

# Run a workflow
coglan run examples/quickstart.cog

# Validate before running
coglan validate examples/quickstart.cog

# Headless (CI / scripting)
coglan --headless "Summarize this repo"
```

| Step | Purpose | Model call? |
|---|---|---|
| `pass` | Cognitive reasoning with optional tool and skill access | Yes |
| `worker` | Autonomous multi-turn agent (explore, analyze, deliver) | Yes |
| `challenge` | Adversarial critique of a prior output | Yes |
| `route` | Model-decided conditional execution paths | Yes |
| `call` | Deterministic execution (shell, Python, Node, nested .cog) | No |
| `transform` | Data reshaping (filter, sort, extract, count, flatten) | No |
| `parallel` | Concurrent execution of independent steps | Mixed |
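The deterministic step types are not demonstrated elsewhere in this README, so the following is a purely hypothetical sketch: the step names `call` and `transform` come from the table above, but every other line of syntax is an assumption to verify against the Language Reference before use.

```
# HYPOTHETICAL syntax sketch -- check the Language Reference for real forms
call lint using source:
    shell: npx eslint src --format json
    -> lint_report

transform lint_report:
    filter severity == "error"
    -> errors

output errors as json
```

The general shape here (`step name ... using input:`, an indented body, and `-> variable`) mirrors the model-backed steps shown in this README; only the `shell:` and `filter` lines are guesses.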
Route a large-context model for log ingestion, a frontier model for deep reasoning, and a fast model for synthesis, all in one file. Every step can target a specific model with `with model`. Two forms:
```
# Pin to an exact model
scan with model google/gemini-3.1-pro-preview using source:
    Find security issues.
    -> findings

# Reference a config alias — portable across environments
challenge findings with model models.fast using findings:
    Critique for false positives.
    -> reviewed
```
Aliases map to model slots defined in `~/.coglan/config.toml`:

```toml
[models]
default = "anthropic/claude-sonnet-4.6"  # Used when no model is specified
deep = "google/gemini-3.1-pro-preview"   # Complex reasoning (auto-detected)
fast = "minimax/minimax-m2.5"            # Summaries, compaction
```

Resolution order: explicit `with model` on the step > auto-deep detection > `models.default`.
This means a single .cog file stays portable. Your team runs the same workflow on different providers by changing config, not code.
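For example, a teammate on a different provider could keep every .cog file unchanged and remap only the slots. The model IDs below are deliberately generic placeholders, not real slugs:

```toml
[models]
default = "some-provider/general-model"   # Illustrative placeholder IDs only
deep = "some-provider/reasoning-model"
fast = "some-provider/small-fast-model"
```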
```
triage with model models.deep using raw_logs:
    Extract all critical crash events as strict JSON.
    -> critical_events

worker deep_research with model anthropic/claude-opus-4.6 using critical_events:
    Return a detailed architectural root-cause report.
    -> architecture_report

worker executive_summary with model models.fast using architecture_report:
    Return a concise JSON brief with summary, root_cause, recommended_fix.
    -> final_brief

output final_brief as json
```
Gates are enforced runtime assertions. When a step finishes, the runtime parses the output, checks your assertions, and holds execution until they pass, retrying with specific feedback when needed.
```
check report:
    report has summary, findings
    every finding has severity, file, description
    at least 1 findings
    otherwise retry scan - "Missing required fields."
```
Six assertion types: `every X has`, `has_keys`, `at least N`, `not empty`, `matches`, `count`.
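As an illustrative sketch of combining several assertion types in one gate: only `has`, `every X has`, and `at least N` appear verbatim in this README, so the `not empty` and `matches` lines below are assumed surface forms to confirm against the Language Reference.

```
# Sketch only; "not empty" and "matches" forms are assumptions
check brief:
    brief has summary, root_cause
    summary not empty
    root_cause matches "timeout|race|memory"
    otherwise retry triage - "Populate summary and root_cause."
```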
- CLI: Interactive agent with tool access, session persistence, conversation branching
- TUI: Ink-based terminal UI with syntax highlighting, panels, and live workflow progress
- Headless: `--headless` for CI pipelines and scripting (snapshot or JSONL event stream)
- VS Code: Syntax highlighting, diagnostics, completions, snippets, console chat, evidence browser
- MCP Server: Expose Coglan as tools to any MCP client
One .cog contract works across all surfaces. Runtime behavior, checkpoints, and gate semantics stay consistent.
Runnable workflows in `examples/`:

```sh
coglan run --fresh examples/quickstart.cog        # Pass + gate + challenge
coglan run --fresh examples/worker.cog            # Autonomous worker with tools
coglan run --fresh examples/parallel-workers.cog  # Dual concurrent workers
coglan run --fresh examples/route-selective.cog   # Model-decided routing
coglan run --fresh examples/transform.cog         # Deterministic data pipeline
coglan run --fresh examples/ci-review.cog         # PR review for CI pipelines
coglan run --fresh examples/ci-docs-check.cog     # Detect stale documentation
```

`.cog` workflows run anywhere the CLI runs. Drop one into a GitHub Actions step, a pre-commit hook, or a scheduled job. See ci-review.cog and ci-docs-check.cog for starting points.
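A GitHub Actions step wrapping ci-review.cog might look like the sketch below. The surrounding checkout/install steps are omitted, and the secret name is an assumption; only the `coglan` commands come from this README.

```yaml
# Sketch of a CI job step; adapt setup and secret names to your pipeline
- name: Run Coglan PR review
  env:
    OPENROUTER_API_KEY: ${{ secrets.OPENROUTER_API_KEY }}
  run: |
    coglan validate examples/ci-review.cog
    coglan run --fresh examples/ci-review.cog
```

Running `coglan validate` first fails the job on syntax errors before any model call is made.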
| Doc | What it covers |
|---|---|
| Quickstart | Setup, first run, VS Code integration |
| Language Reference | Full .cog syntax and runtime semantics |
| Cookbook | Practical workflow recipes |
| Budgeting Guide | Context, turn, output, and total budget behavior |
| Language Gotchas | Edge cases and authoring patterns |
| CI Guide | CI integration and quality gate profiles |
| Governance Guide | Gate policies and change control |
VS Code extension docs: extensions/vscode-coglan/README.md
See CONTRIBUTING.md for setup and PR workflow.
- Security issues: SECURITY.md
- Code of conduct: CODE_OF_CONDUCT.md
If Coglan is useful to you, consider supporting the work: ko-fi.com/arkitecc