An autonomous system that translates your high-level intent into the grueling work of dozens of specialized AI agents. You focus on the architectural vision; Cortex effortlessly compiles your thoughts into strict specifications, dispatches headless workers, and manages the Git state.
Cortex is built on a philosophy of strict separation of concerns, avoiding the "context collapse" that plagues open-ended AI coding agents. It mirrors a highly functional engineering organization across three layers:
- Repository: `gemini-cli-headless`
- Function: Raw LLM API execution, subprocess routing, error handling, and JSON parsing. It knows nothing about your project or your intent. It is simply the resilient engine that powers the higher layers.
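The engine layer described above can be pictured as a thin, resilient subprocess wrapper. The sketch below is a hypothetical helper (not the actual `gemini-cli-headless` API): it runs a command, parses stdout as JSON, and retries on transient failures, while knowing nothing about the project or intent.

```python
import json
import subprocess

def run_headless(cmd, retries=3, timeout=120):
    """Run a headless CLI command, parse its stdout as JSON, and retry
    on transient failures. Hypothetical helper illustrating the layer's
    job: execution, error handling, and JSON parsing only."""
    last_err = None
    for attempt in range(retries):
        try:
            proc = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout)
            if proc.returncode != 0:
                raise RuntimeError(f"exit {proc.returncode}: {proc.stderr.strip()}")
            return json.loads(proc.stdout)  # fail loudly on malformed JSON
        except (RuntimeError, json.JSONDecodeError, subprocess.TimeoutExpired) as e:
            last_err = e  # remember the failure and retry
    raise RuntimeError(f"all {retries} attempts failed: {last_err}")

# Usage (stubbed with `echo` so the sketch runs anywhere):
result = run_headless(["echo", '{"status": "ok"}'])
```

Because the wrapper only sees commands and JSON, the higher layers can swap models or prompts without touching it.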
- Function: Takes a specific, rigid machine contract (`IRQ.md`) and deterministically executes it. It manages the Git-Native State Machine, the Amnesia Engine, the Doer/QA loop, and The Glass Dashboard. It handles the "How." (See the detailed architecture of Layer 1 in the Architecture Docs.)
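The Doer/QA loop mentioned above can be sketched as a deterministic retry loop: the Doer produces an artifact, QA renders a verdict plus a reprimand, and the loop repeats until QA passes or attempts run out. Everything here is a hypothetical shape, with callables standing in for the real agents and contracts (`IRQ.md`/`QAR.md`).

```python
def doer_qa_loop(contract, doer, qa, max_attempts=3):
    """Run the Doer until QA passes or attempts are exhausted.
    `doer` and `qa` are stand-ins for the real agents; on success
    the real system would commit the artifact to Git."""
    feedback = None
    for attempt in range(1, max_attempts + 1):
        artifact = doer(contract, feedback)        # Doer attempts the task
        passed, feedback = qa(contract, artifact)  # QA: verdict + reprimand
        if passed:
            return artifact
    raise RuntimeError(f"QA rejected all {max_attempts} attempts: {feedback}")

# Toy run: the Doer succeeds only after one QA reprimand.
def doer(contract, feedback):
    return "v2" if feedback else "v1"

def qa(contract, artifact):
    ok = artifact == "v2"
    return ok, None if ok else "needs logout handler"

result = doer_qa_loop({"task": "add logout button"}, doer, qa)
```

Committing only QA-approved artifacts is what makes Git usable as the state machine: every commit is a known-good checkpoint.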
- Function: This layer focuses entirely on the "What" and the "Why." It is a continuous, evolving conversation with the human user (via the Manager persona). The goal is to build and refine the Application Specification Document (ASD).
- The Workflow: You never write Jira tickets or prompt the "Doer" agents directly. You talk to the Layer 2 Architect. The Architect updates the ASD, determines which components are needed, autonomously writes the Layer 1 contracts (`IRQ.md`, `QAR.md`), and dispatches the Layer 1 Factory Floor to build them. If Layer 1 hits a wall, it bubbles the error up to Layer 2, which translates it into a high-level architectural question for you.
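The error-bubbling described above follows a familiar exception-translation pattern: a concrete Layer 1 failure is caught and rephrased as a human-facing question, stripped of terminal noise. The names and shapes below are hypothetical, for illustration only.

```python
class ExecutionError(Exception):
    """Layer 1 failure: concrete, terminal-level detail."""

class ArchitecturalQuestion(Exception):
    """Layer 2 escalation: phrased for the human, noise removed."""

def dispatch(contract, factory):
    """Run a Layer 1 contract; translate any failure into a
    high-level question (hypothetical names and shape)."""
    try:
        return factory(contract)
    except ExecutionError as e:
        raise ArchitecturalQuestion(
            f"Component '{contract['component']}' could not be built ({e}). "
            f"Should the spec be relaxed or the component split?"
        ) from e

# Toy run: the factory fails, and only the translated question surfaces.
def failing_factory(contract):
    raise ExecutionError("regex never matched the config format")

try:
    dispatch({"component": "session-manager"}, failing_factory)
except ArchitecturalQuestion as q:
    question = str(q)
```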
The biggest failure mode of current "Agentic AI" systems (like AutoGPT) is mashing Layer 1 and Layer 2 into a single prompt: the agent tries to figure out what you want while simultaneously figuring out how to write the regex that parses a specific file. The result is catastrophic context collapse and infinite loops.
By separating Alignment (Layer 2) from Execution (Layer 1), Cortex protects the human from the noise of the terminal, and protects the execution agents from the ambiguity of human brainstorming.
- To the human, Cortex feels like a frictionless, conversational "vibe-code" interface.
- To the agents, Cortex feels like a brutal, deterministic factory floor governed by strict "Space-Grade Engineering Specifications."
Dive deeper into the philosophy and mechanics of the Cortex OS:
- The Prompting Architecture
- System Design: Git as the State Machine
- The "Checkpoint" Concept (Time Travel)
- The Artifact Contract & Reprimand Loops
- The Execution Lifecycle: Components & Artifacts
- The Amnesia Engine & Context Injection
- Central Registry & Telemetry Management
- Specialized QA & The Evolution of Skills
- The Glass: Observability Dashboard
- Node.js CLI: Cortex relies on the official Google Gemini CLI for low-level execution.

  ```shell
  npm install -g @google/gemini-cli
  ```
- Git: Cortex uses Git as its native state machine. Ensure `git` is installed and available on your PATH.
```shell
# Clone Cortex
git clone https://github.com/jarek108/Cortex.git
cd Cortex

# Install dependencies (includes gemini-cli-headless)
pip install -r requirements.txt
```

Cortex is designed to be used via the Manager CLI.
- Initialize a Project:

  ```shell
  python tools/init_project.py /path/to/your/project
  ```

- Dispatch a Feature:
  Talk to your Interactive Manager (right here). Once you reach the "Certainty Threshold", run the dispatch command:

  ```shell
  python tools/implement_feature.py /path/to/your/project --summary "Add a logout button..."
  ```

- Monitor via The Glass:
  Open the live dashboard to watch the agents work in real-time:

  ```shell
  python tools/dashboard.py
  ```