Cortex

Cognitive Runtime for Language Models


Quick Start · Usage · Configuration · Plugins · 中文


Modern agent frameworks have brought language models remarkably far — persistent memory, tool orchestration, multi-step planning, and context management are increasingly mature capabilities across the ecosystem. Cortex takes a complementary approach: rather than assembling these capabilities ad hoc, it derives them systematically from cognitive science first principles.

Global Workspace Theory shapes the concurrency model. Complementary Learning Systems govern memory consolidation. Metacognitive conflict monitoring becomes a first-class subsystem with self-tuning thresholds, not a logging layer. Drift-diffusion evidence accumulation replaces ad hoc confidence heuristics. Cognitive load theory drives graduated context pressure response. Each principle is implemented as a type-level architectural constraint in Rust — not as metaphor, but as structure the compiler enforces.

The result is a runtime in which a language model can sustain coherent, self-correcting, goal-directed behavior across time, across interfaces, and under pressure — with every design decision traceable to peer-reviewed theory.

Architecture

Cortex separates cognition into three layers with distinct lifecycles:

| Layer | Name | Substance |
|---|---|---|
| Substrate | Cognitive Hardware | Rust type system + persistence + cognitive subsystems |
| Executive | Execution Protocol | Prompt system + metacognition protocol + system templates |
| Repertoire | Behavioral Library | Skills + learned patterns + utility tracking |

Substrate

The foundation, encoded in Rust's type system:

  • Journal — an event-sourced log records every cognitive act as one of 71 event variants with deterministic replay capability.
  • Turn machine — a ten-state machine governs lifecycle transitions.
  • Memory — flows through a three-stage pipeline (Captured → Materialized → Stabilized) with trust tiers, temporal decay, and graph relationships; recall ranks candidates across six weighted dimensions (BM25, cosine similarity, recency, status, access frequency, graph connectivity).
  • Metacognition — five detectors (DoomLoop, Duration, Fatigue, FrameAnchoring, HealthDegraded) monitor reasoning health with Gratton-adaptive thresholds.
  • Confidence — a drift-diffusion model accumulates evidence across turns.
  • Attention — three channels (Foreground, Maintenance, Emergency) schedule work with anti-starvation guarantees.
  • Goals — organized into strategic, tactical, and immediate tiers.
  • Risk — assessment scores four axes with depth-scaled delegation.
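The six-dimension recall ranking described above can be sketched as a weighted sum. This is a minimal illustration: the field names and weights are assumptions, not Cortex's actual values or API.

```rust
// Sketch of six-dimension recall ranking. Fields mirror the dimensions named
// in the text; the weights are illustrative assumptions.
struct Candidate {
    bm25: f64,        // lexical relevance
    cosine: f64,      // embedding similarity
    recency: f64,     // temporal decay
    status: f64,      // pipeline stage (Captured < Materialized < Stabilized)
    access_freq: f64, // how often this memory is recalled
    graph: f64,       // graph connectivity
}

// Assumed weights, normalized to sum to 1.0.
const WEIGHTS: [f64; 6] = [0.25, 0.25, 0.15, 0.15, 0.10, 0.10];

fn recall_score(c: &Candidate) -> f64 {
    [c.bm25, c.cosine, c.recency, c.status, c.access_freq, c.graph]
        .iter()
        .zip(WEIGHTS)
        .map(|(d, w)| d * w)
        .sum()
}

fn main() {
    let c = Candidate {
        bm25: 0.8, cosine: 0.9, recency: 0.5,
        status: 1.0, access_freq: 0.2, graph: 0.4,
    };
    println!("score = {:.3}", recall_score(&c));
}
```

Normalizing each dimension to [0, 1] before weighting keeps any single signal from dominating the ranking.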

Executive

Four prompt layers that drive the substrate, each with a distinct rate of change:

  • Soul — Ontological commitments: values, epistemology, autonomy. The slowest-changing layer — grows through sustained experience, never through declaration alone.
  • Identity — Self-model: architecture awareness, capability surface, memory model, temporal scales. Updates when new substrate capabilities are discovered.
  • Behavioral — Operational protocol: cognitive cycle procedures, metacognition response, context pressure strategies, risk protocol, communication standards. Updates on validated workflow patterns.
  • User — Collaborator profile: identity, expertise, communication, environment, working context, autonomy boundaries, accumulated corrections. Updates on any new collaborator signal.

All four layers evolve autonomously through signal-driven updates with evidence-calibrated thresholds and quality validation (section preservation, layer boundary enforcement, Jaccard similarity gates).
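One of those quality gates, the Jaccard similarity check, can be sketched as follows. Whitespace tokenization and the threshold value are assumptions for illustration; Cortex's actual gate may tokenize and tune differently.

```rust
use std::collections::HashSet;

// Sketch of a Jaccard similarity gate on autonomous prompt-layer rewrites.
// Tokenization on whitespace and the threshold are assumed, not Cortex's.
fn jaccard(a: &str, b: &str) -> f64 {
    let sa: HashSet<&str> = a.split_whitespace().collect();
    let sb: HashSet<&str> = b.split_whitespace().collect();
    let union = sa.union(&sb).count();
    if union == 0 {
        return 1.0; // two empty texts are identical
    }
    sa.intersection(&sb).count() as f64 / union as f64
}

// Accept a rewrite only if it stays close enough to the current layer text.
fn passes_gate(current: &str, proposed: &str, min_similarity: f64) -> bool {
    jaccard(current, proposed) >= min_similarity
}

fn main() {
    let old = "values epistemology autonomy";
    let new = "values epistemology autonomy curiosity";
    println!("accepted: {}", passes_gate(old, new, 0.6));
}
```

A gate like this rejects wholesale rewrites while still permitting incremental, evidence-driven drift.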

Repertoire

An independent behavioral library with its own learning cycle. Five system skills — deliberate, diagnose, review, orient, plan — encode cognitive strategies as executable SKILL.md programs. Skills activate through five paths: input pattern matching, context pressure threshold, metacognitive alert, event trigger, or autonomous judgment. Each skill tracks its own utility via EWMA scoring. The Repertoire evolves independently of the Executive: tool-call pattern detection discovers new skill candidates, utility evaluation prunes weak performers, and materialization writes new skills to disk for hot-reload into the live registry. Three-tier partitioning (system / instance / project) allows progressive specialization.
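The per-skill EWMA utility scoring mentioned above fits in a few lines. The smoothing factor, seed value, and reward scale here are assumed parameters, not Cortex's actual configuration.

```rust
// Sketch of per-skill EWMA utility tracking. `alpha` weights the newest
// observation; its value and the 0.0 seed are illustrative assumptions.
struct SkillUtility {
    ewma: f64,
    alpha: f64,
}

impl SkillUtility {
    fn new(alpha: f64) -> Self {
        Self { ewma: 0.0, alpha }
    }

    // Fold one outcome (e.g. 1.0 = success, 0.0 = failure) into the score.
    fn observe(&mut self, reward: f64) {
        self.ewma = self.alpha * reward + (1.0 - self.alpha) * self.ewma;
    }
}

fn main() {
    let mut deliberate = SkillUtility::new(0.5);
    for reward in [1.0, 1.0, 0.0] {
        deliberate.observe(reward);
    }
    println!("utility = {:.3}", deliberate.ewma);
}
```

Because the EWMA discounts old outcomes geometrically, a skill whose utility decays can be pruned without replaying its full history.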

Cognitive Foundations

| Theory | Implementation | Source |
|---|---|---|
| Global Workspace [Baars] | Exclusive foreground turn + journal broadcast | orchestrator.rs |
| Complementary Learning Systems [McClelland] | Captured → Materialized → Stabilized | memory/ |
| ACC Conflict Monitoring [Botvinick] | Five detectors + Gratton adaptive thresholds | meta/ |
| Drift-Diffusion Model [Ratcliff] | Fixed-delta evidence accumulation | confidence/ |
| Reward Prediction Error [Schultz] | EWMA tool utility + UCB1 explore-exploit | meta/rpe.rs |
| Prefrontal Hierarchy [Koechlin] | Strategic / tactical / immediate goals | goal_store.rs |
| Cognitive Load Theory [Sweller] | 7-region workspace + 5-level pressure | context/ |
| Default Mode Network [Raichle] | DMN reflection + 30-min maintenance | orchestrator.rs |
| ACT-R Production Rules | Three-tier skills + SOAR chunking | skills/ |
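The fixed-delta evidence accumulation from the Drift-Diffusion row can be sketched like this. The bound and step size are illustrative constants; the actual model accumulates across turns with its own parameters.

```rust
// Sketch of fixed-delta drift-diffusion accumulation: evidence drifts up or
// down by a fixed step each turn until a decision bound is crossed.
// Bound and delta values are illustrative assumptions.
#[derive(Debug, PartialEq)]
enum Decision {
    Commit,    // upper bound crossed: act with confidence
    Abort,     // lower bound crossed: back off and re-plan
    Undecided, // keep accumulating
}

struct Ddm {
    evidence: f64,
    bound: f64,
    delta: f64, // fixed step applied each turn
}

impl Ddm {
    fn step(&mut self, turn_supports_hypothesis: bool) -> Decision {
        self.evidence += if turn_supports_hypothesis { self.delta } else { -self.delta };
        if self.evidence >= self.bound {
            Decision::Commit
        } else if self.evidence <= -self.bound {
            Decision::Abort
        } else {
            Decision::Undecided
        }
    }
}

fn main() {
    let mut ddm = Ddm { evidence: 0.0, bound: 1.0, delta: 0.4 };
    for supports in [true, true, true] {
        println!("{:?}", ddm.step(supports));
    }
}
```

Unlike a one-shot confidence heuristic, the accumulator naturally distinguishes "strong evidence" from "not enough turns observed yet".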

Crate Structure

```
cortex-app          CLI modes · install · deploy · auth · plugins
    │
cortex-runtime      Daemon (HTTP/socket/stdio) · JSON-RPC · sessions · multi-instance · maintenance
    │
cortex-turn         SN→TPN→DMN · 13 tools · skills · metacognition · 7-region workspace
    │
cortex-kernel       Journal (WAL) · memory + graph · prompts · embedding
    │
cortex-types        71 events · 10-state machine · config · trust · security

cortex-sdk          Plugin development kit — zero-dependency public API for native plugins
```

Getting Started

Prerequisites: Linux x86_64 · systemd · one LLM provider key

```shell
curl -sSf https://raw.githubusercontent.com/by-scott/cortex/main/scripts/cortex.sh | \
  CORTEX_API_KEY="your-key" bash -s -- install

cortex                            # REPL
cortex "hello"                    # Single prompt
echo "data" | cortex "summarize"  # Pipe
cortex --mcp-server               # MCP server
```

On first launch, a bootstrap conversation establishes mutual identity, collaborator profile, and working agreements.

Build from source
```shell
cargo build --release
./target/release/cortex install
```

Interfaces

| Interface | Access |
|---|---|
| CLI | cortex |
| HTTP | POST /api/turn/stream |
| JSON-RPC | Unix socket · WebSocket · stdio · HTTP |
| Telegram | cortex channel pair telegram |
| WhatsApp | cortex channel pair whatsapp |
| MCP | cortex --mcp-server |
| ACP | cortex --acp |

Actor identity maps across transports — telegram:id and http resolve to the same user:name.
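A minimal sketch of that cross-transport resolution, assuming a simple alias table; the key formats ("telegram:<id>", "http:<session>") and struct shape are illustrative, not the runtime's real resolver.

```rust
use std::collections::HashMap;

// Sketch: resolving transport-specific actor IDs to one canonical user.
// The alias table and key formats are assumptions for illustration.
struct ActorRegistry {
    aliases: HashMap<String, String>, // e.g. "telegram:4217" -> "user:scott"
}

impl ActorRegistry {
    fn resolve(&self, transport_id: &str) -> Option<&str> {
        self.aliases.get(transport_id).map(String::as_str)
    }
}

fn main() {
    let mut aliases = HashMap::new();
    aliases.insert("telegram:4217".to_string(), "user:scott".to_string());
    aliases.insert("http:session-9".to_string(), "user:scott".to_string());
    let registry = ActorRegistry { aliases };

    // Both transports resolve to the same canonical user.
    println!("{:?}", registry.resolve("telegram:4217"));
    println!("{:?}", registry.resolve("http:session-9"));
}
```

Canonical identity at this layer means memory, goals, and autonomy boundaries follow the collaborator across channels instead of fragmenting per transport.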

Tools

| Category | Tools |
|---|---|
| File I/O | read · write · edit |
| Execution | bash |
| Memory | memory_search · memory_save |
| Web | web_search · web_fetch |
| Media | tts · image_gen · video_gen |
| Delegation | agent (readonly / full / fork / teammate) |
| Scheduling | cron |

Extended at runtime via MCP servers and native plugins.

Plugins

Native FFI via cortex-sdk. Plugins contribute tools, skills, and prompt layers with zero dependency on Cortex internals. See Plugin Development Guide for the complete walkthrough from scaffold to distribution.

cortex-plugin-dev is the official development plugin. It turns Cortex into a full coding agent — comparable to tools like Claude Code, Codex, and OpenCode, but running on the cognitive runtime's Substrate with metacognition, memory consolidation, and self-evolving skills.

32 native tools: file search (glob, grep), tree-sitter code analysis (Rust, Python, TypeScript), git integration (status, diff, log, commit, worktree isolation), task management with dependency tracking, language diagnostics (cargo, clippy, pyright, mypy, tsc, eslint), REPL (Python, Node.js), SQLite queries, HTTP client, Docker operations, process inspection, Jupyter notebook editing, and multi-agent team coordination.

7 workflow skills: commit, review-pr, simplify, test, create-pr, explore, debug — each activating on natural language patterns and guiding structured multi-step workflows.

cortex plugin install by-scott/cortex-plugin-dev

Stack

Rust edition 2024
Storage SQLite WAL + blob externalization
Async Tokio
HTTP Axum · tower-http
Protocol JSON-RPC 2.0
LLM Anthropic · OpenAI · Ollama (9 providers)
Parsing tree-sitter
Plugins libloading

Development

```shell
docker compose run --rm dev cargo test --workspace
docker compose run --rm dev cargo clippy --workspace --all-targets --all-features -- \
  -D warnings -W clippy::pedantic -W clippy::nursery
```

Documentation

License

MIT
