
neuron

A composable, provider-agnostic agent execution engine written in Rust.

Neuron runs agent tasks as structured loops: a planner proposes the next step, an evaluator approves or rejects it, and the engine executes tools or completes. Both the planner and the evaluator are pluggable — you can mix LLM-driven planning with symbolic rules in the same workflow.
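In sketch form, the loop looks like this. Every name below is invented for illustration; Neuron's real contract types and traits live in neuron-core:

```rust
// Illustrative sketch of the plan -> evaluate -> execute loop.
// All names here are hypothetical, not Neuron's actual API.

#[derive(Debug, Clone, PartialEq)]
pub enum Step {
    ToolCall(String),  // planner proposes running a named tool
    Complete(String),  // planner declares the task finished
}

pub trait Planner {
    fn propose(&mut self, history: &[String]) -> Step;
}

pub trait Evaluator {
    fn approve(&self, step: &Step) -> bool;
}

/// Drive the loop: propose, evaluate, then execute or finish.
pub fn run(planner: &mut dyn Planner, evaluator: &dyn Evaluator) -> Option<String> {
    let mut history = Vec::new();
    for _ in 0..16 {
        let step = planner.propose(&history);
        if !evaluator.approve(&step) {
            history.push(format!("rejected: {:?}", step));
            continue; // planner may propose something else next round
        }
        match step {
            Step::ToolCall(tool) => history.push(format!("ran: {}", tool)),
            Step::Complete(answer) => return Some(answer),
        }
    }
    None // iteration cap reached
}

/// A planner that replays a fixed script -- stands in for an LLM or rule set.
pub struct Scripted(pub Vec<Step>);

impl Planner for Scripted {
    fn propose(&mut self, _history: &[String]) -> Step {
        if self.0.is_empty() {
            Step::Complete("out of script".into())
        } else {
            self.0.remove(0)
        }
    }
}

pub struct ApproveAll;
impl Evaluator for ApproveAll {
    fn approve(&self, _step: &Step) -> bool { true }
}

fn main() {
    let mut planner = Scripted(vec![
        Step::ToolCall("list_directory".into()),
        Step::Complete("task finished".into()),
    ]);
    let result = run(&mut planner, &ApproveAll);
    assert_eq!(result.as_deref(), Some("task finished"));
}
```

Because both sides of the loop are trait objects, an LLM-backed planner and a rule-driven one are interchangeable at this level, which is what makes mixed workflows possible.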

Crate organisation

| Crate | Role |
| --- | --- |
| `neuron-core` | Contract types, engine loop, evaluator, and extension traits. No LLM or network dependencies. |
| `neuron-llm` | `ModelBackend` trait plus concrete adapters (Anthropic, OpenAI-compatible, Gemini). |
| `neuron-planner` | `NeuroPlanner` (LLM-backed), `SymbolicPlanner` (rule-driven), and all built-in planner rules. |
| `neuron-tools` | Built-in tool implementations and the tool factory. |
| `neuron-session` | Conversation session persistence (SQLite and file backends). |
| `neuron-synapse` | Multi-role event-driven workflow runtime built on top of neuron runs. |
| `neuron-app` | Config loading, backend wiring, and the shared execute entry point used by the CLI and web server. |
| `neuron-web` | HTTP server exposing neuron runs over a REST API. |
| `neuron-cli` | Binary entry point: wires all crates together and exposes the `run` and `synapse` CLI commands. |

Dependency rule: lower layers must never depend on higher ones. neuron-core has zero internal dependencies.

Crate dependencies

```mermaid
graph TD
    CLI["neuron-cli\n(binary)"]
    WEB["neuron-web\n(http server)"]
    APP["neuron-app\n(wiring)"]
    SYNAPSE["neuron-synapse\n(workflow runtime)"]
    SESSION["neuron-session\n(persistence)"]
    LLM["neuron-llm\n(backends)"]
    PLANNER["neuron-planner\n(planners + rules)"]
    TOOLS["neuron-tools\n(core tools)"]
    CORE["neuron-core\n(contract · engine · evaluator)"]
    CLI --> APP
    CLI --> SYNAPSE
    WEB --> APP
    APP --> LLM
    APP --> PLANNER
    APP --> TOOLS
    APP --> SESSION
    APP --> CORE
    SYNAPSE --> CORE
    PLANNER --> LLM
    PLANNER --> CORE
    LLM --> CORE
    TOOLS --> CORE
    SESSION --> CORE
```

Schema version

Current: `v2alpha1` (defined in `neuron-core::engine::SUPPORTED_SCHEMA_VERSION`).

Model backends

| Backend | Struct | Covers |
| --- | --- | --- |
| Anthropic | `AnthropicBackend` | Claude models |
| OpenAI-compatible | `OpenAiBackend` | OpenAI, Ollama, vLLM, LM Studio, and any OpenAI-compatible server via `base_url` |
| Google Gemini | `GeminiBackend` | Gemini models |

All backends implement `ModelBackend` and are accepted by `NeuroPlanner<B>`.

Quick start

Build and run a one-shot request:

```sh
ANTHROPIC_API_KEY=sk-... ./scripts/run.sh "List all Rust source files in the current directory"
```

Using an OpenAI-compatible backend:

```sh
LLM_PROVIDER=openai OPENAI_API_KEY=sk-... \
  ./scripts/run.sh "List all Rust source files in the current directory"
```

Using a local server (Ollama / vLLM / LM Studio):

```sh
LLM_PROVIDER=openai OPENAI_API_KEY=dummy OPENAI_BASE_URL=http://localhost:11434/v1 \
  ./scripts/run.sh "Summarize this repository"
```

The script invokes `neuron-cli run` and pretty-prints the resulting NDJSON event stream.

Planners

Neuron ships two planners that implement the same Planner trait:

- `NeuroPlanner`: sends a `ChatRequest` to any `ModelBackend` and maps the response to a tool call or completion. Use this when the task requires language understanding or open-ended reasoning.
- `SymbolicPlanner`: evaluates a list of `SymbolicRule`s in order and returns the first step produced. No model inference. Use this for deterministic routing, threshold checks, or fixed tool calls.

Both are composable: a Synapse workflow can assign a different planner to each role.
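The first-match evaluation that `SymbolicPlanner` performs can be sketched in a few lines. The `SymbolicRule` shape here is invented for illustration; the real rule set is listed in the next section:

```rust
// Sketch of first-match rule evaluation: rules are tried in declaration
// order, and the first one that produces a step wins. The rule shape is
// hypothetical, not Neuron's actual SymbolicRule type.
struct SymbolicRule {
    name: &'static str,
    // Returns Some(step) when the rule fires on this payload.
    apply: fn(&str) -> Option<String>,
}

fn plan(rules: &[SymbolicRule], payload: &str) -> Option<(&'static str, String)> {
    rules.iter().find_map(|r| (r.apply)(payload).map(|s| (r.name, s)))
}

fn main() {
    let rules = [
        SymbolicRule {
            name: "numeric_threshold",
            apply: |p| {
                p.parse::<f64>().ok().filter(|v| *v > 90.0).map(|_| "route: alert".to_string())
            },
        },
        SymbolicRule {
            name: "noop",
            // Always fires: acts as a passthrough fallback.
            apply: |p| Some(format!("pass through: {}", p)),
        },
    ];
    assert_eq!(plan(&rules, "95"), Some(("numeric_threshold", "route: alert".to_string())));
    assert_eq!(plan(&rules, "12"), Some(("noop", "pass through: 12".to_string())));
}
```

Ordering rules from most to least specific, with a `noop`-style fallback last, gives fully deterministic routing with no model inference in the loop.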

Built-in symbolic planner rules

| Rule | What it does |
| --- | --- |
| `tool_route` | Calls a named tool, renders a template from the result, and routes to a configured next role. |
| `input_route` | Evaluates conditions against the inbound payload and routes to one of two targets. |
| `numeric_threshold` | Routes based on a numeric value in the payload crossing a threshold. |
| `mcp` | Calls a tool on a connected MCP server and routes on completion. |
| `noop` | Passes the input through unchanged (useful as a passthrough or placeholder). |

External symbolic rules are also supported: point `symbolic_planner_rule_command` at any executable and Neuron will call it as a subprocess.
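A minimal sketch of calling an external rule as a subprocess. The wire protocol shown here (payload on stdin, step on stdout) is an assumption for illustration; check the config sample for the actual `symbolic_planner_rule_command` contract:

```rust
use std::io::Write;
use std::process::{Command, Stdio};

// Spawn the rule executable, feed it the payload, and capture its step.
// The stdin/stdout JSON protocol is assumed, not documented Neuron behavior.
fn call_external_rule(command: &str, payload: &str) -> std::io::Result<String> {
    let mut child = Command::new(command)
        .stdin(Stdio::piped())
        .stdout(Stdio::piped())
        .spawn()?;
    // Write the payload, then drop stdin so the child sees EOF.
    child.stdin.take().expect("piped stdin").write_all(payload.as_bytes())?;
    let out = child.wait_with_output()?;
    Ok(String::from_utf8_lossy(&out.stdout).into_owned())
}

fn main() -> std::io::Result<()> {
    // `cat` stands in for a real rule executable: it echoes the payload back.
    let step = call_external_rule("cat", r#"{"cpu": 97}"#)?;
    assert_eq!(step, r#"{"cpu": 97}"#);
    Ok(())
}
```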

Evaluator rules

Every proposed plan step passes through the evaluator before execution. Rules can approve or reject steps and optionally constrain what tools the next step may use.

Built-in evaluator rules

| Rule | What it does |
| --- | --- |
| `allowed_tools` | Rejects tool calls not in the agent profile's declared tool list. |
| `tool_schema_validation` | Validates required fields and types against the tool's JSON schema before calling it. |
| `redundant_successful_tool_call` | Rejects a tool call that is identical to one that already succeeded in this session. |
| `planner_token_budget` | Stops the run when the conversation history approaches the context window limit. |
| `tool_scope` | Restricts file-system tools to a configured set of root paths. |
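An `allowed_tools`-style rule can be sketched as a simple set-membership check. The `Verdict` type and function signature below are invented; Neuron's real evaluator trait lives in neuron-core:

```rust
use std::collections::HashSet;

// Hypothetical verdict type: an evaluator rule either approves a step
// or rejects it with a reason the planner can see on the next round.
#[derive(Debug, PartialEq)]
enum Verdict {
    Approve,
    Reject(String),
}

// Sketch of allowed_tools: reject any tool not declared by the profile.
fn allowed_tools(profile_tools: &HashSet<&str>, proposed_tool: &str) -> Verdict {
    if profile_tools.contains(proposed_tool) {
        Verdict::Approve
    } else {
        Verdict::Reject(format!("tool '{}' is not in the agent profile", proposed_tool))
    }
}

fn main() {
    let tools: HashSet<&str> = ["read_file", "bash", "web_search"].into_iter().collect();
    assert_eq!(allowed_tools(&tools, "bash"), Verdict::Approve);
    assert!(matches!(allowed_tools(&tools, "write_file"), Verdict::Reject(_)));
}
```

Rejecting with a reason rather than a bare boolean matters: the rejection goes back into the conversation history, so an LLM planner can adjust its next proposal.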

Synapse: multi-role workflows

Synapse runs a pool of workers that process events and dispatch them to named roles. A workflow is declared as a TOML graph:

```toml
[synapse.workflows.my_workflow]
entry_role = "collector"
routing_mode = "event_driven"
graph = """
collector -> analyzer
analyzer -> reporter [when=alert]
analyzer -> complete [when=ok]
reporter -> complete
"""
```

Each role can use a different planner mode and agent profile. Synapse handles fan-out, fan-in aggregation, retries, and dead-letter routing.
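The edge notation in the graph string (`from -> to`, with an optional `[when=EVENT]` condition) is compact enough to parse in a few lines. This parser is illustrative only; Neuron's own graph parser may differ:

```rust
// Parse one edge of the workflow graph notation:
//   "analyzer -> reporter [when=alert]"  ->  Edge { from, to, when: Some(..) }
//   "reporter -> complete"               ->  Edge { from, to, when: None }
#[derive(Debug, PartialEq)]
struct Edge {
    from: String,
    to: String,
    when: Option<String>, // event name guarding this edge, if any
}

fn parse_edge(line: &str) -> Option<Edge> {
    let (lhs, rest) = line.split_once("->")?;
    let rest = rest.trim();
    let (to, when) = match rest.split_once('[') {
        Some((to, cond)) => {
            let cond = cond.trim_end_matches(']').trim();
            (to.trim(), cond.strip_prefix("when=").map(str::to_string))
        }
        None => (rest, None),
    };
    Some(Edge { from: lhs.trim().to_string(), to: to.to_string(), when })
}

fn main() {
    let edge = parse_edge("analyzer -> reporter [when=alert]").unwrap();
    assert_eq!(edge.from, "analyzer");
    assert_eq!(edge.to, "reporter");
    assert_eq!(edge.when.as_deref(), Some("alert"));
    assert_eq!(parse_edge("reporter -> complete").unwrap().when, None);
}
```

Unconditional edges always fire; conditional edges fire only when the role emits the named event, which is how `analyzer` fans out to either `reporter` or `complete` above.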

Run a workflow:

```sh
neuron synapse run --workflow my_workflow "Check system health"
```

Run on a recurring schedule:

```sh
neuron synapse serve --workflow my_workflow --interval-seconds 30
```

Tools

Built-in tools registered by the tool factory:

- File and shell: `read_file`, `write_file`, `edit_file`, `read_lines`, `list_directory`, `find_files`, `glob`, `grep`, `search_text`, `bash`
- Web: `web_fetch`, `web_search`, `upload_file`
- System: `read_memory`, `read_cpu`, `read_disk`, `read_load_average`, `read_uptime`
- Utility: `calculator`, `mcp_call`

To add a tool: implement the `Tool` trait in `neuron-tools`, register it in `factory.rs`, and add its name to the relevant `agent_profiles.*.tools` list in config.
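The shape of that recipe can be sketched with an invented `Tool` trait and a toy tool; the real trait and registration API live in neuron-tools:

```rust
// Hypothetical sketch of the Tool trait and name-based lookup.
// The actual trait in neuron-tools may differ (e.g. JSON-schema arguments).
trait Tool {
    fn name(&self) -> &'static str;
    fn call(&self, input: &str) -> Result<String, String>;
}

/// A toy tool: counts whitespace-separated words in its input.
struct WordCount;

impl Tool for WordCount {
    fn name(&self) -> &'static str { "word_count" }
    fn call(&self, input: &str) -> Result<String, String> {
        Ok(input.split_whitespace().count().to_string())
    }
}

/// Stand-in for the tool factory: tools are looked up by name at plan time,
/// which is why the tool's name must also appear in the agent profile.
fn lookup<'a>(tools: &'a [Box<dyn Tool>], name: &str) -> Option<&'a dyn Tool> {
    tools.iter().find(|t| t.name() == name).map(|t| t.as_ref())
}

fn main() {
    let registry: Vec<Box<dyn Tool>> = vec![Box::new(WordCount)];
    let tool = lookup(&registry, "word_count").expect("registered");
    assert_eq!(tool.call("neuron runs agent tasks"), Ok("4".to_string()));
    assert!(lookup(&registry, "missing").is_none());
}
```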

Configuration

Neuron reads defaults from `~/.neuron/config.toml` (override with `NEURON_CONFIG_PATH`). See `neuron.config.toml` for a complete sample.

Key sections:

```toml
[llm.openai]
model = "qwen2.5:7b"
base_url = "http://localhost:11434"   # Ollama or any OpenAI-compatible server

[llm.retry]
max_retries = 2
base_delay_ms = 250

[agent_profiles.my_profile]
tools = ["read_file", "bash", "web_search"]
evaluator_rules = ["allowed_tools", "tool_schema_validation", "planner_token_budget"]
planner_token_budget_max = 32000

[synapse]
default_workflow = "my_workflow"

[imports]
agent_profiles = ["config/agent_profiles/profiles.toml"]
workflows = ["config/workflows/my_workflow.toml"]
```

Examples

| Example | What it shows |
| --- | --- |
| `examples/lantern` | System health monitor: symbolic signal collection, threshold-based alerting, and LLM-composed spoken alerts via a desktop avatar. |
| `examples/fireside` | Chat application: LLM-driven conversation with session persistence and a web frontend. |
| `examples/cinder` | Terminal RPG: symbolic world rules, a Neuron-backed natural-language parser, and external rule subprocess integration. |

Each example ships with a `run.sh` that downloads a pre-built neuron binary automatically.

Releasing

```sh
./scripts/release.sh v0.1.17
```

This syncs the current source to the public repo, tags the release, and pushes it. CI builds binaries for Linux (amd64/arm64) and macOS (arm64).
