
enactod

Actor based LLM agents for the Odin programming language.

enactod is an LLM agent framework built on actod, an actor runtime. Agents, tools, rate limiters, sub agent pools, and trace sinks are all actors. They compose by name, are supervised by actod, and run locally or distributed without changing calling code.

Why actors for agentic work?

A real agent juggles long lived conversational state, parallel tool calls, sub agent fan out, provider rate limits shared across many agents, cross node inference, and a messy failure story (timeouts, 429s, crashed tools, malformed outputs). Writing that with futures, mutexes, and channels means reinventing supervision, message routing, and addressing by hand. Actors give you the primitives for free:

  • Isolation and supervision. Agent, tool, and sub agent crashes stay local. restart_policy declares intent up front, so you don't reanswer "catch here or let it propagate?" at every call site.
  • Location transparency. Sub agents, tools, rate limiters, and trace sinks can live in the same process, on a different worker thread, or on a different node, without changing calling code. Remote inference becomes a naming decision.
  • Natural concurrency. Parallel tool calls, sub agent pools, N in-flight requests across many agents. The agent actor owns its own phase and correlates replies by caller PID, no lock on "is the agent idle?".

enactod uses the actor model as the composition primitive. The full actod runtime (supervision, registry, pub/sub, timers, networking) is re-exported through enact when you need it.

Getting Started

Installation

# Add as a submodule or clone into your project
git clone --recurse-submodules https://github.com/Jonathan-Rowles/enactod

What's included

Feature               Documentation
Agent                 01 agent
Session               02 session
Tools                 03 tools
Providers & Routing   04 providers & routing
Streaming Events      05 streaming events
Rate Limiting         06 rate limiting
Text Store            07 text store
Remote Agents         08 remote agents — working example: example/chat
Sub Agents            09 sub agents
Prompt Caching        10 prompt caching
Ollama                11 ollama
Tracing               12 tracing — working example: example/trace_otlp
Message Types         13 message types

Minimal application

import "core:fmt"
import enact "enactod"

demo_config: enact.Agent_Config // assigned in main, read by spawn_demo_agent

Client :: struct { session: enact.Session }

client_behaviour := enact.Actor_Behaviour(Client) {
    init = proc(d: ^Client) {
        enact.session_send(&d.session, "Hello")
    },
    handle_message = proc(d: ^Client, from: enact.PID, msg: any) {
        if r, ok := msg.(enact.Agent_Response); ok {
            fmt.println(enact.resolve(r.content))
            enact.self_terminate()
        }
    },
}

spawn_client :: proc(_: string, _: enact.PID) -> (enact.PID, bool) {
    return enact.spawn_child("client", Client{session = enact.make_session("demo")}, client_behaviour)
}

spawn_demo_agent :: proc(_: string, _: enact.PID) -> (enact.PID, bool) {
    return enact.spawn_agent("demo", demo_config)
}

main :: proc() {
    provider := enact.make_provider("anthropic", "https://api.anthropic.com", api_key, .ANTHROPIC) // api_key supplied by you
    demo_config = enact.make_agent_config(
        system_prompt = "You are a helpful assistant.",
        provider      = provider,
        model         = .Claude_Sonnet_4_5,
    )

    enact.NODE_INIT("my-app", enact.make_node_config(
        actor_config = enact.make_actor_config(
            children = enact.make_children(spawn_demo_agent, spawn_client),
        ),
    ))
    enact.await_signal()
}

See docs/00_getting-started.md for a runnable walkthrough.

UI as a blocking child

If your program is a UI (TUI, CLI, game loop) that owns the main thread, pass the UI spawn as blocking_child instead of calling await_signal. NODE_INIT runs the UI spawn on the calling thread and returns only when that actor terminates. Main then calls SHUTDOWN_NODE explicitly.

main :: proc() {
    agents, ui, log_level := agent.setup()

    enact.NODE_INIT("coderson", enact.make_node_config(
        actor_config = enact.make_actor_config(
            children = agents,
            logging  = enact.make_log_config(level = log_level),
        ),
        blocking_child = ui, // UI spawn proc. Runs on main, pins the thread.
    ))
    enact.SHUTDOWN_NODE()
}

Shutdown sequence:

  1. User triggers exit inside the UI (quit command, EOF, window close).
  2. An inner actor detects the exit condition and calls enact.terminate_actor(ui_pid, .SHUTDOWN). A common pattern is a stdin reader spawned as a dedicated OS thread child of the UI. When read() returns the quit token or EOF, it terminates its parent.
  3. The UI actor's loop returns, NODE_INIT returns on main.
  4. Main calls SHUTDOWN_NODE() to drain the worker pool, destroy actor arenas, and free curl.

Do not call await_signal() in this pattern. NODE_INIT already owns main's thread, and SHUTDOWN_NODE is your teardown call.

Working example: example/chat/cli — a remote CLI client using blocking_child with a dedicated-OS-thread stdin reader.
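
The stdin reader from step 2 can be sketched roughly like this. This is not the example's actual code: Stdin_Reader, Line_Input, the read_line helper, and the :q quit token are all illustrative; only terminate_actor and the parent-termination flow come from the sequence above.

```odin
// Illustrative stdin reader, spawned as a dedicated-OS-thread child of the UI
// actor so the blocking read never stalls the worker pool.
Line_Input   :: struct { text: string }
Stdin_Reader :: struct { ui_pid: enact.PID }

stdin_behaviour := enact.Actor_Behaviour(Stdin_Reader) {
    init = proc(d: ^Stdin_Reader) {
        for {
            line, ok := read_line() // hypothetical blocking stdin helper
            if !ok || line == ":q" {
                // Step 2: terminate the parent UI. Its loop returns,
                // NODE_INIT returns on main, main calls SHUTDOWN_NODE.
                enact.terminate_actor(d.ui_pid, .SHUTDOWN)
                return
            }
            enact.send(d.ui_pid, Line_Input{text = line})
        }
    },
}
```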

The facade

The public API lives in one file, enactod.odin. Import once.

import enact "enactod"

You get the full surface: agents, sessions, tools, providers, messages, tracing, plus the actod runtime re-exported (spawn, send, set_timer, subscribe_type, etc.). src/ is the implementation.

Tool dispatch

tools := []enact.Tool {
    // Runs on the agent actor. Zero spawn, must be pure.
    enact.function_tool({name = "get_time", ...}, get_time_impl),

    // One actor per call, self terminates. Good for blocking I/O.
    enact.ephemeral_tool({name = "fetch_url", ...}, fetch_impl),

    // Lazy spawned, long lived. `persistent_tool_actor` takes a custom actor.
    enact.persistent_tool_actor({name = "notepad", ...}, notepad_spawn),

    // Delegates to another agent. `pool_size > 1` fans out across N.
    enact.sub_agent_tool({name = "research", ...}, &research_config, pool_size = 4),
}

Streaming events

case enact.Agent_Event:
    switch msg.kind {
    case .TEXT_DELTA:       fmt.printf("%s", enact.resolve(msg.detail))
    case .THINKING_DELTA:   fmt.printf("%s", enact.resolve(msg.detail))
    case .TOOL_CALL_START:  fmt.printfln("calling %s(%s)",
                                enact.resolve(msg.subject), enact.resolve(msg.detail))
    case .TOOL_CALL_DONE, .LLM_CALL_START, .LLM_CALL_DONE, .THINKING_DONE:
    }

subject carries the "who" (tool name, worker name); detail carries the "what" (delta, content, arguments, result).

Runtime routing

// Swap models mid conversation.
enact.agent_set_route("demo", haiku_provider, .Claude_Haiku_4_5)

// Or push Set_Route from a router actor, local or remote.
enact.send_to("agent:demo", "node-a", enact.Set_Route{
    provider  = haiku_provider,
    model_str = "claude-haiku-4-5-20251001",
})

// Revert to the config time route.
enact.agent_clear_route("demo")

Cross node session

// Server
enact.NODE_INIT("agent-server", enact.make_node_config(
    network = enact.make_network_config(port = 9100),
    actor_config = enact.make_actor_config(children = enact.make_children(spawn_gateway)),
))

// Client
enact.NODE_INIT("my-app", enact.make_node_config(network = enact.make_network_config()))
enact.register_node("agent-server",
    net.Endpoint{address = net.IP4_Address{127, 0, 0, 1}, port = 9100},
    .TCP_Custom_Protocol,
)
session := enact.make_session("demo", "agent-server")
enact.session_send(&session, "Hello from remote")

enact.send(pid, msg) and enact.send_to(name, node, msg) pick the local or remote path automatically based on the target's PID.

Multi user server (gateway pattern)

One agent per connected user, spawned on demand by a gateway actor. Client asks to open a session, gets back an agent name, talks to that agent for the rest of the connection, and asks the gateway to tear it down on close.

// Server: gateway spawns a fresh agent per Session_Create and cleans up per Session_Destroy.
gateway_handle_message :: proc(d: ^Gateway_State, from: enact.PID, content: any) {
    switch msg in content {
    case enact.Session_Create:
        d.next_id += 1
        name := strings.clone(fmt.tprintf("session-%d", d.next_id))
        enact.spawn_agent(name, d.config)
        enact.send(from, enact.Session_Created{agent_name = name})
        // ... track (name, from) for cleanup
    case enact.Session_Destroy:
        enact.destroy_agent(msg.agent_name)
    }
}

Each session gets its own chat history, worker pool, rate limiter, and tool actors. One user's conversation doesn't touch another user's state.
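
The client side of the handshake is symmetric. A sketch, reusing the message and field names from the server snippet above; the "gateway" actor name and "agent-server" node name are illustrative:

```odin
// Ask the gateway on the server node for a fresh agent.
enact.send_to("gateway", "agent-server", enact.Session_Create{})

// Later, in the client actor's handle_message:
if created, ok := content.(enact.Session_Created); ok {
    // Talk to the per user agent by name for the rest of the connection.
    d.session = enact.make_session(created.agent_name, "agent-server")
    enact.session_send(&d.session, "Hello")
}
```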

Working example: example/chatserver runs the gateway, cli is a remote client. Run two CLI instances against one server to see two live sessions.

WebSocket edge actor

The gateway pattern is transport agnostic. For a browser client, the edge is a WebSocket actor instead of a TCP session. ws_odin's callbacks fire on an I/O thread; the actor model handles this cleanly: callbacks marshal every event into an enact.send_by_name to the per connection actor, which runs its handler on its own worker thread.

// I/O thread → actor boundary messages. Register once.
Incoming_Frame :: struct { text: string }
WS_Closed      :: struct { code: u16, reason: string }

ws_callbacks := ws.Server_Callbacks{
    handle_message = proc(c: ^ws.Server_Connection, op: ws.Opcode, data: []byte) {
        user := (^WS_User)(c.user_data)
        enact.send_by_name(user.actor_name,
            Incoming_Frame{text = strings.clone(string(data))})
    },
    on_disconnect = proc(c: ^ws.Server_Connection, code: ws.Close_Code, reason: string) {
        user := (^WS_User)(c.user_data)
        enact.send_by_name(user.actor_name,
            WS_Closed{code = u16(code), reason = strings.clone(reason)})
    },
}

Inside the per connection actor: Session_Created captures the agent name, Incoming_Frame becomes session_send, Agent_Event.TEXT_DELTA and Agent_Response become ws.server_send_text, WS_Closed triggers Session_Destroy + self_terminate. The agents and tools don't know a WebSocket exists. Full walkthrough in docs/08_remote-agents.md.
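
Put together, the per connection actor's handler might look like this sketch. The WS_Session layout, its gateway field, and the session.agent_name access are assumptions; the message mapping is exactly the flow described above.

```odin
WS_Session :: struct {
    conn:    ^ws.Server_Connection, // owned by ws_odin, written from the actor thread
    gateway: enact.PID,
    session: enact.Session,
}

ws_session_handle :: proc(d: ^WS_Session, from: enact.PID, content: any) {
    switch msg in content {
    case enact.Session_Created:
        d.session = enact.make_session(msg.agent_name)
    case Incoming_Frame:
        enact.session_send(&d.session, msg.text)
    case enact.Agent_Event:
        if msg.kind == .TEXT_DELTA {
            ws.server_send_text(d.conn, enact.resolve(msg.detail))
        }
    case enact.Agent_Response:
        ws.server_send_text(d.conn, enact.resolve(msg.content))
    case WS_Closed:
        enact.send(d.gateway, enact.Session_Destroy{agent_name = d.session.agent_name})
        enact.self_terminate()
    }
}
```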


Memory model

Three allocators, one rule for when to use each:

  • Per agent arena. Transport bytes that flow between actors in one agent's subtree: LLM payloads, tool arguments, tool results, stream chunks, event text. Created at spawn_agent, auto reset after every Agent_Response. Sub agents inherit the parent's arena.
  • Actor arena. Per actor working memory that dies with the actor: parser buffers, stream accumulators, chat history (owned strings, survive arena reset), worker/agent names, proxy routes, ingress state. Managed automatically by actod.
  • Heap. User owned configuration (Agent_Config, Provider_Config, Tool_Def, static prompts). Lives for the lifetime of your program.

Rule of thumb: if the string flows through a message, put it in a Text. If it's spawn time infrastructure, clone into the actor arena. If it's compile time config, leave it where the user puts it.
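
A rough illustration of the rule. make_text is an assumed name for the Text constructor and worker_name an assumed field; see docs/07_text-store.md for the real API.

```odin
// 1. Flows through a message → per agent arena, as a Text.
reply := enact.make_text("tool result bytes") // assumed constructor name

// 2. Spawn time infrastructure → clone into the actor arena.
d.worker_name = strings.clone(fmt.tprintf("worker-%d", i))

// 3. Compile time config → user owned, lives for the program's lifetime.
SYSTEM_PROMPT :: "You are a helpful assistant."
```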

See docs/07_text-store.md for the full walkthrough.


Configuration

enact.make_agent_config(
    system_prompt           = "...",
    provider                = provider,               // default provider
    model                   = .Claude_Sonnet_4_5,     // default model
    tools                   = tools,
    children                = nil,                    // []SPAWN of co spawned siblings
    worker_count            = 2,                      // parallel LLM HTTP workers
    max_turns               = 10,                     // tool calling budget per request
    max_tool_calls_per_turn = 20,                     // safety cap per turn
    max_tokens              = 4096,
    temperature             = 0.7,
    thinking_budget         = nil,                    // Maybe(int); Anthropic min 1024, Gemini 0=off/-1=dynamic/>0=fixed
    timeout                 = 60 * time.Second,
    tool_timeout            = 30 * time.Second,
    stream                  = false,
    forward_events          = false,
    forward_thinking        = true,
    enable_rate_limiting    = true,
    cache_mode              = .NONE,                  // .EPHEMERAL for prompt caching (Anthropic)
    tool_continuation       = "",                     // optional nudge after tool results
    validate_tool_args      = true,                   // compile input_schema once, validate every call
    accumulate_history      = true,                   // false = stateless per request; true = ongoing conversation
    trace_sink              = {},                     // see dev_trace_sink / function_trace_sink / custom_trace_sink
)

Minimum required for a working agent: provider + model. Dynamic routing is a runtime concern: send Set_Route from a router actor.
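
That minimum looks like this; every other field falls back to the defaults listed above:

```odin
config := enact.make_agent_config(
    provider = provider,          // from enact.make_provider
    model    = .Claude_Sonnet_4_5,
)
```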

TODO

  • Eval harness. Run prompt / agent suites with assertions and metrics against the stub provider or a live one.
  • MCP tool. Wrap MCP server methods as enactod Tools so agents can call external MCP services.
  • Per turn token usage events (currently only final Agent_Response carries totals).
  • Vertex / Bedrock provider wrappers.
  • A shared ratelim:<provider> actor that agents attach to by name. Sub agent pools and multi agent setups currently race independent limiters toward the same provider's 429s. Per agent ratelim:<name> stays as the default.
  • Peer budget actor (max_tokens_total, max_wallclock, cost). max_turns alone misses cheap but endless tool loops and router driven mid conversation model swaps. Budget owns the ledger; agent queries and aborts on breach. Visibility, not supervisor enforcement.
  • Add parent_request_id: Request_ID to Agent_Event (8 bytes). Makes event correlation tractable across sub-sub agent trees. Full span IDs stay on Trace_Event; don't blur the UI/delta stream into the observability stream.