Background
This is a follow-up to #516.
In Trinity-Alpha, we tried to adopt the new Optimus app-facing runtime for script generation. The architectural boundary we wanted was still the same as described in #516:
- Skill = prompt / playbook only
- Runtime = execution semantics
- Service layer = app/business code
That split is still the right one.
What we found in practice
Today, the runtime is available, but from a host application perspective it is still exposed mainly through the MCP tool layer.
In our Python application, to call the runtime we currently have to:
- spawn a local Node process (`node .optimus/dist/mcp-server.js`)
- speak the MCP stdio transport ourselves
- call `tools/call` with runtime tools like:
  - `run_agent`
  - `start_agent_run`
  - `get_agent_run_status`
So although the runtime exists conceptually, the host app is still forced to depend on:
- MCP transport framing
- JSON-RPC tool invocation details
- local Node process management
- transport-specific response parsing
That is not the abstraction boundary we originally wanted.
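For concreteness, the bridge every host app has to write today looks roughly like this. A minimal sketch in Python, assuming the MCP stdio transport's newline-delimited JSON-RPC framing; the server path matches our setup and the tool arguments are illustrative:

```python
import json
import subprocess

# Spawn the local Node MCP server (path from our setup).
proc = subprocess.Popen(
    ["node", ".optimus/dist/mcp-server.js"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)

def rpc(method, params, req_id=None):
    """Send one newline-delimited JSON-RPC message; read a reply if it has an id."""
    msg = {"jsonrpc": "2.0", "method": method, "params": params}
    if req_id is not None:
        msg["id"] = req_id
    proc.stdin.write(json.dumps(msg) + "\n")
    proc.stdin.flush()
    return json.loads(proc.stdout.readline()) if req_id is not None else None

# MCP handshake: initialize request, then the initialized notification.
rpc("initialize", {
    "protocolVersion": "2024-11-05",
    "capabilities": {},
    "clientInfo": {"name": "trinity-alpha", "version": "0.1.0"},
}, req_id=1)
rpc("notifications/initialized", {})

# Only now can we make the call we actually care about.
reply = rpc("tools/call", {
    "name": "run_agent",
    "arguments": {"skill": "script-generation"},  # illustrative arguments
}, req_id=2)
```

None of this is business logic; it is pure transport plumbing, duplicated in every downstream project.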
Why this is still a platform gap
From an application-project point of view, MCP should be an implementation detail, not the primary contract.
A host app should ideally depend on something more native and stable, such as:
- a Python SDK
- a TypeScript/Node SDK
- a local HTTP runtime endpoint
- or even a very small dedicated CLI contract for app-facing runtime requests
rather than requiring every downstream project to implement its own MCP client/bridge.
Concrete integration pain we hit
1. Transport leakage into application code
Our app had to build a custom runtime adapter just to:
- launch the Optimus server process
- initialize the protocol
- call MCP tools
- normalize the envelope back into business code
This means each downstream project will re-implement the same bridge.
2. Runtime result channel is not clean enough for domain-service use
For domain-service-style tasks such as script generation, we need a clean result channel.
In practice, the returned payload can still be polluted by execution traces and worker-style timeline text unless the runtime path is constrained very carefully.
For application use cases, we want:
- a structured result channel
- a separate logs / trace channel
- predictable output semantics for service-layer consumption
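As a sketch of what we mean, a hypothetical envelope; every field name here is illustrative, not an existing contract:

```python
# Hypothetical envelope: domain result, traces, and metadata on separate keys,
# so service code never has to strip timeline text out of its payload.
response = {
    "result": {"script": "..."},          # structured domain payload only
    "logs": [
        {"ts": "...", "level": "info", "message": "..."},
    ],
    "metadata": {"run_id": "...", "status": "completed", "duration_ms": 1234},
}
```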
3. Sync semantics need to be cleaner
For app-side `run_agent(...)`, the contract should feel like a true synchronous runtime call.
In our integration attempts, it was still possible to hit confusing states such as:
- `status = running` on a nominally synchronous call
- timeout/cancel envelopes
- follow-up polling requirements that leaked into service logic
For application code, this should be more opinionated and easier to consume.
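As a minimal sketch of the contract we would like, with every name hypothetical (this is the proposal, not current behavior):

```python
class AgentRunTimeout(Exception): ...   # hypothetical exception type
class AgentRunFailed(Exception): ...    # hypothetical exception type

def generate_script(runtime, payload: dict) -> dict:
    """Service-layer call: blocks until a terminal state, returns only the result."""
    try:
        run = runtime.run_agent(
            skill="script-generation",
            input=payload,
            timeout=120,                # seconds, enforced by the runtime
        )
    except AgentRunTimeout:
        return {"error": "timeout"}     # explicit; no polling loop leaks in
    except AgentRunFailed as err:
        return {"error": str(err)}      # normalized failure, not a transport error
    return run["result"]                # service code never sees status = "running"
```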
4. Application mode needs stronger isolation from orchestration mode
When an app calls the runtime for a bounded task like:
- script generation
- extraction
- classification
- scoring
it should not accidentally behave like a general worker/orchestrator flow.
We need a stricter application-facing runtime mode where the runtime can be told:
- do not run onboarding workflow
- do not create todos/artifacts unless explicitly requested
- do not emit orchestration chatter into the result channel
- return only the requested domain result + normalized metadata
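For example, a policy knob along these lines; all field names are made up for illustration:

```python
# Illustrative application-mode policy; every field is a proposal,
# not an existing runtime option.
runtime_policy = {
    "mode": "application",           # vs. "orchestration"
    "run_onboarding": False,
    "create_todos": False,
    "create_artifacts": False,
    "result_channel": "structured",  # keep orchestration chatter out of results
}
```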
What we are asking for
Please expose the Agent Runtime as a native application-facing API, with MCP/CLI/ACP hidden underneath.
Desired direction
Something like one of these:
Option A: SDK
TypeScript:

```ts
const runtime = new AgentRuntime(...)
await runtime.runAgent({
  role: "script-writer",
  skill: "script-generation",
  input: {...},
  instructions: "...",
  runtime_policy: {...}
})
```

Python:

```python
runtime = AgentRuntime(...)
result = runtime.run_agent(...)
```
Option B: local HTTP/app runtime endpoint
```
POST /agent-runtime/run
POST /agent-runtime/start
GET  /agent-runtime/runs/:id
POST /agent-runtime/runs/:id/resume
POST /agent-runtime/runs/:id/cancel
```
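With that surface, the Python side collapses to an ordinary HTTP call. A sketch assuming the proposed endpoint above; the port and request schema are illustrative:

```python
import requests

# Hypothetical local endpoint from Option B; port and fields are illustrative.
resp = requests.post(
    "http://127.0.0.1:8976/agent-runtime/run",
    json={
        "role": "script-writer",
        "skill": "script-generation",
        "input": {"topic": "..."},
        "runtime_policy": {"mode": "application"},
    },
    timeout=120,
)
resp.raise_for_status()
result = resp.json()["result"]   # clean domain payload; logs live elsewhere
```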
Option C: dedicated app runtime CLI contract
A tiny JSON-in / JSON-out CLI specifically for application embedding, so app code does not need to implement MCP transport itself.
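From Python, that contract would be a single subprocess call. A sketch with a hypothetical binary name and subcommand:

```python
import json
import subprocess

# Hypothetical CLI from Option C: JSON on stdin, JSON on stdout, nothing else.
request = {"skill": "script-generation", "input": {"topic": "..."}}
completed = subprocess.run(
    ["optimus-app-runtime", "run"],   # hypothetical binary and subcommand
    input=json.dumps(request),
    capture_output=True,
    text=True,
    check=True,
)
result = json.loads(completed.stdout)
```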
Important: this is not a request to move business logic into Skills
We are not asking for Skills to own:
- data loading
- schema validation implementation
- persistence
- DB writes
- application workflow integration
Those still belong in the host application.
We are asking for a better runtime boundary, so app code can call Optimus without transport leakage.
Success criteria
- Host applications do not need to speak MCP transport directly
- MCP / ACP / CLI remain internal transport mechanisms
- Runtime exposes a native app contract (SDK / HTTP / dedicated CLI)
- Result payload is cleanly separated from logs / traces
- `run_agent` sync semantics are predictable for service-layer use
- There is an explicit application-facing mode distinct from orchestration-style worker flows
- Downstream projects can depend on the runtime without inventing their own bridge
Relation to #516
I see this as a narrower, implementation-oriented follow-up to #516.
If helpful, I can also share the Trinity-Alpha adapter shape we had to build on the application side as a concrete example of the gap.
🤖 Created by master-orchestrator via Optimus Spartan Swarm