Versions in production: agents@0.11.7 · partyserver@0.5.5 · @cloudflare/workers-types@4.20260426.1 · wrangler@^4.83.0 · compatibility_date = "2026-04-01" · compatibility_flags = ["nodejs_compat"]. (Deploy / account / request identifiers redacted; happy to share via Cloudflare support if a reviewer needs to pull internal traces.)
new_sqlite_classes correctly includes both the parent class and every sub-agent class (verified — matches the examples/multi-ai-chat shape documented in the closing comments of #1385).
Summary
PR #1393 ("Migrate facet bootstrap to explicit FacetStartupOptions.id", shipped in agents@0.11.6) replaces the old __ps_name storage-write workaround with the documented facet API shape ctx.facets.get(key, () => ({ class: Cls, id: parentNs.idFromName(name) })), relying on each facet getting its own ctx.id.name. In production (compatibility_date = "2026-04-01", no experimental flag) the very first synchronous read of this.name inside the facet's _cf_initAsFacet body fails with:
Error on server: Error: Cannot perform I/O on behalf of a different
Durable Object. I/O objects (such as streams, request/response bodies,
and others) created in the context of one Durable Object cannot be
accessed from a different Durable Object in the same isolate. This is a
limitation of Cloudflare Workers which allows us to improve overall
performance. (I/O type: Native)
wallTimeMs ≈ 11–16, cpuTimeMs ≈ 5–8 — too short for any user code; the throw happens on the FIRST line of _cf_initAsFacet. The failure is class-agnostic — it reproduces across every sub-agent class we tried, all on the same parent.
The library-side fix (#1393) is correct in intent, but the workerd-side complement is missing in our deploy region: the DurableObjectId passed via FacetStartupOptions.id is materialized in the parent's IoContext, and the runtime is not re-seating it into the facet's IoContext before the facet's _cf_initAsFacet runs. Result: the very property access PR #1393 relies on (this.ctx.id.name) is the one workerd rejects.
Reproduction
Minimal case — an Agent subclass parent that calls subAgent(Sub, name) from inside a tool execute. The parent class is bound as a Durable Object namespace; Sub is in new_sqlite_classes but not bound. The tool body matches the standard sub-agent flow:
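The repro worker itself is not reproduced here, but the binding side of the description above can be sketched as a wrangler.toml fragment (class and binding names are placeholders, not our real ones):

```toml
# Parent Agent class gets a Durable Object namespace binding; Sub deliberately does not.
[[durable_objects.bindings]]
name = "PARENT_AGENT"
class_name = "ParentAgent"

# Both the parent and every sub-agent class appear in the SQLite migration list.
[[migrations]]
tag = "v1"
new_sqlite_classes = ["ParentAgent", "Sub"]
```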
stub.configure is the call that triggers the facet's _cf_initAsFacet and throws — the configure RPC body itself never runs, because the agents-side init step in front of it fails first.
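To make the ordering concrete, here is a self-contained mock of that sequence — not the real agents/partyserver code, just stand-in classes (MockFacet, parentBoundId are invented for illustration) that model "first synchronous ctx.id read throws, so configure never executes":

```typescript
// Stand-in for a parent-bound DurableObjectId: any property access throws,
// mimicking workerd's cross-IoContext check on a Native handle.
const parentBoundId = new Proxy({} as { name?: string }, {
  get(_target, prop) {
    throw new Error(
      `Cannot perform I/O on behalf of a different Durable Object (read of ${String(prop)})`
    );
  },
});

class MockFacet {
  ctx = { id: parentBoundId };
  configureRan = false;

  // Mock of agents' init RPC: the FIRST synchronous read of ctx.id.name throws.
  _cf_initAsFacet(): void {
    const name = this.ctx.id.name; // <-- throw site
    void name;
  }

  configure(): void {
    this.configureRan = true; // never reached in the failing path
  }
}

const stub = new MockFacet();
let message = "";
try {
  stub._cf_initAsFacet(); // init step in front of the RPC body
  stub.configure();
} catch (e) {
  message = (e as Error).message;
}
console.log(message.startsWith("Cannot perform I/O"), stub.configureRan);
// Expected: true false
```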
The same call sequence "appeared" to work end-to-end against agents@0.11.5 test rigs that did not actually exercise _cf_initAsFacet's name check; the new check at the head of _cf_initAsFacet makes the existing IoContext-binding bug now visible on every spawn attempt.
Stack trace pointers (links to source on main)
The throw site, with GitHub source-side links:
- packages/agents/src/index.ts:3745-3775 — _cf_initAsFacet: this.name resolves to partyserver's name getter at cloudflare/partykit packages/partyserver/src/index.ts:776-783.
- The id used for the facet was constructed inside packages/agents/src/index.ts:3925-4032 — specifically L4003-L4007.
parentNs.idFromName(name) is invoked in the parent's IoContext (the ctx and ctx.exports here belong to the parent DO). The DurableObjectId it returns is therefore a Native handle bound to the parent's IoContext. When workerd later instantiates the facet and the facet reads this.ctx.id.name, the IoContext mismatch trips the check.
Live Workers Logs evidence
Three runs of subAgent(Sub, name) against the same parent DO instance, on the same deploy, against three different sub-agent classes (referred to as Sub-A, Sub-B, Sub-C below). All log entries from each run share the same requestId (the RPC invocation), in order. Identifiers obfuscated.
| Run | sub-agent class | rpcMethod | wallMs | cpuMs | level | message |
|---|---|---|---|---|---|---|
| 1 | Sub-A | _cf_initAsFacet | 16 | 8 | error | _cf_initAsFacet |
| 1 | Sub-A | _cf_initAsFacet | — | — | error | Error on server: Error: Cannot perform I/O on behalf of a different Durable Object … (I/O type: Native) |
| 1 | Sub-A | _cf_initAsFacet | — | — | error | Override onError(error) to handle server errors |
| 2 | Sub-B | _cf_initAsFacet | 11 | 5 | error | (same shape) |
| 3 | Sub-C | _cf_initAsFacet | 14 | 5 | error | (same shape) |
We also reproduced this across two earlier deploys (one on agents@0.11.5, one with a post-agents@0.11.6 bump) before doing a full bun.lock + node_modules wipe and a clean reinstall — the failure persists identically across all three deploys, the third being the current agents@0.11.7 one. The wall time of ~11–16 ms with ~5–8 ms CPU rules out any user code path: the throw is on the synchronous this.name access at the head of the RPC. Across runs the error carries the same workerd fingerprint discriminator (b5fa2b4f...) — i.e. workerd considers all three to be the same root cause.
Why this contradicts the #1393 release-notes claim
The agents 0.11.6 release notes (release page) state:
"Migrate facet (sub-agent) bootstrap to the documented Cloudflare facet API: pass id: parentNs.idFromName(name) to ctx.facets.get() so the facet has its own ctx.id.name. Drops the __ps_name storage write and setName() bootstrap from _cf_initAsFacet."
"idDurableObjectId | string Optional — The ID the facet sees as its own ctx.id. If omitted, the facet inherits the parent Durable Object's ID."
The doc does NOT specify the IoContext ownership of the resulting ctx.id on the facet's side. Empirically (and judging from the "Native" discriminator in the workerd error message), the DurableObjectId retains its parent-IoContext binding when handed to the facet, so any access on ctx.id.* from inside the facet's RPC methods crashes.
What we expect / minimal fix shape
When workerd instantiates a facet with a parent-supplied FacetStartupOptions.id, the id value visible on the facet's ctx.id should be:
- Either a fresh DurableObjectId bound to the facet's own IoContext (preferred — matches the implicit contract that ctx.id is a per-DO Native value).
- Or, at minimum, a DurableObjectId whose property accesses (.name, .toString(), .equals(...)) do NOT trip the cross-IoContext check on the facet side.
Either form makes partyserver's name getter — the very accessor PR #1393 was designed around — work, and unblocks every downstream consumer of subAgent().
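To illustrate the second option: a "detached" id would be a plain JS object that carries the same surface (.name, .toString(), .equals()) with no Native handle behind it, so nothing it does can trip an IoContext check. A hypothetical sketch (DetachedId and detachId are invented names, not a real workerd API):

```typescript
// Hypothetical "detached" DurableObjectId surface: plain JS values only,
// so no cross-IoContext check can fire when the facet reads it.
interface DetachedId {
  readonly name?: string;
  toString(): string;
  equals(other: DetachedId): boolean;
}

function detachId(name: string, hexId: string): DetachedId {
  return {
    name,
    toString: () => hexId,
    // Equality by the underlying hex id, mirroring DurableObjectId.equals().
    equals: (other) => other.toString() === hexId,
  };
}

const facetId = detachId("sub-agent-7", "9f3c"); // values illustrative
console.log(facetId.name, facetId.equals(detachId("sub-agent-7", "9f3c")));
// Expected: sub-agent-7 true
```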
Workarounds we considered and why they don't work in JS
- Override _cf_initAsFacet in the user's Agent subclass to skip this.name. Insufficient: partyserver's private #ensureInitialized (called from every entry point, including alarms, fetches, and websocket handlers) still reads this.ctx.id.name directly to choose between #persistNameFallbackFromCtxId and the legacy storage-hydrate path. We can't override a private method, and the read itself is the throw site, so subclassing name to ignore ctx.id doesn't help.
- Downgrade to agents@0.11.5. Reproduces the same throw later in the boot sequence: the facet still ends up holding a parent-bound ctx.id; the only thing that changed in #1393 ("Migrate facet bootstrap to explicit FacetStartupOptions.id") was making the throw fire one stack frame earlier.
- Hand-rolled setName(name) from inside _cf_initAsFacet. partyserver's setName reads ctx.id.name BEFORE comparing against the supplied name, so it trips the same check.
We don't see a path to fix this from JS alone without forking partyserver. The fix has to be in workerd (or in the ctx.facets.get(...) runtime implementation).
Related issues / PRs (for reviewers)
- #1393 (merged in 0.11.6): the explicit FacetStartupOptions.id facet bootstrap that this issue depends on. Closes #1385.
- #1385 (closed by #1393): "Migrate facet bootstrap to partyserver setName() (drops direct __ps_name storage write)". The reporter explicitly noted partyserver 0.5.0 made this.name resolve from ctx.id.name natively but that facets get ctx.id.name === undefined by default — the migration plan was to flip facets to ctx.id.name !== undefined via idFromName. This issue is the follow-up showing that flipping ctx.id.name to a non-undefined value introduces the cross-IoContext access regression.
- #1330 (merged in 0.11.5): unrelated earlier cross-DO I/O fix that replaced the parent-constructed Request + stub.fetch() handshake with the RPC-based _cf_initAsFacet. Kept here only to head off conflation: that fix targeted a DIFFERENT Native handle (Request), not DurableObjectId — the two are distinguishable by I/O type (Request body vs. our Native / DurableObjectId).
- #1380 ("wire up this.ctx.id.name in partyserver", partyserver 0.5.0): made this.name resolve from ctx.id.name. The combination of #1380 + #1393 is what makes this bug observable — pre-#1380, partyserver's name getter went through storage and never touched ctx.id.
Suggested next steps (writer's note, optional)
If a workerd-side IoContext rebind is months away, an interim library workaround in partyserver would be: in name's getter, wrap the this.ctx.id.name read in a try/catch that on Native-type IoContext errors falls through to the existing this.#_name / storage paths. That would let _cf_initAsFacet populate #_name manually (the __ps_name storage write), restoring the pre-#1393 behavior under the hood while keeping the documented surface unchanged. Happy to send a PR if that direction makes sense.
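A sketch of what that fallback getter could look like — heavily simplified, with invented names (AgentWithFallback, setNameFallback) and a mocked ctx.id whose property reads throw the way workerd does today; it is not the real partyserver internals:

```typescript
// Simplified model of the proposed partyserver fallback: if reading
// ctx.id.name throws the cross-IoContext error, fall back to a manually
// persisted name instead of crashing.
class AgentWithFallback {
  #_name: string | undefined;

  constructor(private ctxId: { readonly name?: string }) {}

  // Stand-in for the __ps_name bootstrap write _cf_initAsFacet would do.
  setNameFallback(name: string): void {
    this.#_name = name;
  }

  get name(): string | undefined {
    try {
      const n = this.ctxId.name; // normal path when the id is facet-bound
      if (n !== undefined) return n;
    } catch (e) {
      // Swallow only the cross-IoContext error; rethrow anything else.
      if (!(e as Error).message.includes("different Durable Object")) throw e;
    }
    return this.#_name; // pre-#1393 behavior, restored under the hood
  }
}

// Mock a parent-bound id whose property reads throw, as workerd does today.
const throwingId = new Proxy({} as { readonly name?: string }, {
  get() {
    throw new Error("Cannot perform I/O on behalf of a different Durable Object");
  },
});

const agent = new AgentWithFallback(throwingId);
agent.setNameFallback("sub-agent-7");
console.log(agent.name);
// Expected: sub-agent-7
```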
If it would help, I can attach the full Workers Logs query JSON, the exact wrangler.toml minus secrets, and a reduced-test-case worker zip — happy to share via Cloudflare support so identifiers stay private.