
DO RPC ReadableStream<T> with non-Uint8Array chunks fails with confusing "Network connection lost" runtime error #6675

@threepointone

Description


Summary

When a Durable Object RPC method returns ReadableStream<{...}> (object chunks instead of Uint8Array), the consumer's first reader.read() throws Error: Network connection lost. before any data flows. The DO-side start(controller) callback never runs.

The constraint itself is fine — @cloudflare/workers-types's Rpc.Stubable union already restricts ReadableStream over RPC to ReadableStream<Uint8Array> (experimental/index.d.ts ~line 14122), so TypeScript narrows non-byte stream returns to never. The issue is the runtime error message when the type system is bypassed (via any, explicit casts, or framework wrappers whose stub typing isn't Stub<T>-narrowed). "Network connection lost." doesn't tell you which constraint was actually violated, and since no network is involved at all here, it actively misleads debugging.

Repro

A self-contained repro (~70 LOC) is at https://github.com/cloudflare/workerd-rpc-object-stream-repro (or inline below). Versions tested:

  • wrangler 4.85.0
  • workerd 1.20260424.1
  • @cloudflare/workers-types 4.20260424.1
  • compatibility_date 2026-04-15

src/index.ts

import { DurableObject } from "cloudflare:workers";

interface Env {
  STREAM_PROVIDER: DurableObjectNamespace<StreamProvider>;
}

export class StreamProvider extends DurableObject<Env> {
  /** ReadableStream of Uint8Array chunks — works as expected. */
  streamBytes(): ReadableStream<Uint8Array> {
    const encoder = new TextEncoder();
    return new ReadableStream<Uint8Array>({
      async start(controller) {
        for (let n = 0; n < 5; n++) {
          controller.enqueue(encoder.encode(`chunk-${n}\n`));
          await new Promise((r) => setTimeout(r, 50));
        }
        controller.close();
      }
    });
  }

  /**
   * ReadableStream of plain object chunks. Chunks are trivially
   * structured-clonable (numbers + strings only). The consumer's
   * first reader.read() throws "Network connection lost" — and
   * start(controller) above NEVER runs on the DO side.
   */
  streamObjects(): ReadableStream<{ n: number; tag: string }> {
    return new ReadableStream<{ n: number; tag: string }>({
      async start(controller) {
        for (let n = 0; n < 5; n++) {
          controller.enqueue({ n, tag: `chunk-${n}` });
          await new Promise((r) => setTimeout(r, 50));
        }
        controller.close();
      }
    });
  }
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);
    const mode = url.searchParams.get("mode") ?? "bytes";
    const stub = env.STREAM_PROVIDER.get(env.STREAM_PROVIDER.idFromName("repro"));
    const t0 = Date.now();

    try {
      if (mode === "objects") {
        // TS correctly narrows the return to `Promise<never>` because
        // Rpc.Stubable only allows ReadableStream<Uint8Array>. The
        // cast below is what real code hits when going through `any`
        // or via a framework wrapper that returns InstanceType<T>
        // instead of Stub<T> (e.g. cloudflare/agents' subAgent()).
        const stream = (await (
          stub.streamObjects as unknown as () => Promise<
            ReadableStream<{ n: number; tag: string }>
          >
        )()) as ReadableStream<{ n: number; tag: string }>;
        const reader = stream.getReader();
        const chunks: unknown[] = [];
        while (true) {
          const { done, value } = await reader.read();
          if (done) break;
          chunks.push(value);
        }
        return Response.json({ mode, ok: true, elapsedMs: Date.now() - t0, chunkCount: chunks.length, chunks });
      }
      // mode === "bytes"
      const stream = await stub.streamBytes();
      const reader = stream.getReader();
      const decoder = new TextDecoder();
      const chunks: string[] = [];
      while (true) {
        const { done, value } = await reader.read();
        if (done) break;
        chunks.push(decoder.decode(value));
      }
      return Response.json({ mode: "bytes", ok: true, elapsedMs: Date.now() - t0, chunkCount: chunks.length, chunks });
    } catch (err) {
      const e = err as Error;
      return Response.json({ mode, ok: false, elapsedMs: Date.now() - t0, errorName: e?.name, errorMessage: e?.message, errorStack: e?.stack }, { status: 500 });
    }
  }
} satisfies ExportedHandler<Env>;

wrangler.jsonc

{
  "name": "workerd-rpc-object-stream-repro",
  "main": "src/index.ts",
  "compatibility_date": "2026-04-15",
  "compatibility_flags": ["nodejs_compat"],
  "durable_objects": {
    "bindings": [{ "class_name": "StreamProvider", "name": "STREAM_PROVIDER" }]
  },
  "migrations": [
    { "tag": "v1", "new_sqlite_classes": ["StreamProvider"] }
  ]
}

Run

npm install
npx wrangler dev --port 8799
# in another shell:
curl -s 'http://127.0.0.1:8799/?mode=bytes'   | jq .
curl -s 'http://127.0.0.1:8799/?mode=objects' | jq .

Actual output

// GET /?mode=bytes — works
{
  "mode": "bytes",
  "ok": true,
  "elapsedMs": 256,
  "chunkCount": 5,
  "chunks": ["chunk-0\n", "chunk-1\n", "chunk-2\n", "chunk-3\n", "chunk-4\n"]
}

// GET /?mode=objects — fails immediately (elapsedMs: 0) before any chunk flows
{
  "mode": "objects",
  "ok": false,
  "elapsedMs": 0,
  "errorName": "Error",
  "errorMessage": "Network connection lost.",
  "errorStack": "Error: Network connection lost.\n    at async Object.fetch (.../index.js:63:35)\n    ..."
}

elapsedMs: 0 is the smoking gun — the failure happens synchronously, before any pull/start lifecycle on either side.

Expected behavior

Either of:

  1. (Strongly preferred, low cost) Better runtime error. Surface a descriptive error like "DO RPC ReadableStream chunks must be Uint8Array; got object chunk { n, tag }" (or even just "DO RPC ReadableStream only supports Uint8Array chunks") instead of the generic "Network connection lost.". Match the runtime to the constraint already encoded in the public type definitions. This would have saved hours of debugging in our case — "Network connection lost" sent us looking for I/O timeouts, eviction, idle DOs, alarms, and other red herrings.

  2. (Larger, optional) Support object chunks via structured-clone. The chunks in this repro are trivially clonable. If workerd already does structured-clone on RPC arguments and returns, extending the stream-bridge to lower clonable chunks to a wire format would remove the constraint entirely. The type definitions would need to relax Stubable to allow ReadableStream<StructuredCloneable>.

(1) alone is sufficient and would close the silent-failure-mode hazard for everyone. (2) is optional ergonomic improvement on top.
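As a quick sanity check supporting (2): the repro's object chunks really do survive structured clone unchanged, so lowering them to a wire format would not require any new serialization machinery for chunks like these. (structuredClone is a standard global in workerd and Node 17+; this snippet is a standalone check, not part of the repro worker.)

```typescript
// The same chunk shape the repro's streamObjects() enqueues: numbers and
// strings only, well inside the structured-clone algorithm's supported types.
const chunk = { n: 3, tag: "chunk-3" };

// structuredClone produces a deep copy; if this throws (DataCloneError),
// the value is NOT structured-clonable — it does not throw here.
const cloned = structuredClone(chunk);
```
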

Why this matters in practice

Three real-world paths bypass the type narrowing and hit the runtime error:

  • any casts in untyped projects or rapid prototyping.
  • Framework wrappers whose stub typing returns InstanceType<T> instead of Stub<T>. The Cloudflare Agents framework's subAgent(Cls, name) is one such case — types come through as the helper class directly, so Rpc.Stubable constraints don't fire at the call site.
  • Porting code from non-RPC contexts where ReadableStream<T> works for any T (e.g., in-process pipelines, fetch response bodies in the same isolate).

We surfaced this from cloudflare/agents examples/agents-as-tools, where a parent agent reads helper-event frames over DO RPC. With object chunks, the helper's start(controller) callback never ran and the parent's reader threw "Network connection lost" before any data flowed. Switching to Uint8Array (NDJSON-encoded) immediately fixed it. The fix took 30 seconds once we knew what was going on; getting there from the error message took meaningfully longer than it should have.
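For anyone else hitting this, the NDJSON workaround can be sketched as a pair of helpers. These names (ndjsonEncode / ndjsonDecode) are hypothetical, not part of any Cloudflare API — just a minimal illustration of encoding object chunks into the Uint8Array shape that DO RPC accepts, and parsing them back on the consumer side:

```typescript
// DO side: wrap an object stream into newline-delimited JSON bytes,
// which is a legal ReadableStream<Uint8Array> over DO RPC.
function ndjsonEncode<T>(source: ReadableStream<T>): ReadableStream<Uint8Array> {
  const encoder = new TextEncoder();
  const reader = source.getReader();
  return new ReadableStream<Uint8Array>({
    async pull(controller) {
      const { done, value } = await reader.read();
      if (done) {
        controller.close();
        return;
      }
      controller.enqueue(encoder.encode(JSON.stringify(value) + "\n"));
    },
  });
}

// Consumer side: buffer bytes, split on newlines, JSON.parse each line.
async function* ndjsonDecode<T>(bytes: ReadableStream<Uint8Array>): AsyncGenerator<T> {
  const decoder = new TextDecoder();
  const reader = bytes.getReader();
  let buffer = "";
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    let idx: number;
    while ((idx = buffer.indexOf("\n")) >= 0) {
      const line = buffer.slice(0, idx);
      buffer = buffer.slice(idx + 1);
      if (line.trim()) yield JSON.parse(line) as T;
    }
  }
  // Flush any trailing partial line (e.g. if the producer omitted a final \n).
  if (buffer.trim()) yield JSON.parse(buffer) as T;
}
```

The DO method returns ndjsonEncode(objectStream) instead of the raw object stream, and the consumer iterates `for await (const chunk of ndjsonDecode(await stub.method()))`. Only JSON-serializable chunks survive the roundtrip, which was sufficient in our case.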

Related

  • Existing open issue cloudflare/workers-sdk#11071 describes a similar "Network connection lost" symptom from DO RPC ReadableStream, but on the cancel path with byte streams. Different scenario, same opaque error message — points at a broader theme of "Network connection lost" being the catch-all error for several distinct DO RPC stream lifecycle failures.
