feat: add Hono example for Web Standards runtimes #106
s3proxy v4's pure proxy.fetch() returns { stream, status, headers } —
exactly what Web Response wants. examples/hono-basic.ts demonstrates
that on Bun, Cloudflare Workers, Deno, and Node (via @hono/node-server).
Two adapters are needed at the framework boundary (this is the
"Web Standards vs Node-shape" trade-off captured in #104):
1. c.req.raw.headers (Web `Headers`) → Object.fromEntries → Record
for HttpRequest.headers. Single-point conversion at the example;
the library API stays focused on one header shape.
2. proxy.fetch() returns a Node Readable; new Response() wants a Web
ReadableStream. Readable.toWeb() bridges them with backpressure
preserved end-to-end (no buffering).
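Adapter 1 can be sketched in isolation (`toHeaderRecord` is an illustrative name, not part of the library): the Web `Headers` iterator yields `[name, value]` pairs, which `Object.fromEntries` collapses into the `Record` shape.

```ts
// Web Headers -> Record<string, string>, the single-point conversion.
// Iteration over Headers yields lower-cased names per the Fetch spec.
const toHeaderRecord = (h: Headers): Record<string, string> =>
  Object.fromEntries(h);

const rec = toHeaderRecord(new Headers({ Accept: 'text/html' }));
// rec is { accept: 'text/html' } -- note the lower-cased key
```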
Picked path (A) from #104: adapt at the example. Path (B) — widen
HttpRequest['headers'] to `Headers | Record` — waits for a second
framework to force it.
devDeps: hono, @hono/node-server. Both pinned within their stable
majors; engines unchanged (Node ≥22.13).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…eline
- Add examples/hono-basic.ts to the smoke loop (alongside express,
  fastify). Same three-check contract: health=200, /index.html=200,
  missing=404.
- After the existing assertions, run N=10 sequential GETs of the known
  key and report mean/max/p95 latency per example. Visibility-only (no
  threshold gate) so CI surfaces a regression in any framework's
  framing path without flaking on network jitter. curl -m 5 already
  catches catastrophic hangs.
- p95 with N=10 collapses to max (no interpolation); commented inline.

Local run on this branch shows mean=55-58ms across all three
frameworks, confirming the "Hono is theoretically slower on Node"
overhead is in the noise vs S3 RTT.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
- New "Hono Integration (Web Standards)" section parallel to the
  Express/Fastify ones, with the full handler shape (Object.fromEntries
  for headers, Readable.toWeb for the body, Web Response).
- Performance note: honest about the two-extra-layers cost on Node via
  @hono/node-server (IncomingMessage ↔ Web Request/Response), bounded
  to "unmeasurable for network-bound streaming" with the smoke gate's
  per-example latency baseline as supporting evidence. Recommends
  Express/Fastify for sub-ms CPU paths; Hono is the natural fit on
  Bun/Workers/Deno where the conversion layer doesn't exist.
- Hono added to "Framework Compatibility" list with ✅.
- examples/hono-basic.ts added to the project-structure listing.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes #104.
What
Adds `examples/hono-basic.ts` showing how v4's pure `proxy.fetch()`
drops into a Hono handler that returns a Web `Response`. Targets
Bun, Cloudflare Workers, Deno, and Node (via `@hono/node-server`).
```ts
app.on(['GET', 'HEAD'], '/*', async (c) => {
  const url = new URL(c.req.url);
  const { stream, status, headers } = await proxy.fetch({
    url: url.pathname + url.search,
    method: c.req.method,
    headers: Object.fromEntries(c.req.raw.headers),
  } as HttpRequest);
  const body = Readable.toWeb(stream as Readable) as ReadableStream;
  return new Response(body, { status, headers });
});
```
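The `Readable.toWeb` bridge in the handler can be exercised in isolation with only the standard library; a minimal sketch (no s3proxy involved, sample data invented):

```ts
import { Readable } from 'node:stream';

// Bridge a Node Readable into a Web Response body. Readable.toWeb
// hands chunks across with backpressure intact, so nothing buffers.
const stream = Readable.from([Buffer.from('hello '), Buffer.from('world')]);
const body = Readable.toWeb(stream) as unknown as ReadableStream;
const res = new Response(body, { status: 200 });
const text = await res.text();
console.log(text); // "hello world"
```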
The decision (path A vs path B from #104)
Picked path A: adapt `Web Headers → Record` at the example
boundary with `Object.fromEntries`. The library API stays focused
on one header shape. Path B (widening `HttpRequest['headers']` to
`Headers | Record`) waits for a second framework to force it.
Smoke gate enhancement
`scripts/smoke-examples.sh` now also iterates `hono-basic.ts` and
adds a per-example latency baseline (10 sequential GETs of
`/index.html`, reports mean/max/p95). No threshold — visibility only.
Local run on this branch:
```
[smoke] OK: examples/express-basic.ts (... n=10 mean=55ms max=58ms p95=58ms)
[smoke] OK: examples/fastify-basic.ts (... n=10 mean=56ms max=59ms p95=59ms)
[smoke] OK: examples/hono-basic.ts (... n=10 mean=58ms max=80ms p95=80ms)
```
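For context on why p95 equals max in these runs: with the nearest-rank method and only ten samples, the 95th-percentile rank lands on the last element. A small illustrative sketch (not the actual shell script; sample values invented):

```ts
// Nearest-rank percentile over latency samples (ms). With n=10 there
// is no interpolation: the p95 rank is ceil(0.95 * 10) = 10, i.e. the
// max, which is why p95 == max in the smoke output.
const p = (samples: number[], q: number): number => {
  const sorted = [...samples].sort((a, b) => a - b);
  return sorted[Math.ceil(q * sorted.length) - 1];
};

const latencies = [55, 55, 56, 56, 57, 57, 57, 58, 58, 58]; // illustrative
const mean = latencies.reduce((a, b) => a + b, 0) / latencies.length;
console.log({ mean, max: Math.max(...latencies), p95: p(latencies, 0.95) });
```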
Performance note (also in README)
The numbers are within network jitter, confirming that the "Hono on
Node has two extra conversion layers" overhead is lost in the noise
against S3 RTT for network-bound streaming. The README documents this
honestly so the inevitable "but Hono is slower" issue arrives with
context.
Deliberately not in this PR
This PR doesn't ship a Hono Docker variant; the integration tests
still target fastify-docker. Adding a Hono Docker target is its own
follow-up.
Test plan
Three commits
1. feat: add Hono example for Web Standards runtimes (closes #104)
2. test(smoke): wire Hono example into the gate; per-example latency baseline
3. docs: document Hono integration; add to framework compatibility list
🤖 Generated with Claude Code