Podcast audio is currently streamed through the SvelteKit server in src/routes/(main)/(protected)/podcasts/[...key]/+server.ts, which calls getPodcastObject and pipes the S3 response body to the client. Every in-flight playback holds one S3 socket open for as long as the listener is reading bytes — and historically held it open much longer when the listener disconnected, which exhausted the connection pool and surfaced as @smithy/node-http-handler:WARN - socket usage at capacity=50 ....
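For context, a minimal sketch of the shape of that proxy route. The real handler lives in src/routes/(main)/(protected)/podcasts/[...key]/+server.ts; the import path and exact response handling here are illustrative, not the repo's actual code.

```ts
// Simplified sketch of the current proxy behaviour, not the repo's exact code.
import type { RequestHandler } from './$types';
import { Readable } from 'node:stream';
import { getPodcastObject } from '$lib/server/podcasts'; // illustrative import path

export const GET: RequestHandler = async ({ params }) => {
  // getPodcastObject wraps S3 GetObject; in Node, Body is a Readable stream.
  const object = await getPodcastObject(params.key);

  // Piping the S3 body through the response keeps one pooled S3 socket busy
  // for the entire duration of playback.
  return new Response(Readable.toWeb(object.Body as Readable) as unknown as ReadableStream, {
    headers: {
      'Content-Type': object.ContentType ?? 'audio/mpeg',
      'Cache-Control': 'private, max-age=0, must-revalidate'
    }
  });
};
```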
Permanent fix
Serve audio the way streaming services serve video: let the client fetch directly from S3/CloudFront so Node is out of the data path entirely.
Options to evaluate:
Issue short-lived presigned S3 GET URLs and return them to the client; the <audio> element fetches from S3 directly.
Front the bucket with the existing CloudFront distribution and sign URLs (or use signed cookies) for access control.
Either approach removes Node from the audio data path, freeing the SvelteKit process from per-stream socket pressure and giving us CDN-level caching, range handling, and cheaper egress for free.
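A minimal sketch of the first option (presigned S3 GET URLs), assuming a bucket name and helper that are illustrative rather than the repo's actual code:

```ts
import { GetObjectCommand, S3Client } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

const s3 = new S3Client({});

// Returns a short-lived URL the <audio> element can fetch directly from S3.
export async function getPodcastPlaybackUrl(key: string): Promise<string> {
  return getSignedUrl(
    s3,
    new GetObjectCommand({ Bucket: 'podcast-audio', Key: key }), // bucket name is a placeholder
    { expiresIn: 300 } // seconds; a short TTL limits how long a leaked URL stays useful
  );
}
```

Seeking generally works against a presigned GET URL because the Range header is not among the signed headers by default, so the browser can issue Range requests straight to S3.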
Short-term mitigations already in flight
fix: S3 socket exhaustion for podcast delivery #558 — forward the request AbortSignal through getPodcastObject to client.send, so when a listener seeks, closes the tab, or drops off the network, the underlying S3 socket is destroyed within milliseconds instead of being pinned until S3 finishes streaming or TCP fails.
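A minimal sketch of what that abort propagation looks like; the bucket name and function signature are illustrative, but AWS SDK v3's send() does accept an abortSignal option:

```ts
import { GetObjectCommand, S3Client } from '@aws-sdk/client-s3';

const s3 = new S3Client({});

export async function getPodcastObject(key: string, signal?: AbortSignal) {
  // When the listener disconnects, the route's request.signal aborts; forwarding it
  // lets the SDK destroy the underlying S3 socket immediately instead of waiting
  // for S3 to finish streaming the object.
  return s3.send(
    new GetObjectCommand({ Bucket: 'podcast-audio', Key: key }), // bucket is a placeholder
    { abortSignal: signal }
  );
}

// In the route handler: await getPodcastObject(params.key, request.signal);
```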
These mitigations reduce socket pressure on the proxy path, but the proxy path itself is the architectural bottleneck. Each playback still consumes one socket per stream, and seeks still re-trigger fresh S3 reads. The ceiling stays low until we get Node out of the data path.
Why this is worth doing properly
While investigating the warning, we confirmed:
A single S3Client singleton with the default maxSockets: 50 becomes the bottleneck for the entire process under modest concurrent listening.
HTML5 <audio> is a heavy generator of range thrash — every seek opens a new Range request, multiplying socket churn.
Cache-Control: private, max-age=0, must-revalidate on the proxy route forces a fresh S3 fetch for every play and every revalidation.
Even with abort propagation, slow consumers legitimately occupy sockets for the duration of their playback. There is no proxy-side fix that escapes that ceiling.
Moving to direct delivery solves all four at once: no shared pool, CDN handles range thrash near the edge, CDN caches by default, and slow consumers occupy CDN sockets — not ours.
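For reference, the capacity=50 in the warning comes from the default https.Agent that the SDK's NodeHttpHandler creates. It can be raised, but that only moves the ceiling rather than removing the per-stream cost, which is why it is not the proposed fix. A sketch, with the pool size as an arbitrary example value:

```ts
import { Agent } from 'node:https';
import { S3Client } from '@aws-sdk/client-s3';
import { NodeHttpHandler } from '@smithy/node-http-handler';

// Stopgap only: a larger socket pool delays, but does not eliminate, exhaustion.
const s3 = new S3Client({
  requestHandler: new NodeHttpHandler({
    httpsAgent: new Agent({ keepAlive: true, maxSockets: 200 })
  })
});
```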
Suggested next steps
Decide between presigned URL vs. CloudFront signed URL/cookies. CloudFront is the stronger long-term fit since the distribution already exists in dxd-transform-infrastructure and gives us caching + cheaper egress in addition to offloading sockets.
Define how access control plumbs through: short TTL on signed URLs, scoped to authenticated session, optionally tied to user ID.
Migrate the podcast route to return a signed URL instead of a streaming response, and update the player to consume it (see the sketch after this list).
Once direct delivery is verified end-to-end, retire getPodcastObject and the [...key]/+server.ts route.
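A minimal sketch of what the migrated route could look like with the CloudFront signed-URL variant. The distribution domain, key pair ID, private key source, and TTL are placeholders, not settled decisions:

```ts
import { json } from '@sveltejs/kit';
import { getSignedUrl } from '@aws-sdk/cloudfront-signer';
import type { RequestHandler } from './$types';

export const GET: RequestHandler = async ({ params }) => {
  const url = getSignedUrl({
    url: `https://podcasts.example-cdn.net/${params.key}`, // placeholder distribution domain
    keyPairId: process.env.CF_KEY_PAIR_ID!,                // placeholder config source
    privateKey: process.env.CF_PRIVATE_KEY!,
    dateLessThan: new Date(Date.now() + 5 * 60 * 1000).toISOString() // short TTL
  });

  // No audio bytes flow through Node any more, just a small JSON payload.
  return json({ url });
};
```

On the player side, the change is small: request the signed URL, then let the `<audio>` element stream straight from the CDN.

```ts
// Player side sketch: seeks become Range requests served at the edge.
const res = await fetch(`/podcasts/${key}`);
const { url } = await res.json();
audioElement.src = url;
```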