Is there an existing issue for this?
How do you use Sentry?
Sentry Saas (sentry.io)
Which SDK are you using?
@sentry/node - fastify
SDK Version
10.47.0
Framework Version
Fastify 5.8.4, Node.js 22.15.1
Link to Sentry event
No response
Reproduction Example/SDK Setup
import * as Sentry from "@sentry/node";
import { nodeProfilingIntegration } from "@sentry/profiling-node";

Sentry.init({
  dsn: "...",
  tracesSampleRate: 0.1,
  integrations: [nodeProfilingIntegration()],
  profilesSampleRate: 1.0,
});
Fastify setup:
Sentry.setupFastifyErrorHandler(app);
Steps to Reproduce
- Set up a Fastify v5 app with setupFastifyErrorHandler and a database (pg/Kysely)
- Add route handlers that query and return ~6000 rows
- Run a load test with concurrent HTTP clients using keep-alive connections
- Take 3 heap snapshots over 10 minutes and run memlab find-leaks
Expected Result
Memory from completed request handlers should be freed after the response is sent.
Actual Result
Memory grows by ~11 MB/hour in production (ECS task, 1820 MB limit); the process is OOM-killed after ~24 hours.
memlab find-leaks shows 23.6 MB of query results from a single request retained and never freed. The retention chain:
TCPSocketWrap (alive — HTTP keep-alive)
→ Socket → _httpMessage → ServerResponse [23.6MB]
→ _events.close → [contextWrapper] ← added by Sentry
→ context → callback chain
→ fulfilled → Promise → PromiseReaction
→ Generator (suspended async function) [23.6MB]
→ parameters_and_registers ← local variables (query results)
Sentry's contextWrapper is registered as a listener on ServerResponse._events.close. This event only fires when the underlying socket closes, not when the response finishes. With HTTP/1.1 keep-alive, the socket persists across requests, so every traced request's full async context, including database query results held in async function local variables (V8 Generator parameters_and_registers), is retained until the socket closes.
With nodeProfilingIntegration(), PROFILE_MAP (LRUMap) adds a second retention path that keeps scopes alive even after socket close.
Additional Context
Proposal: remove the contextWrapper listener as soon as the response finishes (on the 'finish' event, or in a Fastify onResponse hook) rather than relying on socket close.
Priority
React with 👍 to help prioritize this issue. Please use comments to provide useful context, avoiding +1 or me too, to help us triage it.