
Conversation

@andreiborza
Member

@andreiborza andreiborza commented Nov 14, 2025

The flush timeout was being reset on every incoming log, preventing flushes when logs arrived continuously. Now, the timer starts on the first log and won't get reset, ensuring logs flush within the configured interval.

Fixes #18204, getsentry/sentry-react-native#5378

v9 backport: #18214
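
As an illustration of the change, here is a minimal TypeScript sketch of the new timer behavior. All names (`FLUSH_INTERVAL_MS`, `buffer`, `flushBuffer`, `addItem`) are hypothetical, not the SDK's actual internals; the point is that the timer is armed only when none is pending, so continuous traffic can no longer postpone the flush.

```ts
// Hypothetical sketch of the fixed behavior, not the actual SDK code.
const FLUSH_INTERVAL_MS = 5000;

let flushTimer: ReturnType<typeof setTimeout> | undefined;
const buffer: unknown[] = [];

function flushBuffer(): void {
  // Send and clear the buffered items, then allow a new timer to be armed.
  buffer.length = 0;
  flushTimer = undefined;
}

function addItem(item: unknown): void {
  buffer.push(item);
  // Previously, the equivalent code cleared and re-created the timer on every
  // item, so under continuous logging the callback never got a chance to run.
  if (flushTimer === undefined) {
    flushTimer = setTimeout(flushBuffer, FLUSH_INTERVAL_MS);
  }
}
```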

@andreiborza andreiborza changed the title fix(core): Fix log flush timeout starvation with continuous logging fix(core): Fix logs and metrics flush timeout starvation with continuous logging Nov 14, 2025
@github-actions
Contributor

github-actions bot commented Nov 14, 2025

size-limit report 📦

Path Size % Change Change
@sentry/browser 24.61 kB +0.04% +9 B 🔺
@sentry/browser - with treeshaking flags 23.11 kB +0.05% +10 B 🔺
@sentry/browser (incl. Tracing) 41.27 kB +0.03% +10 B 🔺
@sentry/browser (incl. Tracing, Profiling) 45.54 kB +0.03% +10 B 🔺
@sentry/browser (incl. Tracing, Replay) 79.74 kB +0.02% +10 B 🔺
@sentry/browser (incl. Tracing, Replay) - with treeshaking flags 69.41 kB +0.02% +9 B 🔺
@sentry/browser (incl. Tracing, Replay with Canvas) 84.43 kB +0.02% +10 B 🔺
@sentry/browser (incl. Tracing, Replay, Feedback) 96.59 kB +0.01% +9 B 🔺
@sentry/browser (incl. Feedback) 41.28 kB +0.02% +8 B 🔺
@sentry/browser (incl. sendFeedback) 29.29 kB +0.04% +10 B 🔺
@sentry/browser (incl. FeedbackAsync) 34.2 kB +0.03% +8 B 🔺
@sentry/react 26.3 kB +0.04% +10 B 🔺
@sentry/react (incl. Tracing) 43.23 kB +0.03% +10 B 🔺
@sentry/vue 29.09 kB +0.04% +10 B 🔺
@sentry/vue (incl. Tracing) 43.04 kB +0.03% +9 B 🔺
@sentry/svelte 24.62 kB +0.04% +9 B 🔺
CDN Bundle 26.91 kB +0.03% +8 B 🔺
CDN Bundle (incl. Tracing) 41.81 kB +0.02% +7 B 🔺
CDN Bundle (incl. Tracing, Replay) 78.33 kB +0.01% +6 B 🔺
CDN Bundle (incl. Tracing, Replay, Feedback) 83.81 kB +0.01% +6 B 🔺
CDN Bundle - uncompressed 78.84 kB +0.01% +2 B 🔺
CDN Bundle (incl. Tracing) - uncompressed 124 kB +0.01% +2 B 🔺
CDN Bundle (incl. Tracing, Replay) - uncompressed 240.03 kB +0.01% +2 B 🔺
CDN Bundle (incl. Tracing, Replay, Feedback) - uncompressed 252.79 kB +0.01% +2 B 🔺
@sentry/nextjs (client) 45.35 kB +0.02% +9 B 🔺
@sentry/sveltekit (client) 41.66 kB +0.03% +11 B 🔺
@sentry/node-core 50.87 kB +0.01% +5 B 🔺
@sentry/node 158.09 kB +0.01% +7 B 🔺
@sentry/node - without tracing 92.74 kB +0.01% +6 B 🔺
@sentry/aws-serverless 106.5 kB +0.01% +3 B 🔺

View base workflow run

@github-actions
Contributor

github-actions bot commented Nov 14, 2025

node-overhead report 🧳

Note: This is a synthetic benchmark with a minimal express app and does not necessarily reflect the real-world performance impact in an application.

Scenario Requests/s % of Baseline Prev. Requests/s Change %
GET Baseline 8,316 - 11,316 -27%
GET With Sentry 1,325 16% 1,654 -20%
GET With Sentry (error only) 5,983 72% 7,684 -22%
POST Baseline 1,139 - 1,188 -4%
POST With Sentry 510 45% 573 -11%
POST With Sentry (error only) 1,018 89% 1,040 -2%
MYSQL Baseline 3,273 - 4,059 -19%
MYSQL With Sentry 416 13% 562 -26%
MYSQL With Sentry (error only) 2,599 79% 3,308 -21%

View base workflow run

flushTimeout = setTimeout(() => {
  flushFn(client);
  // Note: isTimerActive is reset by the flushHook handler above, not here,
  // to avoid race conditions when new items arrive during the flush.

Bug: Stuck Timer Halts Automatic Flushing

The isTimerActive flag can get stuck as true when the timer fires but the buffer is empty. This happens because isTimerActive is only reset in the flushHook handler (line 121), but _INTERNAL_flushLogsBuffer returns early without emitting this hook when the buffer is empty. Once stuck, no new timers can start since !isTimerActive evaluates to false, preventing automatic flushing of subsequent logs/metrics until the weight threshold is exceeded.


@andreiborza (Member, Author)

For isTimerActive to be set to true, an item must have come in via afterCaptureHook, which in turn starts the timeout. Once the buffer flushes, isTimerActive is set back to false via flushHook.

I don't think this scenario can happen.
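
To make this concrete, below is a toy model of the hook flow described above (hypothetical names, not Sentry's actual hooks or _INTERNAL_flushLogsBuffer). An item is buffered before the timer is armed, so when the timer fires the buffer is non-empty, the flush runs, the flush hook fires, and the flag is cleared.

```ts
// Toy model of the described hook flow; all names are hypothetical.
const FLUSH_INTERVAL_MS = 5000;

let isTimerActive = false;
const buffer: unknown[] = [];

// Plays the role of the flushHook handler: runs only after an actual flush.
function onFlushHook(): void {
  isTimerActive = false;
}

// Plays the role of the flush function: early-returns on an empty buffer
// without emitting the flush hook.
function flushBuffer(): void {
  if (buffer.length === 0) {
    return;
  }
  buffer.length = 0;
  onFlushHook();
}

// Plays the role of the afterCapture hook handler.
function onAfterCapture(item: unknown): void {
  buffer.push(item); // item is buffered before the timer is armed
  if (!isTimerActive) {
    isTimerActive = true;
    setTimeout(flushBuffer, FLUSH_INTERVAL_MS);
  }
}
```

In this model the flag is only true while a buffered item is waiting for a pending flush, which is the scenario the reply above argues is the only one that can occur.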

andreiborza added a commit that referenced this pull request Nov 14, 2025
The flush timeout was being reset on every incoming log, preventing flushes when
logs arrived continuously. Now, the timer starts on the first log and won't
get reset, ensuring logs flush within the configured interval.

Backport of: #18211

@isaacs isaacs left a comment

LGTM, this does do what it says :)

Member

@chargome chargome left a comment

Thanks for fixing!

@andreiborza andreiborza merged commit ad0ce51 into develop Nov 17, 2025
195 checks passed
@andreiborza andreiborza deleted the ab/fix-log-flush-delaying branch November 17, 2025 09:06
andreiborza added a commit that referenced this pull request Nov 17, 2025
The flush timeout was being reset on every incoming log, preventing flushes when
logs arrived continuously. Now, the timer starts on the first log and won't
get reset, ensuring logs flush within the configured interval.

Backport of: #18211
andreiborza added a commit that referenced this pull request Nov 17, 2025
…ng (#18214)

The flush timeout was being reset on every incoming log, preventing
flushes when logs arrived continuously. Now, the timer starts on the
first log and won't get reset, ensuring logs flush within the configured
interval.

Backport of: #18211

Development

Successfully merging this pull request may close these issues.

Logs: Not sent when there are always logs being collected under the flush timeout

4 participants