
Add Warning Log for 503 Responses Due to Thread Pool Exhaustion #40486

Merged
merged 1 commit into quarkusio:main on May 7, 2024

Conversation

cescoffier (Member) commented May 7, 2024

This commit introduces a warning log message when a 503 response is returned due to thread pool exhaustion. Previously, no server-side log was generated in such scenarios.

The log message is categorized as a warning rather than an error, as this is not an exceptional situation: despite the worker thread pool being exhausted, reactive endpoints and virtual threads can still operate successfully.

@maxandersen @franz1981 As discussed yesterday.
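
For illustration only, here is a minimal sketch of the behaviour described above (the class and method names are hypothetical and this is not the actual diff; it only assumes the Vert.x HTTP API and JBoss Logging, both already used by Quarkus):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.RejectedExecutionException;

import org.jboss.logging.Logger;

import io.vertx.core.http.HttpServerRequest;

public class WorkerDispatchSketch {

    private static final Logger LOG = Logger.getLogger(WorkerDispatchSketch.class);

    private final ExecutorService workerPool;

    public WorkerDispatchSketch(ExecutorService workerPool) {
        this.workerPool = workerPool;
    }

    void dispatch(HttpServerRequest request, Runnable handler) {
        try {
            workerPool.execute(handler);
        } catch (RejectedExecutionException e) {
            // Previously this path returned a 503 silently; the change adds a server-side warning.
            LOG.warnf("Unable to dispatch request %s %s to the worker thread pool (pool exhausted), "
                    + "returning 503 Service Unavailable", request.method(), request.path());
            request.response().setStatusCode(503).end();
        }
    }
}
```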

@cescoffier cescoffier requested a review from geoand May 7, 2024 06:47
franz1981 (Contributor) commented May 7, 2024

Separately: given that jboss-threads can swallow the OOM caused by thread creation, adding it (as suppressed? @dmlloyd) to the RejectedExecutionException, it would be great to have a configuration toggle that checks whether such an OOM was thrown and rethrows it (or just logs it) so that ExitOnOutOfMemoryError/CrashOnOutOfMemoryError can work as expected.
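
For illustration, a minimal sketch of that toggle idea (the system property name and class are hypothetical, not an existing jboss-threads or Quarkus configuration option):

```java
import java.util.concurrent.RejectedExecutionException;

public final class RejectionOomeCheck {

    // Hypothetical toggle, not an existing jboss-threads or Quarkus property.
    private static final boolean RETHROW_OOME =
            Boolean.getBoolean("sketch.rethrow-oome-on-rejection");

    private RejectionOomeCheck() {
    }

    /**
     * Looks for an OutOfMemoryError (e.g. "unable to create native thread") attached
     * as a suppressed exception to the rejection, then rethrows it or just logs it.
     */
    public static void handleRejection(RejectedExecutionException rejection) {
        for (Throwable suppressed : rejection.getSuppressed()) {
            if (suppressed instanceof OutOfMemoryError) {
                if (RETHROW_OOME) {
                    throw (OutOfMemoryError) suppressed;
                }
                System.err.println("Task rejected because thread creation failed: " + suppressed);
                return;
            }
        }
    }
}
```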

cescoffier (Member, Author)

@franz1981 It's unclear whether the OOM will be attached (as suppressed or as the cause), and what the right place to handle it would be.

I agree it needs to be handled and not swallowed, but we would need to observe what actually happens in that situation (I know it's not going to be easy to observe a nearly dead JVM).

maxandersen (Member) left a comment


+1, since the only other way to realize this is happening is to enable the access log and assume the 503 entries are coming from thread pool exhaustion.

Makes sense to have this exceptional but possibly temporary issue at least visible.

quarkus-bot bot commented May 7, 2024

Status for workflow Quarkus CI

This is the status report for running Quarkus CI on commit e58595b.

✅ The latest workflow run for the pull request has completed successfully.

It should be safe to merge provided you have a look at the other checks in the summary.

You can consult the Develocity build scans.

dmlloyd (Member) commented May 7, 2024

> Separately: given that jboss-threads can swallow the OOM caused by thread creation, adding it (as suppressed? @dmlloyd) to the RejectedExecutionException, it would be great to have a configuration toggle that checks whether such an OOM was thrown and rethrows it (or just logs it) so that ExitOnOutOfMemoryError/CrashOnOutOfMemoryError can work as expected.

Yes it is added as a suppressed exception. I could add different handling into jboss-threads though (for example I could pass OOME to the default thread exception handler) if that would help.
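
A minimal sketch of that suggestion, handing the OOME to the default thread exception handler instead of only attaching it as a suppressed exception (hypothetical helper, not actual jboss-threads code):

```java
public final class OomeReporting {

    private OomeReporting() {
    }

    /**
     * Hands an OutOfMemoryError raised during thread creation to the default
     * uncaught exception handler instead of silently swallowing it.
     */
    public static void reportThreadCreationFailure(OutOfMemoryError oome) {
        Thread.UncaughtExceptionHandler handler = Thread.getDefaultUncaughtExceptionHandler();
        if (handler != null) {
            handler.uncaughtException(Thread.currentThread(), oome);
        } else {
            // No default handler installed: at least print it rather than losing it.
            oome.printStackTrace();
        }
    }
}
```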

@cescoffier cescoffier merged commit 4331897 into quarkusio:main May 7, 2024
51 checks passed
@quarkus-bot quarkus-bot bot added this to the 3.11 - main milestone May 7, 2024
franz1981 (Contributor)

Yep @dmlloyd, that would help I think: right now the SapMachine JVM seems to handle OOM due to thread exhaustion correctly, but there is nothing in OpenJDK for this... and in containers, it can happen!

@cescoffier cescoffier deleted the log-503-on-exhaustion branch May 21, 2024 08:30