Performance regression in RequestContext activations #27735

Closed
Sanne opened this issue Sep 5, 2022 · 9 comments · Fixed by #27736
Assignees
Labels
kind/bug (Something isn't working) · triage/regression
Milestone

Comments

@Sanne (Member) commented Sep 5, 2022

Describe the bug

There is a performance regression caused by #27249: the logging statements allocate various java.util.stream.SliceOps$1 instances, plus byte[] and StringBuilder objects, to encode numbers as hex and to capture and format the stack traces.

These logging statements are guarded by a check for TRACE being enabled, but the value of that check appears to be true at the point it is captured (during the ArC recorder), even when TRACE is not configured.
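
To make the pattern concrete, here is a minimal sketch of the kind of code being described, assuming JBoss Logging as the logging API. It is not the actual ArC code; the class and member names are made up for illustration.

```java
import java.util.Arrays;
import java.util.stream.Collectors;

import org.jboss.logging.Logger;

// Hypothetical illustration of the pattern described above, not the actual ArC code.
class RequestContextSketch {

    private static final Logger LOG = Logger.getLogger(RequestContextSketch.class);

    // Anti-pattern: isTraceEnabled() is evaluated once, e.g. while the recorder runs,
    // before the logging configuration has been fully applied. At that point the
    // effective level can still be ALL, so the cached flag ends up true.
    private final boolean traceEnabled = LOG.isTraceEnabled();

    void activate(Object state) {
        if (traceEnabled) {
            // Everything below allocates (stream slice ops, byte[]/StringBuilder for the
            // hex encoding, the captured stack trace) even though TRACE was never configured.
            String stack = Arrays.stream(Thread.currentThread().getStackTrace())
                    .skip(2)
                    .limit(7)
                    .map(StackTraceElement::toString)
                    .collect(Collectors.joining("\n\t"));
            LOG.tracef("Activated context 0x%s%n\t%s",
                    Integer.toHexString(System.identityHashCode(state)), stack);
        }
    }
}
```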

Expected behavior

No significant overhead should be introduced.

@mkouba (Contributor) commented Sep 6, 2022

These logging statements are guarded by a check for TRACE being enabled, but the value of that check appears to be true at the point it is captured (during the ArC recorder), even when TRACE is not configured.

Do we know why this actually happens?

@Sanne (Member, Author) commented Sep 6, 2022

These logging statements are guarded by a check for TRACE being enabled, but the value of that check appears to be true at the point it is captured (during the ArC recorder), even when TRACE is not configured.

Do we know why this actually happens?

I don't know exactly, sorry. I only remembered similar issues we had in the past. It would be great to make it easier to catch such a mistake: it's very misleading, and clearly anyone could fall into it.

@gsmet gsmet modified the milestones: 2.13 - main, 2.12.1.Final Sep 6, 2022
@galderz (Member) commented Sep 8, 2022

There was a slightly similar issue with Infinispan and other middleware projects that cached TRACE enabled into static final variables. These would turn out to be true at native executable runtime, even if the executable was built with TRACE disabled. The case here seems slightly different, since the value of TRACE enabled is cached in an instance variable rather than a static variable, but maybe the cause is the same. However, as noted in infinispan/infinispan#8921, we eventually moved away from caching this at all, to let the optimizations in #13376 kick in.
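
As a hedged illustration of the two caching variants mentioned here (a hypothetical class, not taken from Infinispan or Quarkus, again assuming JBoss Logging):

```java
import org.jboss.logging.Logger;

// Hypothetical example of the anti-pattern, not actual Infinispan or Quarkus code.
class CachedTraceFlag {

    private static final Logger LOG = Logger.getLogger(CachedTraceFlag.class);

    // If this class gets initialized at native-image build time, the flag is frozen
    // into the executable with whatever level was effective during the build.
    private static final boolean TRACE = LOG.isTraceEnabled();

    void cachedCheck() {
        if (TRACE) { // may stay true (or false) forever, regardless of runtime configuration
            LOG.tracef("cached check on %s", Thread.currentThread().getName());
        }
    }

    void uncachedCheck() {
        // Preferred: ask the logger at the call site, so the runtime configuration and the
        // log-level optimizations referenced above can apply.
        if (LOG.isTraceEnabled()) {
            LOG.tracef("uncached check on %s", Thread.currentThread().getName());
        }
    }
}
```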

@aldettinger commented

Well done with this correction. I have a non-reactive performance regression test that did not catch this one.

#27743 suggests that the regression happens only in reactive mode? Could you please confirm this?

@geoand (Contributor) commented Sep 8, 2022

@aldettinger, the regression happens with any kind of usage of RESTEasy Reactive. We have not tested RESTEasy Classic, but it is likely affected as well.

@aldettinger commented

OK, that's interesting to know. Many thanks, @geoand.

@geoand (Contributor) commented Sep 8, 2022

👍🏼

@GavinRay97 (Contributor) commented Sep 8, 2022

Great catch

A bit spooky that you can't trust the code you write to always be properly evaluated 😨

Is there any way to check for this, or to know which sorts of things like this might happen, if you're developing on Quarkus?

@geoand (Contributor) commented Sep 8, 2022

This kind of thing should never happen in user code.
The reason it happened in ArC code is that some specific parts of Quarkus are loaded before the entire logging setup is complete.

geoand added a commit to geoand/quarkus that referenced this issue Sep 9, 2022
…evel

This essentially prevents issues like quarkusio#27735, where a piece of Quarkus code
executing very early in the startup sequence would improperly determine the minimum
logging level, i.e. ALL was used instead of the Quarkus build-configured minimum level.
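
Very roughly, and without claiming this is how the actual patch works, the idea behind that commit message reads like the following sketch; the class, the field names and the INFO default are assumptions made purely for illustration:

```java
import java.util.logging.Level;

// Rough, hypothetical sketch of the idea described in the commit message above; not the real patch.
final class EarlyMinLevelSketch {

    // Assumption: start from the build-configured minimum level rather than ALL, so that
    // code running before the runtime logging setup completes sees a sane value.
    private static volatile Level effectiveMinLevel = Level.INFO;

    static void applyRuntimeConfiguration(Level runtimeMinLevel) {
        // Once the real logging configuration is applied, it takes over.
        effectiveMinLevel = runtimeMinLevel;
    }

    static boolean isTraceEnabled() {
        // TRACE maps roughly to FINEST; with INFO as the early default this returns false,
        // so TRACE-guarded code does not run by accident during startup.
        return Level.FINEST.intValue() >= effectiveMinLevel.intValue();
    }
}
```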