Performance regression in RequestContext activations #27735
Comments
Do we know why this actually happens?
I don't know exactly, sorry. I only remembered similar issues we had in the past. It would be great to make it easier to catch such a mistake - it's very misleading, and clearly anyone could fall into it.
There was a slightly similar issue with Infinispan and other middleware projects that cached TRACE enabled into static final variables. These would turn out to be true at native executable runtime, even if the executable was built with TRACE disabled. The case here seems slightly different, since the value of TRACE enabled is cached in an instance variable rather than a static variable, but maybe the cause is the same. However, as noted in infinispan/infinispan#8921, we eventually moved away from caching this at all, to enable the optimizations in #13376 to kick in.
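The caching pitfall described above can be sketched as follows. This is a hypothetical illustration using plain `java.util.logging` (the class and field names are invented, not actual Infinispan or Quarkus code): a trace-enabled flag captured while the logger is still at a permissive level stays frozen, no matter what level is configured afterwards.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Hypothetical sketch of the "cached TRACE flag" pitfall: a flag read
// once (e.g. during native image build or very early startup) never
// reflects the logging level configured later at runtime.
public class TraceFlagPitfall {

    static final Logger LOG = Logger.getLogger("demo");

    static class Handler {
        // BAD: the flag is captured once, when the instance is created.
        final boolean traceEnabled = LOG.isLoggable(Level.FINEST);
    }

    public static void main(String[] args) {
        // Early in startup the level is still permissive...
        LOG.setLevel(Level.ALL);
        Handler h = new Handler(); // flag frozen as true

        // ...then the real configuration is applied.
        LOG.setLevel(Level.INFO);

        // The cached flag still reports the init-time value, while a
        // fresh check reflects the actual configuration.
        System.out.println("cached=" + h.traceEnabled
                + " fresh=" + LOG.isLoggable(Level.FINEST));
    }
}
```

Running this prints `cached=true fresh=false`, which is exactly the mismatch that makes guarded trace statements execute even though TRACE is disabled.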
Well done with this fix. I have a non-reactive performance regression test that did not catch this one. #27743 suggests that the regression happens only in reactive mode? Could you please confirm this?
@aldettinger, the regression happens with any usage of RESTEasy Reactive. We have not tested RESTEasy Classic, but it is likely affected as well.
Ok, that's interesting to know. Many thanks @geoand . |
👍🏼 |
Great catch! A bit spooky that you can't trust the code you write to always be properly evaluated 😨 Is there any way to check for this, or to know which sorts of things like this might happen, if you're developing on Quarkus?
This kind of thing should never happen in user code. |
…evel This essentially prevents issues like quarkusio#27735, where a piece of Quarkus code executing very early in the startup sequence would improperly determine the minimum logging level - i.e. ALL was used instead of the Quarkus build-configured minimum level.
Describe the bug
There is a performance regression caused by #27249: the logging statements allocate various `java.util.stream.SliceOps$1`, `byte[]`, and `StringBuilder` instances to encode numbers as hex and to capture and format stack traces. These logging statements are protected by a check for TRACE being enabled, but it would seem the value of this check is true at the point it is captured (during the ArC recorder), even when TRACE is not configured.
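The safe pattern implied by the bug description is to evaluate the TRACE guard at each call site rather than capturing it once during recorder/bean construction. The sketch below is hypothetical (class and method names invented; plain `java.util.logging` stands in for the JBoss Logging API Quarkus actually uses):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Hypothetical sketch: evaluate the TRACE guard on every invocation,
// so the expensive work (hex-encoding ids, capturing and formatting a
// stack trace) is truly skipped when TRACE is disabled at runtime.
public class RequestContextSketch {

    static final Logger LOG = Logger.getLogger("io.quarkus.demo");

    void activate(Object state) {
        // Guard checked here, at the call site - not cached in a field
        // populated while the level was still ALL.
        if (LOG.isLoggable(Level.FINEST)) {
            LOG.finest("Activated context "
                    + Integer.toHexString(System.identityHashCode(state)));
        }
    }

    public static void main(String[] args) {
        LOG.setLevel(Level.INFO); // simulate the configured runtime level
        new RequestContextSketch().activate(new Object());
        // No trace-level allocation or formatting happened above.
        System.out.println("guard=" + LOG.isLoggable(Level.FINEST));
    }
}
```

With the level at INFO this prints `guard=false` and the allocating code inside the guard never runs, which is the behavior the reporter expected from the original guarded statements.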
Expected behavior
No significant overhead should be introduced.