Test benchmark theory #8668
Conversation
There is a minor but consistent uptick in our benchmarks. Testing Pavel's theory that it is due to logging. The slowdown of VectorBenchmarks occurred after #8620.
I have removed the "Ready to merge" label. Please, let's first run benchmarks on this PR manually and compare them with the results on develop. TL;DR: Let's not be hasty with the merge; let's first make sure that we understand what exactly causes the slowdown.
Engine benchmarks manually scheduled in https://github.com/enso-org/enso/actions/runs/7400803887
@@ -73,7 +73,9 @@ public void initializeBenchmark(BenchmarkParams params) throws Exception {
    to_array vec = vec.to_array
    slice vec = vec.slice
    fill_proxy proxy vec =
        size v = vec.length
        size v =
The PR description says:
Testing Pavel's theory that it is due to logging.
How is the logging related to this change?
Because the parameter v is unused: if there is no logger, i.e., NopLoggingProvider, no warning is produced, but if there is org.enso.logger.TestLogProvider, a warning is produced on stderr. That is just a theory. Generally, I think that the size of the data for this particular benchmark is too small; the average time of a single iteration is 0.0017 ms.
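To make the theory concrete, here is a minimal sketch in plain Java. It is not Enso's actual logging SPI; the names LogProvider, NopProvider and StderrProvider are hypothetical stand-ins for NopLoggingProvider and org.enso.logger.TestLogProvider. The point is only that one provider discards warnings while the other performs real I/O, and at roughly 0.0017 ms per iteration a 10% regression amounts to about 170 ns, so very little extra work is enough to show up.

```java
// Hypothetical sketch, not Enso's real logging SPI.
interface LogProvider {
    void warn(String message);
}

// Stand-in for NopLoggingProvider: warnings are dropped, essentially for free.
final class NopProvider implements LogProvider {
    @Override
    public void warn(String message) {
        // intentionally empty
    }
}

// Stand-in for org.enso.logger.TestLogProvider: warnings end up on stderr.
final class StderrProvider implements LogProvider {
    @Override
    public void warn(String message) {
        System.err.println("[warn] " + message);
    }
}

public class UnusedArgumentWarningDemo {
    public static void main(String[] args) {
        boolean useNop = "nop".equals(System.getProperty("log.provider", "stderr"));
        LogProvider provider = useNop ? new NopProvider() : new StderrProvider();
        // With the stderr provider, a diagnostic like "unused function argument v"
        // causes real output; with the no-op provider it costs next to nothing.
        provider.warn("Unused function argument v.");
    }
}
```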
Looking into other engine benchmarks, I noticed there are several other slowdowns after #8620:
All these benchmarks are engine benchmarks; they are small, with a very short iteration duration of under 1 ms. These benchmarks are also pretty stable, and after #8620 we can see at least a +10% difference that is stable as well. I have an alternative theory: apart from changing the log providers, I have also introduced module patching (options like …).
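For illustration, a hedged sketch of what JVM module-patching options look like when launching a runner. The exact flags and values from #8620 are not shown in this thread, so some.module, patched-classes.jar and runner.jar below are hypothetical placeholders.

```java
import java.io.IOException;
import java.util.List;

// Hypothetical launcher: the module name, jar paths and flags are placeholders,
// not the actual options introduced in #8620. Such options are processed at JVM
// startup and can change class loading for everything that runs afterwards.
public class PatchedJvmLauncher {
    public static void main(String[] args) throws IOException, InterruptedException {
        List<String> command = List.of(
            "java",
            // Replace the classes of a named module with patched ones.
            "--patch-module", "some.module=patched-classes.jar",
            // Open an internal package to unnamed modules for reflective access.
            "--add-opens", "some.module/some.pkg=ALL-UNNAMED",
            "-jar", "runner.jar");
        Process process = new ProcessBuilder(command).inheritIO().start();
        System.exit(process.waitFor());
    }
}
```

A startup-level change like this would affect all of the small engine benchmarks at once, which would be consistent with the several stable +10% regressions noted above.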
- This partially reverts 7443683
Getting rid of the warning in …
Manually scheduling engine benchmarks after 72cbb39 in https://github.com/enso-org/enso/actions/runs/7410754751
A comparison of the engine benchmarks from https://github.com/enso-org/enso/actions/runs/7410754751 to the benchmarks on develop is attached as generated_site.zip. 72cbb39 restored the performance to its previous values, so the slowdown was most likely caused by the module patching. Let's merge this PR.