
Test benchmark theory #8668

Merged · 2 commits into develop · Jan 4, 2024

Conversation

@hubertp (Contributor) commented Jan 3, 2024

There is a minor but consistent uptick in our benchmarks. Testing Pavel's theory that it is due to logging.

The slowdown of VectorBenchmarks occurred after #8620:

[image: benchmark chart showing the VectorBenchmarks slowdown after #8620]

Commit: There is a minor but consistent uptick in our benchmarks. Testing Pavel's theory that it is due to logging.
@hubertp added the "CI: Ready to merge" and "CI: No changelog needed" labels on Jan 3, 2024
@Akirathan removed the "CI: Ready to merge" label on Jan 3, 2024
@Akirathan (Member) commented:

I have removed the "Ready to merge" label. Please, let's first run benchmarks on this PR manually and compare them with the results on develop before merging. The only case in which merging this change to develop eagerly makes sense is if we want to observe the VectorBenchmarks over a longer period. But even then, it is possible to manually schedule benchmarks on this PR multiple times and compare all the results with the results from develop.

TL;DR: Let's not be hasty with the merge. Let's first make sure we understand what exactly causes the slowdown.

@Akirathan (Member) commented Jan 3, 2024

Engine benchmarks manually scheduled in https://github.com/enso-org/enso/actions/runs/7400803887

```diff
@@ -73,7 +73,9 @@ public void initializeBenchmark(BenchmarkParams params) throws Exception {
 to_array vec = vec.to_array
 slice vec = vec.slice
 fill_proxy proxy vec =
-size v = vec.length
+size v =
```
A Member commented on the diff:

The PR description says:

> Testing Pavel's theory that it is due to logging.

How is the logging related to this change?

A Member replied:

Because the parameter v is unused: if there is no logger (i.e., NopLoggingProvider), no warning is produced, but if there is org.enso.logger.TestLogProvider, a warning is printed to stderr. That is just a theory. Generally, I think that the size of the data for this particular benchmark is too small; the average time of a single iteration is 0.0017 ms.
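
To make the "too small" concern concrete, here is a minimal JMH sketch. It is an editorial illustration, not code from the Enso repository; the class, field, and benchmark names are invented. It shows how a single warning line written to stderr can dwarf a workload whose genuine cost is on the order of the 0.0017 ms (about 1700 ns) iteration mentioned above:

```java
package bench;

import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@State(Scope.Benchmark)
public class TinyBenchmarkSketch {

    private final long[] data = new long[200];

    // Baseline: a few hundred nanoseconds of real work, the same order of
    // magnitude as the engine benchmark iterations discussed in this thread.
    @Benchmark
    public long sumOnly() {
        long acc = 0;
        for (long x : data) acc += x;
        return acc; // returning the result prevents dead-code elimination
    }

    // Identical work plus one formatted line on stderr per invocation.
    // A write like this typically costs microseconds, so it dominates the
    // measurement and shows up as a large relative "regression".
    @Benchmark
    public long sumWithStderrWarning() {
        System.err.println("WARN: unused parameter `v`"); // stand-in for a logger warning
        long acc = 0;
        for (long x : data) acc += x;
        return acc;
    }
}
```

Whether the real warning fires on the measured path at all is exactly what removing it (see 8b2a8d7 below) tests; the sketch only illustrates why benchmarks this small cannot absorb incidental I/O.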

@Akirathan (Member) commented:

Looking into other engine benchmarks, I noticed there are several other slowdowns after #8620:

  • VectorBenchmarks.
  • org_enso_interpreter_bench_benchmarks_semantic_StringBenchmarks_lengthOfStrings
  • TypePatternBenchmarks.

All of these are engine benchmarks, and they are small: a single iteration takes under 1 ms. These benchmarks are also quite stable, and after #8620 we can see a difference of at least +10% that is just as stable.
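
For scale (an editorial note, not part of the original comment): a stable +10% on an iteration that takes under 1 ms is less than 100 µs in absolute terms, and for the 0.0017 ms benchmark discussed above it amounts to 0.0017 ms × 0.10 = 0.00017 ms, i.e. roughly 170 ns, small enough that even minor incidental work on the measured path can fully account for it.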

I have an alternative theory. Apart from changing the log providers, I also introduced module patching (options like --patch-module, --add-exports, etc.) so that the runtime.jar fat jar does not have to be reassembled. I might have been too hasty there. Maybe there is some observable overhead when you are too creative with module patching and the JVM does not like it. I will try to revert these changes and run the engine benchmarks with the built runtime.jar fat jar, to mimic the runtime properties of the engine distribution. A hedged sketch of the two launch modes follows below.
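
For readers unfamiliar with the mechanism, here is a hypothetical sketch of the two launch modes being compared. The JVM flags (--patch-module, --add-exports) are real, but the module name, paths, jar names, and main class are invented for illustration; the actual Enso build wiring differs:

```java
import java.io.IOException;
import java.util.List;

public class LaunchModeSketch {

    // Mode 1: module patching, the approach introduced in #8620. Exploded
    // class directories are grafted onto a named module at JVM startup, so
    // the fat jar never has to be reassembled during development.
    private static final List<String> PATCHED_LAUNCH = List.of(
            "java",
            "--patch-module", "org.enso.runtime=runtime/target/classes",          // hypothetical module + path
            "--add-exports", "org.enso.runtime/org.enso.interpreter=ALL-UNNAMED", // hypothetical package
            "-jar", "benchmarks.jar");                                            // hypothetical harness jar

    // Mode 2: the pre-#8620 setup. Everything is assembled into the
    // runtime.jar fat jar first, which mimics the runtime properties of the
    // engine distribution.
    private static final List<String> FAT_JAR_LAUNCH = List.of(
            "java", "-cp", "runtime.jar", "org.example.BenchMain");               // hypothetical main class

    public static void main(String[] args) throws IOException, InterruptedException {
        for (List<String> command : List.of(PATCHED_LAUNCH, FAT_JAR_LAUNCH)) {
            new ProcessBuilder(command).inheritIO().start().waitFor();
        }
    }
}
```

The revert mentioned below (72cbb39) switches the benchmarks back to the second style of launch.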

Commit: This partially reverts 7443683
@Akirathan (Member) commented:

Engine benchmarks manually scheduled in https://github.com/enso-org/enso/actions/runs/7400803887

Getting rid of the warning in org.enso.interpreter.bench.benchmarks.semantic.VectorBenchmarks.averageOverVector (8b2a8d7) did not make any difference in the benchmark results.

@Akirathan (Member) commented Jan 4, 2024

Manually scheduling engine benchmarks after 72cbb39 in https://github.com/enso-org/enso/actions/runs/7410754751

@Akirathan (Member) commented Jan 4, 2024

A comparison of the engine benchmarks from https://github.com/enso-org/enso/actions/runs/7410754751 to the benchmarks on develop is attached as generated_site.zip. 72cbb39 restored the performance to its previous values, so the slowdown was most likely caused by the module patching.

Let's merge this PR.

@Akirathan added the "CI: Ready to merge" label on Jan 4, 2024
@mergify bot merged commit 41fe87f into develop on Jan 4, 2024
32 of 34 checks passed
@mergify bot deleted the wip/hubert/test-benchmark-theory branch on January 4, 2024 at 16:28
Labels: CI: No changelog needed, CI: Ready to merge
3 participants