
Benchmark downloader script should take the latest score, not the smallest #8707

Merged
merged 1 commit into develop from wip/akirathan/fix-bench-data on Jan 8, 2024

Conversation

@Akirathan (Member) commented on Jan 8, 2024

Thanks to observations from @JaroslavTulach, I have noticed that our benchmark results pages (for both the engine and the stdlib) contain a lot of nonsensical data.

Problem description

Every benchmark run generates a bench-results.xml file with a format like this:

        <case>
            <label>org.enso.benchmarks.generated.Startup.empty_startup</label>
            <scores>
                <score>2823.877618</score>
            </scores>
        </case>

If the runner is not clean, the bench-results.xml file is not rewritten, but rather a new score value is appended, like so:

        <case>
            <label>org.enso.benchmarks.generated.Startup.empty_startup</label>
            <scores>
                <score>2823.877618</score>
                <score>3206.157912</score>
            </scores>
        </case>

The procedure in the bench_download.py script used to choose the lowest of these numbers:
https://github.com/enso-org/enso/blob/develop/tools/performance/engine-benchmarks/bench_download.py#L280-L284
This is obviously wrong: a benchmark's score should reflect its most recent run, not the minimum across stale runs. This PR fixes it by choosing the latest score, that is, the last value in the list.
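
To illustrate the intended selection, here is a minimal Python sketch. This is not the actual bench_download.py code; the function name latest_scores and the returned dict shape are made up for this example:

    import xml.etree.ElementTree as ET

    def latest_scores(bench_results_path: str) -> dict[str, float]:
        """Map each benchmark label to the score of its most recent run."""
        root = ET.parse(bench_results_path).getroot()
        scores: dict[str, float] = {}
        for case in root.iter("case"):
            label = case.findtext("label")
            values = [float(score.text) for score in case.iter("score")]
            if label and values:
                # Scores are appended in run order, so the last <score>
                # element is the latest one -- take it instead of min(values).
                scores[label] = values[-1]
        return scores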

TL;DR

All our benchmark data from runners that were not clean are wrong! In the picture below, the old (wrong) value is on the right and the corrected one is on the left:
[screenshot: benchmark chart comparing the old and the corrected values]

The data was fetched with this snippet:
[screenshot: the data-fetching snippet]

@Akirathan added the CI: No changelog needed label on Jan 8, 2024
@Akirathan (Member, Author) commented on Jan 8, 2024

I have just regenerated the results at https://enso-org.github.io/engine-benchmark-results/. They are now correct and up to date.

@Akirathan added the CI: Ready to merge label on Jan 8, 2024
mergify bot merged commit 4983550 into develop on Jan 8, 2024 (33 of 36 checks passed)
mergify bot deleted the wip/akirathan/fix-bench-data branch on Jan 8, 2024 at 19:17
@JaroslavTulach (Member) commented:

Taking the last value is a good hotfix for now, but we also need a fix in the CI. There is removal code at

// Remove the benchmark reports. They are not meant currently to be incrementally

but it only works for engine/runtime benchmarks.

We need to make sure it also works for std-benchmarks. Why not remove bench*xml in the whole repository, @mwu-tow?
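
For illustration, a hedged Python sketch of what such a repository-wide cleanup could look like. The real cleanup step lives in the CI/build scripts; the function name remove_stale_bench_reports and the exact glob pattern are assumptions here:

    from pathlib import Path

    def remove_stale_bench_reports(repo_root: str) -> None:
        """Delete leftover bench*.xml reports anywhere under the repository."""
        for report in Path(repo_root).rglob("bench*.xml"):
            # A stale report from an unclean runner would otherwise get new
            # scores appended to it; removing it forces a fresh file.
            report.unlink()
            print(f"Removed stale benchmark report: {report}")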

mergify bot pushed a commit that referenced this pull request on Jan 18, 2024