
Added multi-os runners for benchmark & implemented luau analyze #542

Merged: 209 commits into luau-lang:master on Jun 24, 2022

Conversation

@AllanJeremy (Contributor) commented Jun 16, 2022

Summary

Added benchmark runners for the following operating systems (a workflow sketch follows the list):

  • Ubuntu (latest)
  • macOS (latest)
  • Windows (latest) x64
  • Windows (latest) Win32
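
The runner matrix itself is plain GitHub Actions configuration. Below is a minimal sketch of what such a matrix could look like; the workflow name, build commands, and Win32 handling are illustrative assumptions, not the contents of this PR's actual build.yml.

```yaml
# Hypothetical sketch of a multi-OS benchmark job; names and commands are illustrative.
name: benchmark

on:
  push:
    branches:
      - master   # the PR limits the trigger to pushes to master

jobs:
  benchmark:
    strategy:
      fail-fast: false   # keep the other OS runs alive if one fails
      matrix:
        os: [ubuntu-latest, macos-latest, windows-latest]
        arch: [x64]
        include:
          - os: windows-latest   # additional 32-bit Windows configuration
            arch: Win32
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v3
      - name: Build in Release mode
        run: |
          cmake . -DCMAKE_BUILD_TYPE=Release
          cmake --build . --config Release
```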

Implemented luau analyze static analysis on LuauPolyfill's Map implementation (a measurement sketch follows the list):

  • Runs static analysis and measures time taken
  • Runs static analysis under cachegrind and measures time taken
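
A rough sketch of what those two measurements could look like as additional steps in the job above, assuming a bash shell, a built luau-analyze binary in the working directory, and an illustrative path for the Map polyfill file (none of these names are taken from the PR):

```yaml
      # Hypothetical steps; the benchmark file path and binary location are assumptions.
      - name: Time luau-analyze on the Map polyfill
        shell: bash
        run: |
          # wall-clock timing of one static-analysis pass
          time ./luau-analyze bench/other/LuauPolyfillMap.lua
      - name: Time luau-analyze under Cachegrind
        if: runner.os == 'Linux'   # Cachegrind is only practical on the Linux runner
        shell: bash
        run: |
          sudo apt-get install -y valgrind
          # instruction-level measurement, much less noisy than wall-clock time
          valgrind --tool=cachegrind ./luau-analyze bench/other/LuauPolyfillMap.lua
```

The Cachegrind step is gated to Linux, which matches the later point in the discussion that a Cachegrind-only approach would be Linux only.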

Test runs

The only changes since those runs are enabling the pushing of results and updating the workflow trigger to run only on pushes to master.


Resolves #527
Resolves #529
Resolves #539
Resolves #540
Resolves #541

TODO: Test this upstream
Removed duplicate storage of benchmark output files; now using the one from the README as opposed to the one from build.yml.
@AllanJeremy marked this pull request as ready for review June 16, 2022 15:17
@zeux (Collaborator) commented Jun 17, 2022

Any sense as to what the variance is on Windows/macOS? The Linux data so far makes me think we might have to refocus on Cachegrind exclusively (which would be Linux only obviously).

Review thread on bench/measure_time.py (outdated, resolved)
@AllanJeremy (Contributor, Author) commented Jun 24, 2022

> Any sense as to what the variance is on Windows/macOS? The Linux data so far makes me think we might have to refocus on Cachegrind exclusively (which would be Linux only obviously).

@zeux The variance on Windows and macOS seems erratic to me. On another note, though, I would presume the performance of Windows and macOS builds would differ because of differences in how the two OSes handle resources.

On the other hand, it may make sense to refocus on Cachegrind (if we can use the performance of the build on a single OS as a baseline metric for general performance). However, does that mean we would get rid of the non-Cachegrind runs on Linux, or only the Windows and macOS jobs?

@zeux (Collaborator) commented Jun 24, 2022

The performance is definitely going to be different based on OS, it's just that any build that has a high variance in the measurement is likely not going to be a useful data source, as our optimizations tend to be rather small individually.

That said, I think the best path for now is to merge this, assess the variance across more than just a couple of builds, and then make the judgment as to which data sources to keep.

@zeux merged commit 5e405b5 into luau-lang:master Jun 24, 2022