Performance dashboard: track and document performance improvements. #689

Closed
moul opened this issue Apr 2, 2023 · 7 comments · Fixed by #881

moul (Member) commented Apr 2, 2023

Recognize the significance of performance in our project by creating dedicated documentation, review rules, and contributing guidelines.

I'm asking @peter7891 to define basic rules to experiment with in his PRs.

Idea curation framework: define a framework for tracking performance improvements. There are many good ideas for improving performance, so it is important to prioritize them and to consider reasons not to optimize certain parts.

Review framework: establish review rules, including the format for sharing performance-improvement differences and a clear set of constraints associated with each improvement. Perhaps as updates to CONTRIBUTING.md?

Evolution framework: It would be beneficial to have a performance history with graphs depicting progress over time to identify regressions and highlight performance improvements.

Bonus: write a technical blog post outlining the upcoming challenges and our proposed framework, and diving into selected parts of the project.

Relevant discussions:

thehowl self-assigned this Apr 4, 2023
thehowl (Member) commented Apr 4, 2023

I've had a bit of a discussion with Petar on how to tackle this; he found a Continuous Benchmarking GitHub action that we could use, or at least base ourselves on. It has ways to display graphs and highlight regressions in benchmarks, so I think it's an important first step. I'll set up the action sometime soon; in the meantime, you can expect many upcoming benchmarks from Petar to give us metrics to measure :)
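
For reference, continuous-benchmarking actions for Go typically parse the standard output of `go test -bench`. A minimal sketch of a benchmark in that shape, with a placeholder workload (not actual gno code):

```go
package bench_test

import (
	"strings"
	"testing"
)

// BenchmarkJoin is a placeholder; in practice the loop body would
// exercise gno code (e.g. the VM or the parser). Each run emits a line
// like "BenchmarkJoin-8 ... ns/op" that the action can parse, chart
// over time, and compare against previous commits.
func BenchmarkJoin(b *testing.B) {
	parts := []string{"gno", "land", "r", "demo"}
	b.ReportAllocs() // also track allocations per operation
	for i := 0; i < b.N; i++ {
		_ = strings.Join(parts, "/")
	}
}
```

Running `go test -bench=. ./...` on each commit would then produce the metrics to feed the graphs.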

moul (Member, Author) commented Apr 5, 2023

Great, let's give it a try for a while and see how it goes.

ajnavarro (Contributor) commented Apr 17, 2023

I gave the continuous benchmarking action you mentioned here a try and discovered some issues:

  • Using standard GitHub Actions runners is not possible: after some runs, the benchmark workflow was killed due to the amount of resources used. To avoid this, I started using a self-hosted runner. This will make benchmark results more consistent and will make it possible to run these tests for each commit if needed. One problem with that: GitHub does not recommend using self-hosted runners on public repositories.
  • Some benchmarks were failing. I fixed them on the mentioned PR.
  • Some benchmarks can be removed; they test the performance of Go itself.
  • Some benchmarks are timing out after 10 minutes. We should refactor them to take less time, or keep a separate batch of slow benchmarks to be executed less often (as @moul mentioned); one way to split them out is sketched after this list.
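
A minimal sketch of that split, assuming we stay within the standard testing package (the benchmark name and workload below are hypothetical): guard expensive benchmarks behind `testing.Short()`, so the frequent CI job passes `-short` and a scheduled job runs the full set.

```go
package bench_test

import (
	"testing"
	"time"
)

// BenchmarkSlowPath stands in for any benchmark that approaches the
// 10-minute timeout. With this guard, the frequent job can run
// `go test -short -bench=.` and skip it, while a scheduled job runs
// `go test -bench=.` (no -short) to execute the slow batch less often.
func BenchmarkSlowPath(b *testing.B) {
	if testing.Short() {
		b.Skip("skipping slow benchmark in -short mode")
	}
	for i := 0; i < b.N; i++ {
		time.Sleep(time.Millisecond) // placeholder for an expensive workload
	}
}
```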

thehowl (Member) commented Apr 18, 2023

Seems weird that we're getting blocked for using too many resources (looking through your runs, they were killed at different times with SIGTERM?). Is this documented anywhere in GitHub Actions' docs? (I couldn't find anything.) Might be worth a shot trying to trap SIGTERM and seeing if we can still use GitHub Actions?
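
For reference, a minimal sketch of what trapping SIGTERM could look like in a small Go wrapper around the benchmark run; `runBenchmarks` here is a hypothetical stand-in for invoking the real suite:

```go
package main

import (
	"fmt"
	"os"
	"os/signal"
	"syscall"
	"time"
)

// runBenchmarks is a placeholder for launching the real benchmark suite.
func runBenchmarks() {
	time.Sleep(2 * time.Second)
}

func main() {
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGTERM)

	done := make(chan struct{})
	go func() {
		runBenchmarks()
		close(done)
	}()

	select {
	case s := <-sigs:
		// Leave evidence in the workflow log that the runner, not the
		// benchmark, terminated the job.
		fmt.Fprintf(os.Stderr, "caught %v: the runner is killing the job\n", s)
		os.Exit(143) // conventional exit code for SIGTERM (128+15)
	case <-done:
	}
}
```

Whether the runner leaves enough time after SIGTERM for the log line to flush is exactly what we'd be testing.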

If we do have to take the self-hosted route, I think we can at the very least have the benchmarks run on master without issue. As for pull requests, AFAIK action runs from external contributors still have to be launched manually from the PR, so it may not be much of an issue after all?

ajnavarro (Contributor) commented

> Seems weird that we're getting blocked for using too many resources

> Is this documented anywhere in GitHub Actions' docs?

This is the only info I found about the runners dying with code 143: actions/runner-images#6680

> I think we can at the very least have the benchmarks run on master without issue

Yeah, I was thinking of that as a solution: any code merged into master could be considered safe to run.

thehowl (Member) commented Apr 18, 2023

It's pretty BS that this behaviour is only documented on that issue in vague terms. Nice to see Microsoft hasn't abandoned its old mantra of sweeping bad bits of their software under the rug until they come up as unpleasant surprises.

I think that if we set up a machine dedicated to benchmarking, it should be fine to use it on PRs as well, since external PRs require team approval to run workflows anyway.

moul (Member, Author) commented Apr 18, 2023

> If we do have to take the self-hosted route

Custom runners for a strong CI are key to supporting contributors. Let's do it. Cc @albttx.

We've created a new repository for benchmarks and tools to track everything: https://github.com/gnolang/benchmarks. Cc @peter7891.
