Simplify CI benchmark comparison development #9638

Open · gruuya opened this issue Mar 16, 2024 · 1 comment
Labels: enhancement (New feature or request)

Comments

gruuya (Contributor) commented Mar 16, 2024

Is your feature request related to a problem or challenge?

The present CI benches use the issue_comment event type and are triggered by a /benchmark keyword in a PR comment.

This now works, but there are some issues with the dev experience around it, as demonstrated in #9620.

In particular, the issue_comment event type always runs the workflow file as it exists on the default (i.e. main) branch, meaning any changes introduced to it in a PR can't be tested until that PR is merged, which is suboptimal.
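For context, the current trigger is roughly of this shape (a minimal sketch, not the actual Benchmarks workflow file; the bench.sh entry point is a placeholder):

```yaml
# Sketch of the current issue_comment-based trigger (illustrative only).
# issue_comment workflows always run the definition from the default branch.
on:
  issue_comment:
    types: [created]

jobs:
  benchmark:
    # react only to PR comments that contain the /benchmark keyword
    if: ${{ github.event.issue.pull_request && contains(github.event.comment.body, '/benchmark') }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./benchmarks/bench.sh   # hypothetical entry point
```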

Describe the solution you'd like

The proper way to test new changes to this workflow is to use the pull_request event type, which would enable a tight feedback loop during development (i.e. testing the workflow version that is in the PR code). However, to my knowledge this event type can't then be triggered by a PR comment (the sole purpose of which is to reduce noise, i.e. to avoid benchmarking and commenting on every PR/change).

Instead, the pull_request-based workflow can be made conditional upon a label.
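A minimal sketch of what that could look like (the "benchmark" label name and bench.sh entry point are assumptions, not existing names):

```yaml
# Sketch of a pull_request-based trigger gated on a label.
on:
  pull_request:
    # "labeled" is included so that adding the label itself starts a run
    types: [opened, synchronize, labeled]

jobs:
  benchmark:
    # skip the job unless the PR carries the benchmark label
    if: contains(github.event.pull_request.labels.*.name, 'benchmark')
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./benchmarks/bench.sh   # hypothetical entry point
```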

In addition, the Benchmarks workflow could be triggered through a workflow_call event from inside the Rust workflow (which itself has a pull_request trigger). This would ensure that the benches run only if the build/tests pass. One downside, though, is that the whole chain would need to be kickstarted again (by a push or manually) even after the label has been added, in order to get the first benchmark results. Every subsequent run would then perform benchmarks (as long as the label is present).
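A rough sketch of that wiring (file names, job names and the "benchmark" label below are illustrative, not the actual workflows):

```yaml
# Sketch of wiring Benchmarks into the Rust workflow via workflow_call.

# .github/workflows/benchmarks.yml
on:
  workflow_call:

jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./benchmarks/bench.sh   # hypothetical entry point
---
# .github/workflows/rust.yml (already triggered by pull_request)
on:
  pull_request:

jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: cargo test --workspace

  benchmark:
    # run benches only after build/tests pass, and only with the label;
    # with the default pull_request types, adding the label alone does not
    # start a new run, hence the "kickstart" caveat above
    needs: build-and-test
    if: contains(github.event.pull_request.labels.*.name, 'benchmark')
    uses: ./.github/workflows/benchmarks.yml
```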

Describe alternatives you've considered

Continue with multi-step development, since this is probably not going to be changed that often.

Additional context

No response

gruuya (Contributor, Author) commented Mar 29, 2024

Related discussion: https://github.com/orgs/community/discussions/59389
