[CI][Bench] Combine all benchmark-related jobs in one place #20439
base: sycl
Conversation
```yaml
name: Run Benchmarks

on:
  # Only run on pull requests, when a benchmark-related files were changed.
```
Suggest "Run on pull requests only when..." - so as not to suggest that the jobs run only on pull requests.
right, done
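For reference, the kind of trigger being discussed might look roughly like this - a sketch only; the path filters below are hypothetical, not the actual list from this PR:

```yaml
name: Run Benchmarks

on:
  # Run on pull requests only when benchmark-related files were changed.
  pull_request:
    paths:
      - '.github/workflows/sycl-ur-perf-benchmarking.yml'  # hypothetical path
      - 'devops/scripts/benchmarks/**'                     # hypothetical path
  # Manual trigger path, with its own input sanitization:
  workflow_dispatch:
```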
```yaml
sanitize_inputs:
# Manual trigger (dispatch) path:
sanitize_inputs_dispatch:
  name: Sanitize inputs
```
Add a condition on the GitHub event (dispatch only).
Thanks! I've checked these `if`s and triggers about 10 times now, but I still keep finding issues in there. Hopefully it's all good now (I'll see if CI runs what I wanted).
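For context, gating a job on the triggering event is usually done with an `if:` on the job itself - a minimal sketch, not the PR's exact code (runner and step are placeholders):

```yaml
# Sketch: run the dispatch-path job only for manual triggers.
sanitize_inputs_dispatch:
  name: Sanitize inputs
  if: ${{ github.event_name == 'workflow_dispatch' }}
  runs-on: ubuntu-latest  # placeholder runner
  steps:
    - name: Show triggering event
      run: echo "Triggered by ${{ github.event_name }}"
```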
OK, it does look promising: on the PR, only the PR jobs were executed.
I also tested dispatch on my fork (https://github.com/lukaszstolarczuk/llvm/actions/runs/18778151347/job/53577517025) - I don't have the runners, but it looks promising.
A simulated nightly on my fork also looks promising: https://github.com/lukaszstolarczuk/llvm/actions/runs/18778773351
Force-pushed 0060067 to c33827e.
```yaml
build_sycl:
  name: Build SYCL
  needs: [ sanitize_inputs ]
build_sycl_dispatch:
```
As a next step I'd like to verify whether we could work with a single build job - right now I just copy-pasted the build jobs from other workflows, but they differ in parameters (e.g. sometimes we use the clang compiler and sometimes the default gcc). This should probably be discussed; we should use the same build for all benchmark jobs, if possible.
Yes, I don't know the reason for using different compilers either. It seems a bit odd to compare on-demand results against nightly ones when SYCL was built with a totally different compiler.
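One way to converge on a single build, as discussed above, would be one shared build job with the compiler passed in explicitly, so every benchmark path builds SYCL the same way. A sketch only, assuming the reusable build workflow accepts compiler inputs (the workflow name and input names here are assumptions):

```yaml
# Hypothetical: one shared build job; compiler chosen once for all benchmark jobs.
build_sycl:
  name: Build SYCL
  needs: [ sanitize_inputs ]
  uses: ./.github/workflows/sycl-linux-build.yml  # assumed reusable workflow
  with:
    cc: gcc   # one agreed-upon compiler everywhere (assumed input names)
    cxx: g++
```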
```yaml
uses: ./.github/workflows/sycl-linux-run-tests.yml
with:
  name: 'Framework test: PVC_PERF, L0, Minimal preset'
  runner: '["PVC_PERF"]'
```
@intel/llvm-reviewers-benchmarking Perhaps we should change this to a BMG runner? I guess the PVC runner is used more often for manual runs, so we wouldn't have to wait for benchmarks to finish just to test the framework...? What do you think?
Once there are two stable BMG machines, this would be a good move. For now, the PVC machine is much, much more stable than the BMG one, so I would stay with the current config. Add a TODO to consider the change in the future, though.
You're so right. TODO added.
```yaml
inputs:
  runner:
    # While it's ok to run benchmarks in this workflow, "*_PERF" machines
    # shouldn't be used on manual dispatch. Instead, use sycl-ur-perf-benchmarking.yml
```
This comment can be removed...?
I would leave it, so that no one adds these perf machines here.
Instead we could add a check on the runner and error out if it's not a supported one.
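Such a check could be a small guard step that fails early when a reserved machine is requested - a sketch, assuming the runner label arrives as an `inputs.runner` string:

```yaml
# Sketch: fail fast if a "*_PERF" machine is requested on manual dispatch.
- name: Validate runner input
  if: ${{ github.event_name == 'workflow_dispatch' }}
  shell: bash
  run: |
    case '${{ inputs.runner }}' in
      *_PERF*)
        echo '::error::"*_PERF" machines are reserved for sycl-ur-perf-benchmarking.yml'
        exit 1
        ;;
    esac
```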
Why shouldn't they be used? We aren't really disallowing that (one can still push a branch with a modified sycl-linux-run-tests.yml and then do a manual run from that branch).
The whole point of this workflow_dispatch is debugging CI / reducing CI load when making PRs against it. If the UR benchmarking workflows do call sycl-linux-run-tests.yml with those parameters, then they are needed here for such experiments.
1. Right now, changing only the benchmark framework triggers the full SYCL build. Thanks to this change, only the relevant jobs are triggered when testing the framework.
2. The nightly build was separated.

I believe keeping everything in one place makes it easier to maintain changes. No changes in logic/builds were made in this commit - only minor cleanups (names) plus changed triggers.
…her SYCL workflows
Force-pushed c33827e to 5930203.
```yaml
# Nightly benchmarking path:
build_nightly:
  name: '[Nightly] Build SYCL'
```
I didn't look at the code for the jobs in depth, but why do we need separate jobs depending on the workflow call type?
The non-benchmarking workflow LGTM; my inline comment in sycl-linux-run-tests.yml doesn't affect anybody outside the benchmarking CI folks, so it's irrelevant.
The benchmarking workflow itself doesn't do anything dangerous or totally crazy, so I'd be fine merging as-is. That said, I didn't review the benchmarking jobs in detail (i.e. maintainability, best practices, etc.) - that's up to the folks involved with the benchmarking CI.
Please make sure @sarnex is fine with this before merging, though.
My only blocking question is about the justification for the duplication.
plus cleanups. Each commit makes its own change - the most important part is the last commit.