
Conversation

@lukaszstolarczuk (Contributor):

plus cleanups. Each commit makes its own change - the most important part is the last commit.

name: Run Benchmarks

on:
# Only run on pull requests, when benchmark-related files were changed.
Contributor:
"Run on pull requests only when..." - to avoid suggesting that jobs run only on pull requests.

Contributor (Author):

right, done
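
The agreed wording would land in the workflow's trigger section; a minimal sketch (the `paths` filter and cron time are assumptions for illustration, not taken from the actual workflow):

```yaml
name: Run Benchmarks

on:
  # Run on pull requests only when benchmark-related files were changed.
  pull_request:
    paths:
      - 'devops/scripts/benchmarks/**'  # hypothetical filter
  # Manual trigger for on-demand runs:
  workflow_dispatch:
  # Nightly benchmarking:
  schedule:
    - cron: '0 3 * * *'  # hypothetical nightly time
```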

sanitize_inputs:
# Manual trigger (dispatch) path:
sanitize_inputs_dispatch:
name: Sanitize inputs
Contributor:

Add a condition on the GitHub event (dispatch only).

Contributor (Author):

Thx! I've checked these ifs and triggers like 10 times now, but I still keep finding issues in there. Hopefully it's all good now (I'll see if CI runs what I wanted).
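
The suggested condition would be a job-level `if` on the event name; a minimal sketch, assuming the job names quoted above:

```yaml
# Manual trigger (dispatch) path:
sanitize_inputs_dispatch:
  name: Sanitize inputs
  # Run this job only when the workflow was manually dispatched.
  if: ${{ github.event_name == 'workflow_dispatch' }}
```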

Contributor (Author):

OK, it does look promising: on a PR, only the PR jobs were executed.

I also tested dispatch on my fork (https://github.com/lukaszstolarczuk/llvm/actions/runs/18778151347/job/53577517025) - I don't have the runners, but it looks promising.

A simulated nightly run on my fork also looks promising: https://github.com/lukaszstolarczuk/llvm/actions/runs/18778773351

build_sycl:
name: Build SYCL
needs: [ sanitize_inputs ]
build_sycl_dispatch:
Contributor (Author):

As a next step I'd like to verify whether we could work with a single build job. Right now I just copy-pasted build jobs from other workflows, but they differ in parameters - e.g. sometimes we use the clang compiler and sometimes (by default) gcc. This should probably be discussed; we should use the same build for all benchmark jobs, if possible.

Contributor:

Yes, I don't know the reason for using different compilers either. It seems a bit odd to compare on-demand results with nightly ones when sycl was built with a totally different compiler.


uses: ./.github/workflows/sycl-linux-run-tests.yml
with:
name: 'Framework test: PVC_PERF, L0, Minimal preset'
runner: '["PVC_PERF"]'
Contributor (Author):

@intel/llvm-reviewers-benchmarking Perhaps we should change this to the BMG runner? I guess the PVC runner is used more often for manual runs, so we wouldn't have to wait for benchmarks to finish just to test the framework...? What do you think?

Contributor:

Once there are two stable BMG machines, this would be a good move. For now, the PVC machine is much, much, much more stable than the BMG one, so I would stay with the current config. Add a TODO to consider the change in the future, though.

Contributor (Author):

You're so right. TODO added.
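
The resulting line might look something like this (a sketch; the exact TODO wording is an assumption):

```yaml
# TODO: consider switching to a BMG runner once there are two stable
# BMG machines; for now the PVC machine is much more stable.
runner: '["PVC_PERF"]'
```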


@lukaszstolarczuk lukaszstolarczuk marked this pull request as ready for review October 24, 2025 11:59
@lukaszstolarczuk lukaszstolarczuk requested review from a team as code owners October 24, 2025 11:59
inputs:
runner:
# While it's ok to run benchmarks in this workflow, "*_PERF" machines
# shouldn't be used on manual dispatch. Instead, use sycl-ur-perf-benchmarking.yml
Contributor:

This comment can be removed...?

Contributor (Author):

I would leave it, so that no one adds these perf machines here.

Contributor:

Instead we could add a check on the runner and error out if it's not a supported one.

Contributor:

Why shouldn't they be used? We aren't really disallowing that (one can still push a branch with modified sycl-linux-run-tests.yml and then have a manual run from that branch).

The whole point of this workflow_dispatch is for debugging CI / reducing CI load when making PRs to it. If UR benchmarking workflows do call sycl-linux-run-tests.yml with those parameters, then they are needed here for such experiments.
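
The proposed guard could be sketched as an early step that fails on unsupported runners (the step name and exact expression are assumptions, not the actual implementation):

```yaml
steps:
  - name: Reject perf runners on manual dispatch
    # "*_PERF" machines are reserved for the benchmarking workflow;
    # fail early instead of only warning in a comment.
    if: ${{ github.event_name == 'workflow_dispatch' && contains(inputs.runner, '_PERF') }}
    run: |
      echo "::error::Use sycl-ur-perf-benchmarking.yml for '*_PERF' runners"
      exit 1
```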

1. Right now, changing only the benchmark framework triggers a full SYCL build.
  Thanks to this change, only the relevant jobs are triggered when testing the framework.
2. The nightly build was separated. I believe keeping everything in one place makes
  it easier to maintain changes.

No changes in logic/builds were made in this commit - only minor cleanups
like names, plus changed triggers.


# Nightly benchmarking path:
build_nightly:
name: '[Nightly] Build SYCL'
Contributor:

I didn't look at the code for the jobs in depth, but why do we need separate jobs depending on the workflow call type?
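
If the duplication were to be reduced, one option is a single job whose trigger-specific bits are selected with expressions instead of duplicated `*_dispatch` copies; a rough sketch (the reusable workflow name and `commit_hash` input are hypothetical):

```yaml
build_sycl:
  name: Build SYCL
  needs: [ sanitize_inputs ]
  # One build job for every trigger; per-event differences are picked
  # via expressions rather than separate nightly/dispatch jobs.
  uses: ./.github/workflows/sycl-linux-build.yml
  with:
    build_ref: ${{ github.event_name == 'workflow_dispatch' && inputs.commit_hash || github.sha }}
```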

@aelovikov-intel (Contributor) left a comment:

The non-benchmarking workflow LGTM; my inline comment in sycl-linux-run-tests.yml doesn't affect anybody outside the benchmarking CI folks, so it's irrelevant.

The benchmarking workflow itself doesn't do anything dangerous/totally crazy, so I'd be fine merging as-is. That said, I didn't review the benchmarking jobs in detail (i.e., maintainability/best practices/etc.), but that's up to the folks involved with the benchmarking CI.

Please make sure @sarnex is fine with this before merging though.

@sarnex (Contributor) commented Oct 24, 2025:

My only blocking question is about the justification for the duplication.
