[CI][Bench] Combine all benchmark-related jobs in one place #20439
base: sycl
Changes from all commits: e59f6c1, d71c393, b3880c9, 5930203
This file was deleted.
```diff
@@ -1,53 +1,19 @@
-name: Run Benchmarks
+# A combined workflow for all benchmarks-related jobs for SYCL and UR.
+# Supports both manual triggering (dispatch) and nightly runs.
+# It also tests changes to benchmark scripts/framework in PR, if modified.
+name: SYCL Run Benchmarks
 
 on:
-  workflow_call:
-    inputs:
-      preset:
-        type: string
-        description: |
-          Benchmark presets to run: See /devops/scripts/benchmarks/presets.py
-        required: false
-        default: 'Minimal' # Only compute-benchmarks
-      pr_no:
-        type: string
-        description: |
-          PR no. to build SYCL from if specified: SYCL will be built from HEAD
-          of incoming branch used by the specified PR no.
-
-          If both pr_no and commit_hash are empty, the latest commit in
-          deployment branch will be used.
-        required: false
-        default: ''
-      commit_hash:
-        type: string
-        description: |
-          Commit hash (within intel/llvm) to build SYCL from if specified.
-
-          If both pr_no and commit_hash are empty, the latest commit in
-          deployment branch will be used.
-        required: false
-        default: ''
-      save_name:
-        type: string
-        description: |
-          Specify a custom name to use for the benchmark result: If uploading
-          results, this will be the name used to refer results from the current
-          run.
-        required: false
-        default: ''
-      upload_results:
-        type: string # true/false: workflow_dispatch does not support booleans
-        description: |
-          Upload results to https://intel.github.io/llvm/benchmarks/.
-        required: true
-      runner:
-        type: string
-        required: true
-      backend:
-        type: string
-        required: true
-
+  schedule:
+    # 3 hours ahead of SYCL nightly
+    - cron: '0 0 * * *'
+  # Run on pull requests only when a benchmark-related files were changed.
+  pull_request:
+    # These paths are exactly the same as in sycl-linux/windows-precommit.yml (to ignore over there)
+    paths:
+      - 'devops/scripts/benchmarks/**'
+      - 'devops/actions/run-tests/benchmark/**'
+      - '.github/workflows/sycl-ur-perf-benchmarking.yml'
   workflow_dispatch:
     inputs:
       preset:
@@ -60,6 +26,8 @@ on:
           - Minimal
           - Normal
           - Test
+          - Gromacs
+          - OneDNN
         default: 'Minimal' # Only compute-benchmarks
       pr_no:
         type: string
@@ -102,13 +70,14 @@ on:
         options:
           - 'level_zero:gpu'
           - 'level_zero_v2:gpu'
-          # As of #17407, sycl-linux-build now builds v2 by default
 
 permissions: read-all
 
 jobs:
-  sanitize_inputs:
-    name: Sanitize inputs
+  # Manual trigger (dispatch) path:
+  sanitize_inputs_dispatch:
+    name: '[Dispatch] Sanitize inputs'
+    if: github.event_name == 'workflow_dispatch'
     runs-on: ubuntu-latest
     env:
       COMMIT_HASH: ${{ inputs.commit_hash }}
@@ -156,25 +125,25 @@ jobs:
         echo "Final sanitized values:"
         cat $GITHUB_OUTPUT
 
-  build_sycl:
-    name: Build SYCL
-    needs: [ sanitize_inputs ]
+  build_sycl_dispatch:
```
**Comment:** As a next step I'd like to verify whether we could work with a single build job. Right now I just copy-pasted build jobs from other workflows, but they differ in parameters: e.g. sometimes we use the clang compiler and sometimes (by default) gcc. This should probably be discussed, and we should use the same build for all benchmark jobs, if possible.

**Reply:** Yes, I don't know the reason for using different compilers either. It seems a bit odd to compare on-demand results with nightly ones when SYCL was built with a totally different compiler.
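A hypothetical sketch of the single-build idea raised above: one reusable build invocation shared by all benchmarking paths, with the compiler made an explicit input. The `compiler_cc`/`compiler_cxx` input names and the `sycl_linux_benchmarks` artifact name are invented for illustration; only `sycl-linux-build.yml` and its parameters appear in this PR.

```yaml
# Sketch only, not part of this PR: a single parameterized build job that the
# dispatch, nightly, and PR paths could all reuse, so results stay comparable
# because every path builds SYCL with the same (or an explicitly chosen) compiler.
build_benchmarks:
  uses: ./.github/workflows/sycl-linux-build.yml
  with:
    build_cache_root: "/__w/"
    build_configure_extra_args: "--no-assertions"
    # Hypothetical inputs: default all paths to one compiler, override only
    # when a path genuinely needs a different toolchain.
    cc: ${{ inputs.compiler_cc || 'gcc' }}
    cxx: ${{ inputs.compiler_cxx || 'g++' }}
    toolchain_artifact: sycl_linux_benchmarks
```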
```diff
+    name: '[Dispatch] Build SYCL'
+    needs: [ sanitize_inputs_dispatch ]
     uses: ./.github/workflows/sycl-linux-build.yml
     with:
-      build_ref: ${{ needs.sanitize_inputs.outputs.build_ref }}
+      build_ref: ${{ needs.sanitize_inputs_dispatch.outputs.build_ref }}
       build_cache_root: "/__w/"
       build_cache_suffix: "prod_noassert"
       build_configure_extra_args: "--no-assertions"
       build_image: "ghcr.io/intel/llvm/sycl_ubuntu2404_nightly:latest"
       cc: clang
       cxx: clang++
       changes: '[]'
       toolchain_artifact: sycl_linux_prod_noassert
 
-  run_benchmarks_build:
-    name: Run Benchmarks on Build
-    needs: [ build_sycl, sanitize_inputs ]
+  benchmark_dispatch:
+    name: '[Dispatch] Benchmarks'
+    needs: [ build_sycl_dispatch, sanitize_inputs_dispatch ]
+    if: always() && !cancelled() && needs.build_sycl_dispatch.outputs.build_conclusion == 'success'
     strategy:
       matrix:
         include:
@@ -184,16 +153,100 @@ jobs:
     uses: ./.github/workflows/sycl-linux-run-tests.yml
     secrets: inherit
     with:
-      name: Run compute-benchmarks (${{ matrix.save_name }}, ${{ matrix.runner }}, ${{ matrix.backend }})
+      name: "Benchmarks (${{ matrix.runner }}, ${{ matrix.backend }}, preset: ${{ matrix.preset }})"
       runner: ${{ matrix.runner }}
       image: ghcr.io/intel/llvm/sycl_ubuntu2404_nightly:latest
       image_options: -u 1001 --device=/dev/dri -v /dev/dri/by-path:/dev/dri/by-path --privileged --cap-add SYS_ADMIN
       target_devices: ${{ matrix.backend }}
       tests_selector: benchmarks
       benchmark_upload_results: ${{ inputs.upload_results }}
-      benchmark_save_name: ${{ needs.sanitize_inputs.outputs.benchmark_save_name }}
+      benchmark_save_name: ${{ needs.sanitize_inputs_dispatch.outputs.benchmark_save_name }}
       benchmark_preset: ${{ inputs.preset }}
-      repo_ref: ${{ needs.sanitize_inputs.outputs.build_ref }}
-      toolchain_artifact: ${{ needs.build_sycl.outputs.toolchain_artifact }}
-      toolchain_artifact_filename: ${{ needs.build_sycl.outputs.toolchain_artifact_filename }}
-      toolchain_decompress_command: ${{ needs.build_sycl.outputs.toolchain_decompress_command }}
+      repo_ref: ${{ needs.sanitize_inputs_dispatch.outputs.build_ref }}
+      toolchain_artifact: ${{ needs.build_sycl_dispatch.outputs.toolchain_artifact }}
+      toolchain_artifact_filename: ${{ needs.build_sycl_dispatch.outputs.toolchain_artifact_filename }}
+      toolchain_decompress_command: ${{ needs.build_sycl_dispatch.outputs.toolchain_decompress_command }}
+  # END manual trigger (dispatch) path
 
+  # Nightly benchmarking path:
+  build_nightly:
+    name: '[Nightly] Build SYCL'
```
**Comment:** I didn't look at the code for the jobs in depth, but why do we need separate jobs depending on the workflow trigger type?
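As one possible answer to the question above, sketched under the assumption that the per-trigger differences are only parameter values: a single job could switch its inputs on `github.event_name` using the GitHub Actions `cond && a || b` expression idiom, at the cost of less readable conditions.

```yaml
# Sketch only: one benchmark job whose parameters depend on the trigger type,
# instead of separate *_dispatch/*_nightly jobs.
benchmark:
  uses: ./.github/workflows/sycl-linux-run-tests.yml
  with:
    # 'Full' preset on the nightly schedule, the user's choice on dispatch.
    benchmark_preset: ${{ github.event_name == 'schedule' && 'Full' || inputs.preset }}
    # Nightly runs always upload; dispatch runs upload only on explicit opt-in.
    benchmark_upload_results: ${{ github.event_name == 'schedule' && 'true' || inputs.upload_results }}
```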
```diff
+    if: github.repository == 'intel/llvm' && github.event_name == 'schedule'
+    uses: ./.github/workflows/sycl-linux-build.yml
+    secrets: inherit
+    with:
+      build_cache_root: "/__w/"
+      build_configure_extra_args: '--no-assertions'
+      build_image: ghcr.io/intel/llvm/ubuntu2404_build:latest
+      toolchain_artifact: sycl_linux_default
+      toolchain_artifact_filename: sycl_linux.tar.gz
 
+  benchmark_nightly:
+    name: '[Nightly] Benchmarks'
+    needs: [build_nightly]
+    if: always() && !cancelled() && needs.build_nightly.outputs.build_conclusion == 'success'
+    strategy:
+      fail-fast: false
+      matrix:
+        runner: ['["PVC_PERF"]', '["BMG_PERF"]']
+        backend: ['level_zero:gpu', 'level_zero_v2:gpu']
+        include:
+          - ref: ${{ github.sha }}
+            save_name: 'Baseline'
+            preset: 'Full'
+    uses: ./.github/workflows/sycl-linux-run-tests.yml
+    secrets: inherit
+    with:
+      name: "Benchmarks (${{ matrix.runner }}, ${{ matrix.backend }}, preset: ${{ matrix.preset }})"
+      runner: ${{ matrix.runner }}
+      image: ghcr.io/intel/llvm/sycl_ubuntu2404_nightly:latest
+      image_options: -u 1001 --device=/dev/dri -v /dev/dri/by-path:/dev/dri/by-path --privileged --cap-add SYS_ADMIN
+      target_devices: ${{ matrix.backend }}
+      tests_selector: benchmarks
+      benchmark_upload_results: true
+      benchmark_save_name: ${{ matrix.save_name }}
+      benchmark_preset: ${{ matrix.preset }}
+      repo_ref: ${{ matrix.ref }}
+      toolchain_artifact: ${{ needs.build_nightly.outputs.toolchain_artifact }}
+      toolchain_artifact_filename: ${{ needs.build_nightly.outputs.toolchain_artifact_filename }}
+      toolchain_decompress_command: ${{ needs.build_nightly.outputs.toolchain_decompress_command }}
+  # END nightly benchmarking path
 
+  # Benchmark framework builds and runs on PRs path:
+  build_pr:
+    name: '[PR] Build SYCL'
+    if: github.event_name == 'pull_request'
+    uses: ./.github/workflows/sycl-linux-build.yml
+    with:
+      build_ref: ${{ github.sha }}
+      build_cache_root: "/__w/"
+      build_cache_suffix: "default"
+      # Docker image has last nightly pre-installed and added to the PATH
+      build_image: "ghcr.io/intel/llvm/sycl_ubuntu2404_nightly:latest"
+      cc: clang
+      cxx: clang++
+      changes: ${{ needs.detect_changes.outputs.filters }}
+      toolchain_artifact: sycl_linux_default
 
+  # TODO: When we have stable BMG runner(s), consider moving this job to that runner.
+  test_benchmark_framework:
+    name: '[PR] Benchmark suite testing'
+    needs: [build_pr]
+    if: always() && !cancelled() && needs.build_pr.outputs.build_conclusion == 'success'
+    uses: ./.github/workflows/sycl-linux-run-tests.yml
+    with:
+      name: 'Framework test: PVC_PERF, L0, Minimal preset'
+      runner: '["PVC_PERF"]'
```
**Comment:** @intel/llvm-reviewers-benchmarking Perhaps we should change this to the BMG runner? I guess the PVC runner is used more often for manual runs, so we wouldn't have to wait for benchmarks to finish just to test the framework. What do you think?

**Reply:** Once there are two stable BMG machines, this would be a good move. For now, the PVC machine is much more stable than the BMG one, so I would stay with the current config. Add a TODO to consider the change in the future, though.

**Reply:** You're right. TODO added.
```diff
+      image: ghcr.io/intel/llvm/sycl_ubuntu2404_nightly:latest
+      image_options: -u 1001 --device=/dev/dri -v /dev/dri/by-path:/dev/dri/by-path --privileged --cap-add SYS_ADMIN
+      target_devices: 'level_zero:gpu'
+      tests_selector: benchmarks
+      benchmark_upload_results: false
+      benchmark_preset: 'Minimal'
+      benchmark_dry_run: true
+      repo_ref: ${{ github.sha }}
+      toolchain_artifact: ${{ needs.build.outputs.toolchain_artifact }}
+      toolchain_artifact_filename: ${{ needs.build.outputs.toolchain_artifact_filename }}
+      toolchain_decompress_command: ${{ needs.build.outputs.toolchain_decompress_command }}
+  # END benchmark framework builds and runs on PRs path
```
**Comment:** This comment can be removed...?

**Reply:** I would leave it, so that no one adds these perf machines here.

**Reply:** Instead we could add a check on the runner and error out if it's not a supported one.

**Reply:** Why shouldn't they be used? We aren't really disallowing that (one can still push a branch with a modified `sycl-linux-run-tests.yml` and then do a manual run from that branch). The whole point of this `workflow_dispatch` is debugging CI and reducing CI load when making PRs to it. If the UR benchmarking workflows do call `sycl-linux-run-tests.yml` with those parameters, then they are needed here for such experiments.
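The runner check suggested above could be a small validation step early in the workflow. This is a sketch with a hard-coded allow-list (`PVC_PERF` and `BMG_PERF` are the only perf runners named in this workflow); the variable name and the step itself are illustrative, not part of the PR.

```shell
#!/usr/bin/env sh
# Sketch of the suggested validation: fail early if the requested runner is
# not one of the perf runners this workflow knows about. In the workflow this
# would run as a step with: runner="${{ inputs.runner }}"
runner='["PVC_PERF"]'
case "$runner" in
  '["PVC_PERF"]'|'["BMG_PERF"]')
    echo "runner OK: $runner"
    ;;
  *)
    # ::error:: is the GitHub Actions workflow-command syntax for annotations.
    echo "::error::Unsupported runner: $runner"
    exit 1
    ;;
esac
```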