2 changes: 2 additions & 0 deletions .github/workflows/sycl-detect-changes.yml
@@ -67,6 +67,8 @@ jobs:
- devops/dependencies-igc-dev.json
benchmarks:
- 'devops/scripts/benchmarks/**'
- 'devops/actions/run-tests/benchmark/**'
- '.github/workflows/sycl-ur-perf-benchmarking.yml'
perf-tests:
- sycl/test-e2e/PerformanceTests/**
esimd:
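The new filter entries above are surfaced through the action's `filters` output; downstream jobs gate on it with `contains(...)`, as seen elsewhere in this PR. A hedged sketch of that gating pattern (the job name and its step are illustrative, not part of this PR):

```yaml
jobs:
  some_benchmark_job:
    needs: [detect_changes]
    # Run only when benchmark-related paths changed:
    if: contains(needs.detect_changes.outputs.filters, 'benchmarks')
    runs-on: ubuntu-latest
    steps:
      - run: echo "benchmark-related files changed"
```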
26 changes: 4 additions & 22 deletions .github/workflows/sycl-linux-precommit.yml
@@ -9,6 +9,7 @@ on:
- sycl
- sycl-rel-**
# Do not run builds if changes are only in the following locations
# Note: benchmark-related paths are the same as in sycl-ur-perf-benchmarking.yml (to run there instead)
paths-ignore:
- '.github/ISSUE_TEMPLATE/**'
- '.github/CODEOWNERS'
@@ -31,6 +32,9 @@ on:
- 'unified-runtime/test/**'
- 'unified-runtime/third_party/**'
- 'unified-runtime/tools/**'
- 'devops/scripts/benchmarks/**'
- 'devops/actions/run-tests/benchmark/**'
- '.github/workflows/sycl-ur-perf-benchmarking.yml'

concurrency:
# Cancel a currently running workflow from the same PR, branch or tag.
@@ -224,28 +228,6 @@ jobs:
skip_run: ${{matrix.use_igc_dev && contains(github.event.pull_request.labels.*.name, 'ci-no-devigc') || 'false'}}
env: ${{ matrix.env || (contains(needs.detect_changes.outputs.filters, 'esimd') && '{}' || '{"LIT_FILTER_OUT":"ESIMD/"}') }}

test_benchmark_scripts:
needs: [build, detect_changes]
if: |
always() && !cancelled()
&& needs.build.outputs.build_conclusion == 'success'
&& contains(needs.detect_changes.outputs.filters, 'benchmarks')
uses: ./.github/workflows/sycl-linux-run-tests.yml
with:
name: Benchmark suite precommit testing
runner: '["PVC_PERF"]'
image: ghcr.io/intel/llvm/sycl_ubuntu2404_nightly:latest
image_options: -u 1001 --device=/dev/dri -v /dev/dri/by-path:/dev/dri/by-path --privileged --cap-add SYS_ADMIN
target_devices: 'level_zero:gpu'
tests_selector: benchmarks
benchmark_upload_results: false
benchmark_preset: 'Minimal'
benchmark_dry_run: true
repo_ref: ${{ github.sha }}
toolchain_artifact: ${{ needs.build.outputs.toolchain_artifact }}
toolchain_artifact_filename: ${{ needs.build.outputs.toolchain_artifact_filename }}
toolchain_decompress_command: ${{ needs.build.outputs.toolchain_decompress_command }}

test-perf:
needs: [build, detect_changes]
if: |
6 changes: 4 additions & 2 deletions .github/workflows/sycl-linux-run-tests.yml
@@ -134,6 +134,7 @@ on:
type: string
default: 'Minimal'
required: False
# dry-run is passed only to compare.py (so a regression does not fail the run), not to main.py (where such a flag would skip all benchmark runs)
benchmark_dry_run:
description: |
Whether or not to fail the workflow upon a regression.
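Per the comment above, the dry-run flag reaches only the comparison step. A minimal sketch of such conditional forwarding inside a run step (the script paths and the `--dry-run` flag name are assumptions for illustration, not the repository's verified interface):

```yaml
# Hypothetical step; flag names are assumptions.
- name: Run benchmarks, then compare
  run: |
    # main.py never receives a dry-run flag, so benchmarks always run:
    python3 devops/scripts/benchmarks/main.py
    # compare.py gets --dry-run so regressions are reported, not fatal:
    if [ "${{ inputs.benchmark_dry_run }}" = "true" ]; then
      python3 devops/scripts/benchmarks/compare.py --dry-run
    else
      python3 devops/scripts/benchmarks/compare.py
    fi
```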
@@ -144,6 +145,9 @@ on:
workflow_dispatch:
inputs:
runner:
# While it's ok to run benchmarks in this workflow, "*_PERF" machines
# shouldn't be used on manual dispatch. Instead, use sycl-ur-perf-benchmarking.yml
# for manual execution of benchmarks on "*_PERF" machines.
> Contributor: This comment can be removed...?
> Contributor (Author): I would leave it, so no one would add these perf machines here.
> Contributor: Instead we could add a check on the runner and error out if it's not a supported one.
> Contributor: Why shouldn't they be used? We aren't really disallowing that (one can still push a branch with a modified sycl-linux-run-tests.yml and then have a manual run from that branch). The whole point of this workflow_dispatch is debugging CI / reducing CI load when making PRs to it. If UR benchmarking workflows do call sycl-linux-run-tests.yml with those parameters, then they are needed here for such experiments.
type: choice
options:
- '["Linux", "gen12"]'
@@ -153,7 +157,6 @@ on:
- '["cts-cpu"]'
- '["Linux", "build"]'
- '["cuda"]'
- '["PVC_PERF"]'
image:
type: choice
options:
@@ -182,7 +185,6 @@ on:
options:
- e2e
- cts
- benchmarks
toolchain_release_tag:
description: |
Tag of a "Nightly" release at https://github.com/intel/llvm/releases.
52 changes: 0 additions & 52 deletions .github/workflows/sycl-nightly-benchmarking.yml

This file was deleted.

183 changes: 118 additions & 65 deletions .github/workflows/sycl-ur-perf-benchmarking.yml
@@ -1,53 +1,19 @@
name: Run Benchmarks
# A combined workflow for all benchmarks-related jobs for SYCL and UR.
# Supports both manual triggering (dispatch) and nightly runs.
# It also tests changes to benchmark scripts/framework in PR, if modified.
name: SYCL Run Benchmarks

on:
workflow_call:
inputs:
preset:
type: string
description: |
Benchmark presets to run: See /devops/scripts/benchmarks/presets.py
required: false
default: 'Minimal' # Only compute-benchmarks
pr_no:
type: string
description: |
PR no. to build SYCL from if specified: SYCL will be built from HEAD
of incoming branch used by the specified PR no.

If both pr_no and commit_hash are empty, the latest commit in
deployment branch will be used.
required: false
default: ''
commit_hash:
type: string
description: |
Commit hash (within intel/llvm) to build SYCL from if specified.

If both pr_no and commit_hash are empty, the latest commit in
deployment branch will be used.
required: false
default: ''
save_name:
type: string
description: |
Specify a custom name to use for the benchmark result: If uploading
results, this will be the name used to refer results from the current
run.
required: false
default: ''
upload_results:
type: string # true/false: workflow_dispatch does not support booleans
description: |
Upload results to https://intel.github.io/llvm/benchmarks/.
required: true
runner:
type: string
required: true
backend:
type: string
required: true

schedule:
# 3 hours ahead of SYCL nightly
- cron: '0 0 * * *'
# Run on pull requests only when benchmark-related files were changed.
pull_request:
# These paths are exactly the same as in sycl-linux/windows-precommit.yml (where they are ignored)
paths:
- 'devops/scripts/benchmarks/**'
- 'devops/actions/run-tests/benchmark/**'
- '.github/workflows/sycl-ur-perf-benchmarking.yml'
workflow_dispatch:
inputs:
preset:
@@ -60,6 +26,8 @@ on:
- Minimal
- Normal
- Test
- Gromacs
- OneDNN
default: 'Minimal' # Only compute-benchmarks
pr_no:
type: string
@@ -102,13 +70,14 @@ on:
options:
- 'level_zero:gpu'
- 'level_zero_v2:gpu'
# As of #17407, sycl-linux-build now builds v2 by default

permissions: read-all

jobs:
sanitize_inputs:
name: Sanitize inputs
# Manual trigger (dispatch) path:
sanitize_inputs_dispatch:
name: '[Dispatch] Sanitize inputs'
if: github.event_name == 'workflow_dispatch'
runs-on: ubuntu-latest
env:
COMMIT_HASH: ${{ inputs.commit_hash }}
@@ -156,25 +125,25 @@ jobs:
echo "Final sanitized values:"
cat $GITHUB_OUTPUT

build_sycl:
name: Build SYCL
needs: [ sanitize_inputs ]
build_sycl_dispatch:
> Contributor (Author): As a next step I'd like to verify whether we could work with a single build job. Right now I just copy-pasted build jobs from other workflows, but they differ in parameters: e.g. sometimes we use the clang compiler and sometimes (by default) gcc. This should probably be discussed; we should just use the same build for all benchmark jobs, if possible.
> Contributor: Yes, I don't know the reason for the different compiler usage either. It seems a bit odd to compare on-demand results with nightly ones when sycl was built with a totally different compiler.

name: '[Dispatch] Build SYCL'
needs: [ sanitize_inputs_dispatch ]
uses: ./.github/workflows/sycl-linux-build.yml
with:
build_ref: ${{ needs.sanitize_inputs.outputs.build_ref }}
build_ref: ${{ needs.sanitize_inputs_dispatch.outputs.build_ref }}
build_cache_root: "/__w/"
build_cache_suffix: "prod_noassert"
build_configure_extra_args: "--no-assertions"
build_image: "ghcr.io/intel/llvm/sycl_ubuntu2404_nightly:latest"
cc: clang
cxx: clang++
changes: '[]'

toolchain_artifact: sycl_linux_prod_noassert

run_benchmarks_build:
name: Run Benchmarks on Build
needs: [ build_sycl, sanitize_inputs ]
benchmark_dispatch:
name: '[Dispatch] Benchmarks'
needs: [ build_sycl_dispatch, sanitize_inputs_dispatch ]
if: always() && !cancelled() && needs.build_sycl_dispatch.outputs.build_conclusion == 'success'
strategy:
matrix:
include:
@@ -184,16 +153,100 @@ jobs:
uses: ./.github/workflows/sycl-linux-run-tests.yml
secrets: inherit
with:
name: Run compute-benchmarks (${{ matrix.save_name }}, ${{ matrix.runner }}, ${{ matrix.backend }})
name: "Benchmarks (${{ matrix.runner }}, ${{ matrix.backend }}, preset: ${{ matrix.preset }})"
runner: ${{ matrix.runner }}
image: ghcr.io/intel/llvm/sycl_ubuntu2404_nightly:latest
image_options: -u 1001 --device=/dev/dri -v /dev/dri/by-path:/dev/dri/by-path --privileged --cap-add SYS_ADMIN
target_devices: ${{ matrix.backend }}
tests_selector: benchmarks
benchmark_upload_results: ${{ inputs.upload_results }}
benchmark_save_name: ${{ needs.sanitize_inputs.outputs.benchmark_save_name }}
benchmark_save_name: ${{ needs.sanitize_inputs_dispatch.outputs.benchmark_save_name }}
benchmark_preset: ${{ inputs.preset }}
repo_ref: ${{ needs.sanitize_inputs.outputs.build_ref }}
toolchain_artifact: ${{ needs.build_sycl.outputs.toolchain_artifact }}
toolchain_artifact_filename: ${{ needs.build_sycl.outputs.toolchain_artifact_filename }}
toolchain_decompress_command: ${{ needs.build_sycl.outputs.toolchain_decompress_command }}
repo_ref: ${{ needs.sanitize_inputs_dispatch.outputs.build_ref }}
toolchain_artifact: ${{ needs.build_sycl_dispatch.outputs.toolchain_artifact }}
toolchain_artifact_filename: ${{ needs.build_sycl_dispatch.outputs.toolchain_artifact_filename }}
toolchain_decompress_command: ${{ needs.build_sycl_dispatch.outputs.toolchain_decompress_command }}
# END manual trigger (dispatch) path
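With the dispatch path consolidated here, a manual run can be kicked off from the CLI. A sketch using `gh workflow run` (input values are illustrative; the input names mirror the `workflow_dispatch` inputs defined above):

```shell
# Build the dispatch command for the benchmarking workflow.
# Values are illustrative; executing it requires an authenticated `gh` CLI.
CMD="gh workflow run sycl-ur-perf-benchmarking.yml \
  -f preset=Minimal \
  -f upload_results=false \
  -f runner='[\"PVC_PERF\"]' \
  -f backend=level_zero:gpu"
echo "$CMD"
# Uncomment to actually trigger the run:
# eval "$CMD"
```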

# Nightly benchmarking path:
build_nightly:
name: '[Nightly] Build SYCL'
> Contributor: I didn't look at the code for the jobs in depth, but why do we need separate jobs depending on the workflow call type?

if: github.repository == 'intel/llvm' && github.event_name == 'schedule'
uses: ./.github/workflows/sycl-linux-build.yml
secrets: inherit
with:
build_cache_root: "/__w/"
build_configure_extra_args: '--no-assertions'
build_image: ghcr.io/intel/llvm/ubuntu2404_build:latest

toolchain_artifact: sycl_linux_default
toolchain_artifact_filename: sycl_linux.tar.gz

benchmark_nightly:
name: '[Nightly] Benchmarks'
needs: [build_nightly]
if: always() && !cancelled() && needs.build_nightly.outputs.build_conclusion == 'success'
strategy:
fail-fast: false
matrix:
runner: ['["PVC_PERF"]', '["BMG_PERF"]']
backend: ['level_zero:gpu', 'level_zero_v2:gpu']
include:
- ref: ${{ github.sha }}
save_name: 'Baseline'
preset: 'Full'
uses: ./.github/workflows/sycl-linux-run-tests.yml
secrets: inherit
with:
name: "Benchmarks (${{ matrix.runner }}, ${{ matrix.backend }}, preset: ${{ matrix.preset }})"
runner: ${{ matrix.runner }}
image: ghcr.io/intel/llvm/sycl_ubuntu2404_nightly:latest
image_options: -u 1001 --device=/dev/dri -v /dev/dri/by-path:/dev/dri/by-path --privileged --cap-add SYS_ADMIN
target_devices: ${{ matrix.backend }}
tests_selector: benchmarks
benchmark_upload_results: true
benchmark_save_name: ${{ matrix.save_name }}
benchmark_preset: ${{ matrix.preset }}
repo_ref: ${{ matrix.ref }}
toolchain_artifact: ${{ needs.build_nightly.outputs.toolchain_artifact }}
toolchain_artifact_filename: ${{ needs.build_nightly.outputs.toolchain_artifact_filename }}
toolchain_decompress_command: ${{ needs.build_nightly.outputs.toolchain_decompress_command }}
# END nightly benchmarking path
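The nightly matrix crosses two runners with two backends, and because the `include` entry introduces only keys absent from the matrix axes (`ref`, `save_name`, `preset`), GitHub Actions adds them to every combination. A quick sketch of the resulting expansion:

```python
from itertools import product

# Matrix axes as declared in the nightly job above.
runners = ['["PVC_PERF"]', '["BMG_PERF"]']
backends = ["level_zero:gpu", "level_zero_v2:gpu"]
# Keys from the include entry, applied to every combination.
common = {"save_name": "Baseline", "preset": "Full"}

# Each matrix job is one (runner, backend) pair plus the common keys.
jobs = [{"runner": r, "backend": b, **common} for r, b in product(runners, backends)]
print(len(jobs))  # 4 nightly benchmark jobs
```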

# Benchmark framework builds and runs on PRs path:
build_pr:
name: '[PR] Build SYCL'
if: github.event_name == 'pull_request'
uses: ./.github/workflows/sycl-linux-build.yml
with:
build_ref: ${{ github.sha }}
build_cache_root: "/__w/"
build_cache_suffix: "default"
# Docker image has last nightly pre-installed and added to the PATH
build_image: "ghcr.io/intel/llvm/sycl_ubuntu2404_nightly:latest"
cc: clang
cxx: clang++
changes: '[]'
toolchain_artifact: sycl_linux_default

# TODO: When we have stable BMG runner(s), consider moving this job to that runner.
test_benchmark_framework:
name: '[PR] Benchmark suite testing'
needs: [build_pr]
if: always() && !cancelled() && needs.build_pr.outputs.build_conclusion == 'success'
uses: ./.github/workflows/sycl-linux-run-tests.yml
with:
name: 'Framework test: PVC_PERF, L0, Minimal preset'
runner: '["PVC_PERF"]'
> Contributor (Author): @intel/llvm-reviewers-benchmarking Perhaps we should change this to the BMG runner? I guess the PVC runner is used more often for manual runs, so we wouldn't have to wait for benchmarks to finish just to test the framework...? What do you think?
> Contributor: Once there are two stable BMG machines, this would be a good move. For now, the PVC machine is much, much, much more stable than the BMG one, so I would stay with the current config. Add a TODO to consider the change in the future, though.
> Contributor (Author): You're so right. TODO added.

image: ghcr.io/intel/llvm/sycl_ubuntu2404_nightly:latest
image_options: -u 1001 --device=/dev/dri -v /dev/dri/by-path:/dev/dri/by-path --privileged --cap-add SYS_ADMIN
target_devices: 'level_zero:gpu'
tests_selector: benchmarks
benchmark_upload_results: false
benchmark_preset: 'Minimal'
benchmark_dry_run: true
repo_ref: ${{ github.sha }}
toolchain_artifact: ${{ needs.build_pr.outputs.toolchain_artifact }}
toolchain_artifact_filename: ${{ needs.build_pr.outputs.toolchain_artifact_filename }}
toolchain_decompress_command: ${{ needs.build_pr.outputs.toolchain_decompress_command }}
# END benchmark framework builds and runs on PRs path
4 changes: 4 additions & 0 deletions .github/workflows/sycl-windows-precommit.yml
@@ -7,6 +7,7 @@ on:
- llvmspirv_pulldown
- sycl-rel-**
# Do not run builds if changes are only in the following locations
# Note: benchmark-related paths are the same as in sycl-ur-perf-benchmarking.yml (to run there instead)
paths-ignore:
- '.github/ISSUE_TEMPLATE/**'
- '.github/CODEOWNERS'
@@ -31,6 +32,9 @@ on:
- 'unified-runtime/test/**'
- 'unified-runtime/third_party/**'
- 'unified-runtime/tools/**'
- 'devops/scripts/benchmarks/**'
- 'devops/actions/run-tests/benchmark/**'
- '.github/workflows/sycl-ur-perf-benchmarking.yml'

permissions: read-all
