[device-libs][comgr] - Add gfx1250 and gfx1251 support #512
Conversation
…op's Python bindings (llvm#166134) Expose missing boolean arguments in `VectorizeChildrenAndApplyPatternsOp` Python bindings.
Follow-up to bcb3d2f. Even though 32-bit win/asan is not well supported, we shouldn't drop it without at least some discussion. Also, we probably shouldn't drop the other sanitizers that are gated by COMPILER_RT_BUILD_SANITIZERS. However, the tests no longer pass after switching to the runtimes build (I believe that build mode runs more of the tests?), so disable them.
…vm#165317) Upstream EHScope & Cleanup iterators, helpers and operator overloading as a prerequisite for llvm#165158 Issue llvm#154992
Just to get a little more test coverage. Signed-off-by: Nick Sarnie <nick.sarnie@intel.com>
This pass rewrites certain xegpu `CreateNd` and `LoadNd` operations that feed into `vector.transpose` into a more optimal form to improve performance. Specifically, low precision (bitwidth < 32) `LoadNd` ops that feed into transpose ops are rewritten to i32 loads with a valid transpose layout such that later passes can use the load-with-transpose HW feature to accelerate such load ops. **Update:** The pass is renamed to `OptimizeBlockLoads` because later we plan to add the array length optimization into this pass as well. This will break down a larger load (like `32x32xf16`) into more DPAS-favorable array length loads (`32x16xf16` with array length = 2). Both these optimizations require rewriting `CreateNd` and `LoadNd`, and it makes sense to have a common pass for both.
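To make the packing concrete, here is a small numpy sketch (illustrative only, not the MLIR pass itself): reinterpreting pairs of f16 values as single i32 lanes lets a 32-bit transpose move the data, after which a bitcast recovers the f16 layout. Shapes and names here are chosen for the example.

```python
# Illustrative numpy model of the i32-packing rewrite; not taken from the pass.
import numpy as np

a = np.arange(32 * 32, dtype=np.float16).reshape(32, 32)

# Bitcast each row's f16 pairs into i32 lanes: 32x32xf16 -> 32x16xi32.
packed = a.view(np.uint32)
assert packed.shape == (32, 16)

# A 32-bit transpose now moves f16 *pairs* as units, which is the shape
# a load-with-transpose HW feature can produce directly.
transposed = np.ascontiguousarray(packed.T)   # 16x32 (i32 lanes)
unpacked = transposed.view(np.float16)        # 16x64 (f16 elements)
assert unpacked.shape == (16, 64)
```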
…166279) Fixes llvm#164018. The problem is that we're unable to do an implicit conversion sequence on a template deduced argument, so the current vector templates can't reconcile `vector<int, 4>` with `vector<bool, Sz>`. This PR separates the vector templates into size-specific ones, getting rid of the `Sz` deduction and allowing for the implicit conversion to be done.
…9197) The splitMustacheString function was saving StringRefs that were already backed by an arena-allocated string. This was unnecessary work. This change removes the redundant Ctx.Saver.save() call. This optimization provides a small but measurable performance improvement on top of the single-pass tokenizer, most notably reducing branch misses.

| Metric | Baseline | Optimized | Change |
| --- | --- | --- | --- |
| Time (ms) | 35.77 | 35.57 | -0.56% |
| Cycles | 35.16M | 34.91M | -0.71% |
| Instructions | 85.77M | 85.54M | -0.27% |
| Branch Misses | 113.9K | 111.9K | -1.76% |
| Cache Misses | 237.7K | 242.1K | +1.85% |
The WMMA MI(s) are missing the isConvergent flag. This causes incorrect behavior in passes like machine-sink, where WMMA instructions get sunk into divergent branches. This patch fixes the issue by setting the isConvergent flag to 1 in the VOP3PInstructions.td file.
…166436) This fixes register information handling for the `preserve_most` and `preserve_all` calling conventions on Windows ARM64. The root issue was cascading `if` statements whose behavior depended on their order. This patch makes the minimal, tactical change needed for Swift’s two calling conventions, unblocking current work. A broader refactor to remove the ordering dependency is still desired and will follow in a subsequent PR.
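As a rough illustration of that root cause (hypothetical names, not the actual LLVM code): with cascading independent `if`s, whichever assignment runs last wins, so the result silently depends on statement order; spelling out the combined case removes the ordering dependency.

```python
# Hypothetical stand-ins for the real callee-saved register lists.
def default_set():             return {"x19"}
def preserve_most_set():       return {"x9", "x19"}
def win64_set():               return {"x18", "x19"}
def preserve_most_win64_set(): return {"x9", "x18", "x19"}

def saved_regs_order_dependent(cc, is_win64):
    regs = default_set()
    if cc == "preserve_most":
        regs = preserve_most_set()
    if is_win64:               # runs last, silently overriding the line above
        regs = win64_set()
    return regs

def saved_regs_explicit(cc, is_win64):
    # Ordering no longer matters: the combined case is spelled out.
    if cc == "preserve_most" and is_win64:
        return preserve_most_win64_set()
    if cc == "preserve_most":
        return preserve_most_set()
    if is_win64:
        return win64_set()
    return default_set()

# The order-dependent version loses the preserve_most information on win64.
assert saved_regs_order_dependent("preserve_most", True) == {"x18", "x19"}
assert saved_regs_explicit("preserve_most", True) == {"x9", "x18", "x19"}
```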
…vm#166233) Merging the latches of loops may affect the dispositions, so they should be forgotten after the merge. This patch fixes the crash in loop fusion [llvm#164082](llvm#164082).
… dynamic" (llvm#166448) Reverts llvm#157330 The original revision introduces a bug in `isGuaranteedCollapsible`. The `memref<3x3x1x96xf32, strided<[288, 96, 96, 1], offset: 864>>` is no longer collapsable with the change. The revision reverts the change to bring back correct behavior. `stride` should be computed as `96` like the old behavior in the failed iteration. https://github.com/llvm/llvm-project/blob/92a1eb37122fa24e3045fbabdea2bf87127cace5/mlir/lib/Dialect/MemRef/IR/MemRefOps.cpp#L2597-L2605
Otherwise we might end up with undefined register uses. Copying implicit uses can cause problems where a register is both defined and used in the same LDP, so I have not tried to add them here. Fixes llvm#164230
… LTO (llvm#135059) The optimization records YAML files generated by Clang's LTO pipeline are named "\*.opt.ld.yaml" rather than "\*.opt.yaml". This patch adds that pattern into the search list of `find_opt_files` as well.
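In Python terms, the change amounts to adding one more glob pattern to the search; a sketch of the idea, not the actual opt-viewer code:

```python
# Sketch: collect optimization-record YAML from both naming schemes.
from pathlib import Path

def find_opt_files(root):
    patterns = ["*.opt.yaml", "*.opt.ld.yaml"]   # second pattern: LTO output
    return [str(p) for pat in patterns for p in Path(root).rglob(pat)]
```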
This will fix some symbolication failures on arm64e machines when the symbolicator passes the (wrong) architecture string to atos.
In [1], the symbol __bpf_trap (macro BPF_TRAP) is removed if it is not used in the code. During the discussion in [1], it was found that the branch "if (Op.isSymbol())" is actually always false. Remove it to avoid confusion. [1] llvm#166003
…m#166288) We're only supporting Clang 20+ and Apple Clang 17 now, where these builtins are universally implemented.
…#159198) The splitMustacheString function previously used a loop of StringRef::split and StringRef::trim. This was inefficient as it scanned each segment of the accessor string multiple times. This change introduces a custom splitAndTrim function that performs both operations in a single pass over the string, reducing redundant work and improving performance, most notably in the number of CPU cycles executed.

| Metric | Baseline | Optimized | Change |
| --- | --- | --- | --- |
| Time (ms) | 35.57 | 35.36 | -0.59% |
| Cycles | 34.91M | 34.26M | -1.86% |
| Instructions | 85.54M | 85.24M | -0.35% |
| Branch Misses | 111.9K | 112.2K | +0.27% |
| Cache Misses | 242.1K | 239.9K | -0.91% |
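A Python sketch of the single-pass idea (illustrative, not the C++ implementation): instead of splitting and then trimming each segment in separate scans, track the trimmed bounds of the current segment while walking the accessor string once.

```python
# One pass over the string: seg_start/seg_end track the trimmed extent
# of the segment being built; a sentinel separator flushes the tail.
def split_and_trim(s, sep='.'):
    out = []
    seg_start = None   # index of first non-space char in current segment
    seg_end = 0        # one past the last non-space char seen
    for i, ch in enumerate(s + sep):
        if ch == sep:
            out.append(s[seg_start:seg_end] if seg_start is not None else "")
            seg_start = None
        elif not ch.isspace():
            if seg_start is None:
                seg_start = i
            seg_end = i + 1
    return out

assert split_and_trim(" a . b.c ") == ["a", "b", "c"]
```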
Implements chown and getgid per the POSIX specification and adds corresponding unit tests. getgid is added as it is required by the chown unit tests. This PR will address llvm#165785 Co-authored-by: shubh@DOE <shubhp@mbm3a24.local>
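For reference, the POSIX contract being implemented and tested can be illustrated with Python's os wrappers over the same syscalls (an illustration only, not the LLVM-libc code): chown(path, uid, gid) with -1 leaves that ID unchanged, and getgid() supplies a group ID the caller is guaranteed to own.

```python
import os
import tempfile

with tempfile.NamedTemporaryFile() as f:
    gid = os.getgid()              # caller's real group ID, a known-valid gid
    os.chown(f.name, -1, gid)      # uid -1: keep the owner, only set the group
    assert os.stat(f.name).st_gid == gid
```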
__clc_cbrt functions are declared in clc_cbrt.inc. Rename to .h for consistency with other headers.
llvm#159474 Resubmitting llvm#162876 with fixes, as it broke some buildbots: - Fix comparisons of integer expressions of different signedness - Don't check for specific errnos in tests, as they might not be available on all platforms
In both ScheduleDAGInstrs and MachineScheduler, we call `BufferSize = 0` _reserved_ and `BufferSize = 1` _unbuffered_. This convention stems from the fact that we set `SUnit::hasReservedResource` to true when any of the SUnit's consumed resources has a BufferSize of zero, and set `SUnit::isUnbuffered` to true when any of its consumed resources has a BufferSize of one. However, `SchedBoundary::isUnbufferedGroup` doesn't really follow this convention: it returns true when the resource in question is a `ProcResGroup` and its BufferSize equals **zero** rather than one. This can be really confusing for the reader. This patch renames the function to `isReservedGroup`, in line with the convention mentioned above. NFC.
…6273) As a follow-up to PR#165841, this change addresses `prof_md` metadata loss in AtomicExpandPass when lowering `atomicrmw xchg` to a Load-Linked/Store-Exclusive (LL/SC) loop. This path is distinct from the LSE path addressed previously: PR llvm#165841 (and its tests) used `-mtriple=aarch64-linux-gnu`, which targets a modern **ARMv8.1+** architecture. This architecture supports **Large System Extensions (LSE)**, allowing `atomicrmw` to be lowered directly to a more efficient hardware instruction. This PR (and its tests) uses `-mtriple=aarch64--` or `-mtriple=armv8-linux-gnueabihf`, which indicates an ARMv8.0 or lower architecture that does not support LSE. On these targets, the pass must fall back to synthesizing a manual LL/SC loop using the `ldaxr`/`stxr` instruction pair. Similar to the previous issue, the new conditional branch was failing to inherit the `prof_md` metadata. This PR correctly attaches the branch weights to the newly created branch within the LL/SC loop, ensuring profile information is preserved. Co-authored-by: Jin Huang <jingold@google.com>
…ang (llvm#165996) This patch refactors the trap reason demangling logic in `lldb_private::VerboseTrapFrameRecognizer::RecognizeFrame` into a new public function `clang::CodeGen::DemangleTrapReasonInDebugInfo`. There are two reasons for doing this: 1. In a future patch the logic for demangling needs to be used somewhere else in LLDB, and thus the logic needs refactoring to avoid duplicating code. 2. The logic for demangling shouldn't really be in LLDB anyway because it's a Clang implementation detail, and thus the logic really belongs inside Clang, not LLDB. Unit tests have been added for the new function that demonstrate how to use the new API. The function names recognized by VerboseTrapFrameRecognizer are identical to before. However, this patch isn't NFC because: * The `lldbTarget` library now links against `clangCodeGen`, which it didn't previously. * The LLDB logging output is a little different now. The previous code tried to log failures for an invalid regex pattern and for the `Regex::match` API not returning the correct number of matches. These failure conditions are unreachable via unit testing, so they have been made assertion failures inside the `DemangleTrapReasonInDebugInfo` implementation instead of trying to log them in LLDB. rdar://163230807
…#164271) The `llvm.experimental.guard` intrinsic is a `call`, so its metadata - if present - would be one value (as per `Verifier::visitProfMetadata`). That wouldn't be a correct `branch_weights` metadata. Likely, `GI->getMetadata(LLVMContext::MD_prof)` was always `nullptr`. We can bias away from deopt instead. Issue llvm#147390
Adding casting function objects as a convenience for expressing, e.g., `auto AsDoubles = map_range(RangeOfInts, StaticCastTo<double>)`.
…lvm#166571) Closes [llvm#157291](llvm#157291) --------- Co-authored-by: Victor Chernyakin <chernyakin.victor.j@outlook.com>
This reverts commit 9f5811e. It looks like this caused some test failures: 1. https://lab.llvm.org/buildbot/#/builders/51/builds/26529 2. https://lab.llvm.org/buildbot/#/builders/198/builds/9462
Refactor intrinsic call handling in SelectionDAGBuilder and IRTranslator to prepare the addition of intrinsic support to the callbr instruction, which should then share code with the handling of the normal call instruction.
…0095) When the SPE Previous Branch Target address (FEAT_SPE_PBT) feature is available, an SPE sample combined with this PBT feature has two entries. Arm SPE records the SRC/DEST addresses of the latest sampled branch operation and stores them in the first entry. PBT records the target address of the most recently taken branch in program order before the sampled operation and places it in the second entry. Together they form a chain of two consecutive branches, where: - The previous branch operation (PBT) is always taken. - In the SPE entry, the current source branch (SRC) may be either fall-through or taken, and the target address (DEST) of the recorded branch operation is always what was architecturally executed. However, PBT doesn't provide as much information as SPE does: it lacks information such as the address of the source branch, the branch type, and the prediction bit. These fields are always filled with zero in the PBT entry. Therefore Bolt cannot evaluate the prediction and source branch fields, and leaves them zero during the aggregation process. Tests include a fully expanded example.
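A toy sketch of the aggregation consequence (record layout and field names invented for illustration, not BOLT's structures): the PBT half of the chain carries only a target address, so its source, taken/not-taken detail, and prediction fields stay zero.

```python
# Invented record layout, purely to show which fields PBT leaves empty.
def aggregate_sample(sample):
    spe, pbt = sample["spe"], sample["pbt"]
    return [
        # Previous branch (PBT): always taken, target address only.
        {"from": 0, "to": pbt["target"], "taken": True, "mispred": 0},
        # Sampled branch (SPE): full source/target plus prediction info.
        {"from": spe["src"], "to": spe["dest"],
         "taken": spe["taken"], "mispred": spe["mispred"]},
    ]

chain = aggregate_sample({
    "spe": {"src": 0x4010, "dest": 0x4800, "taken": True, "mispred": False},
    "pbt": {"target": 0x4000},
})
assert chain[0]["from"] == 0   # PBT provides no source address
```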
Rename onlyFirst(Lane|Part)Used to usesFirst(Lane|Part)Only, in line with usesScalars, for clarity.
This PR makes the `fileScopeAsmDecl` matcher public.
…est_checks.py." (llvm#164965) (llvm#166575) This change enables update_llc_test_checks.py to automatically generate MIR checks for RUN lines that use `-stop-before` or `-stop-after` flags, allowing tests to verify intermediate compilation stages (e.g., after instruction selection but before peephole optimizations) alongside the final assembly output. If the `-debug-only` flag is present in the run line, it is considered the main point of interest for testing and the stop flags above are ignored (that is, no MIR checks are generated). This resulted from a scenario where I needed to test two instruction matching patterns, where the later pattern in the peepholer reverts the earlier pattern in the instruction selector, and to distinguish that from the case where the earlier pattern didn't work at all. Initially created by Claude Sonnet 4.5, it was later improved to handle conflicts in MIR <-> ASM prefixes and formatting.
This ports rather useful functionality that was already available in the Translator and was mostly implemented in the BE. Today, if we encounter an unknown intrinsic, we pipe it through and hope for the best, which in practice yields either obtuse ISEL errors or potentially unusable SPIR-V. With this change, if instructed via a newly added `--spv-allow-unknown-intrinsics` flag, we emit allowed intrinsics as calls to extern (import) functions. The test is also mostly lifted from the Translator.
This patch enables the FPU in Arm startup code, which is required to run tests on Arm configurations with hardware floating-point support.
After the original API change to DefaultTimingManager::setOutput() (see 362aa43), users are forced to provide their own implementation of OutputStrategy. However, the default MLIR implementations are usually sufficient. Expose the Text and Json strategies via factory-like methods to avoid the problem in downstream projects.
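A generic sketch of the factory-method shape being described (hypothetical names, not MLIR's actual API): downstream code asks for a ready-made strategy instead of having to subclass.

```python
import json
import sys

class OutputStrategy:
    def emit(self, record):
        raise NotImplementedError

class TextOutput(OutputStrategy):
    def __init__(self, stream=sys.stderr):
        self.stream = stream
    def emit(self, record):
        print(record, file=self.stream)

class JsonOutput(OutputStrategy):
    def __init__(self, stream=sys.stderr):
        self.stream = stream
    def emit(self, record):
        json.dump(record, self.stream)

# Factory-like entry points: the defaults stay usable without subclassing.
def make_text_output():
    return TextOutput()

def make_json_output():
    return JsonOutput()
```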
By rejecting them. Fixes llvm#165555
…unknown offset load/stores (llvm#166752) llvm#166337 replaces large (illegal type) loads/stores with a smaller i32 load/store based on the demanded shifted bits. As these shifts are non-constant, we can't regenerate the PointerInfo data with a fixed offset, so we need to discard the data entirely. Fixes llvm#166744 - post-RA has to reconstruct dependencies after the chains have been stripped and uses pointer info instead, which resulted in some loads being rescheduled earlier than the dependent store because it was thought they didn't alias.
```yaml
    if: github.event.pull_request.draft == false
    runs-on:
      group: compiler-generic-runners
    env:
      svc_acc_org_secret: ${{secrets.CI_GITHUB_TOKEN}}
      input_sha: ${{ github.event.pull_request.head.sha != '' && github.event.pull_request.head.sha || github.sha }}
      input_pr_num: ${{ github.event.pull_request.number != '' && github.event.pull_request.number || 0 }}
      input_pr_url: ${{ github.event.pull_request.html_url != '' && github.event.pull_request.html_url || '' }}
      input_pr_title: ${{ github.event.pull_request.title != '' && github.event.pull_request.title || '' }}
      # set the pipeline name here based on branch name
      pipeline_name: ${{secrets.CI_JENKINS_JOB_NAME}}
      JENKINS_URL: ${{secrets.CI_JENKINS_URL}}
      CONTAINER_IMAGE: ${{ secrets.JENKINS_TRIGGER_DOCKER_IMAGE }}

    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - name: Set environment variable for container image
        run: |
          echo "CONTAINER_IMAGE=${{ secrets.JENKINS_TRIGGER_DOCKER_IMAGE }}" >> $GITHUB_ENV
          echo "CONTAINER_NAME=my_container_${{ github.run_id }}" >> $GITHUB_ENV

      - name: Pull container image
        run: docker pull "${{env.CONTAINER_IMAGE}}"

      - name: Run container
        run: |
          docker run -d --name "${{env.CONTAINER_NAME}}" $CONTAINER_IMAGE sleep infinity
          #docker exec "${{env.CONTAINER_NAME}}" /bin/bash -c "git clone ${{secrets.CI_UTILS_REPO}} ."
          docker exec "${{env.CONTAINER_NAME}}" /bin/bash -c "echo 'Running commands inside the container'"

      - name: Escape pull request title
        run: |
          import json
          import os
          import shlex
          with open('${{ github.event_path }}') as fh:
              event = json.load(fh)
          escaped = event['pull_request']['title']
          with open(os.environ['GITHUB_ENV'], 'a') as fh:
              print(f'PR_TITLE={escaped}', file=fh)
        shell: python3 {0}

      - name: Run Jenkins Cancel Script
        env:
          JENKINS_URL: ${{secrets.CI_JENKINS_URL}}
          JENKINS_USER: ${{secrets.CI_JENKINS_USER}}
          JENKINS_API_TOKEN: ${{secrets.CI_JENKINS_TOKEN}}
          JENKINS_JOB_NAME: ${{secrets.CI_JENKINS_JOB_NAME}}
          PR_NUMBER: ${{ github.event.pull_request.number }}
          COMMIT_HASH: ${{ github.event.after }}
        run: |
          docker exec -e JENKINS_JOB_NAME=${{secrets.CI_JENKINS_JOB_NAME}} -e PR_NUMBER=${{ github.event.pull_request.number }} -e COMMIT_HASH=${{ github.event.after }} -e JENKINS_URL=${{secrets.CI_JENKINS_URL}} -e JENKINS_USER=${{secrets.CI_JENKINS_USER}} -e JENKINS_API_TOKEN=${{secrets.CI_JENKINS_TOKEN}} "${{env.CONTAINER_NAME}}" /bin/bash -c "PYTHONHTTPSVERIFY=0 python3 cancel_previous_build.py"

      # Runs a set of commands using the runners shell
      - name: Getting Event Details
        run: |
          echo $(pwd)
          echo $GITHUB_ENV
          echo $GITHUB_REPOSITORY
          echo $GITHUB_SERVER_URL
          echo "GITHUB_SHA is: $GITHUB_SHA"
          echo "GITHUB_WORKFLOW_SHA is: $GITHUB_WORKFLOW_SHA"
          echo "GITHUB_BASE_REF is: $GITHUB_BASE_REF"
          echo "GITHUB_REF_NAME is: $GITHUB_REF_NAME"
          echo "github.event.pull_request.id is: ${{github.event.pull_request.id}}"
          echo "github.event.pull_request.html_url is: ${{github.event.pull_request.html_url}}"
          echo "github.event.pull_request.number is: ${{github.event.pull_request.number}}"
          echo "github.event.pull_request.url is: ${{github.event.pull_request.url}}"
          echo "github.event.pull_request.issue_url is: ${{github.event.pull_request.issue_url}}"
          echo "github.event.pull_request.head.sha is: ${{github.event.pull_request.head.sha}}"
          echo "github.event.pull_request.base.ref is: ${{github.event.pull_request.base.ref}}"
          echo "github.event.pull_request.merge_commit_sha is: ${{github.event.pull_request.merge_commit_sha}}"
          echo "github.event.pull_request is: ${{github.event.pull_request}}"

      - name: Trigger Jenkins Pipeline
        if: steps.check_changes.outcome != 'failure'
        run: |
          echo "--Running jenkins_api.py with input sha - $input_sha for pull request - $input_pr_url"
          docker exec -e GITHUB_REPOSITORY="$GITHUB_REPOSITORY" -e svc_acc_org_secret="$svc_acc_org_secret" -e input_sha="$input_sha" -e input_pr_url="$input_pr_url" -e pipeline_name="$pipeline_name" \
            -e input_pr_num="$input_pr_num" -e PR_TITLE="$PR_TITLE" -e JENKINS_URL="$JENKINS_URL" -e GITHUB_PAT="$svc_acc_org_secret" "${{env.CONTAINER_NAME}}" \
            /bin/bash -c 'echo \"PR NUM: "$input_pr_num"\" && PYTHONHTTPSVERIFY=0 python3 jenkins_api.py -s \"${JENKINS_URL}\" -jn "$pipeline_name" -ghr "$GITHUB_REPOSITORY" -ghsha "$input_sha" -ghprn "$input_pr_num" -ghpru "$input_pr_url" -ghprt "$PR_TITLE" -ghpat="$svc_acc_org_secret"'

      - name: Stop and remove container
        if: always()
        run: |
          docker stop "${{env.CONTAINER_NAME}}"
          docker rm "${{env.CONTAINER_NAME}}"
```
Check warning (Code scanning / CodeQL): Workflow does not contain permissions (Medium)

Copilot Autofix (AI, 15 days ago):
To fix the problem, add a `permissions:` block at the root level of the workflow to explicitly limit the default token's privileges. For workflows that only need to read repository contents (common for CI pipelines), the minimal specification is `permissions: contents: read`. If any step requires additional capabilities (e.g., updating PRs, writing checks), those can be added as needed, but from the code shown, read-only is likely sufficient. The `permissions:` key should be placed right after the workflow `name:` and before `on:`. No changes are needed to the jobs or steps unless a job needs specific elevated rights.
Implementation:
- Edit `.github/workflows/PSDB-amd-staging.yml`
- Insert `permissions: contents: read` after the `name:` line (line 1) and before `on:` (line 4).
```diff
@@ -1,4 +1,6 @@
 name: Compiler CI PSDB trigger on amd-staging branch
+permissions:
+  contents: read
 
 # Controls when the workflow will run
 on:
```
```yaml
    if: github.event.pull_request.draft == false
    runs-on:
      group: compiler-generic-runners
    env:
      PR_SHA: ${{ github.event.pull_request.head.sha != '' && github.event.pull_request.head.sha || github.sha }}
      PR_NUMBER: ${{ github.event.pull_request.number != '' && github.event.pull_request.number || 0 }}
      PR_URL: ${{ github.event.pull_request.html_url != '' && github.event.pull_request.html_url || '' }}
      PR_TITLE: ${{ github.event.pull_request.title != '' && github.event.pull_request.title || '' }}
      BASE_BRANCH: ${{ github.event.pull_request.base.ref != '' && github.event.pull_request.base.ref || '' }}
      GITHUB_TOKEN: ${{secrets.CI_GITHUB_TOKEN}}

    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - name: Set environment variable for container image
        run: |
          echo "CONTAINER_IMAGE=${{ secrets.BUILDBOT_DOCKER_IMAGE }}" >> $GITHUB_ENV
          echo "CONTAINER_NAME=my_container_${{ github.run_id }}" >> $GITHUB_ENV

      - name: Pull container image
        run: docker pull "${{env.CONTAINER_IMAGE}}"

      - name: Run container
        run: |
          docker run -d --name "${{env.CONTAINER_NAME}}" $CONTAINER_IMAGE sleep infinity
          docker exec "${{env.CONTAINER_NAME}}" /bin/bash -c "echo 'Running commands inside the container'"

      - name: Escape pull request title
        run: |
          import json
          import os
          import shlex
          with open('${{ github.event_path }}') as fh:
              event = json.load(fh)
          escaped = event['pull_request']['title']
          with open(os.environ['GITHUB_ENV'], 'a') as fh:
              print(f'PR_TITLE={escaped}', file=fh)
        shell: python3 {0}

      - name: Trigger Buildbot Build
        run: |
          echo "${{ secrets.BUILDBOT_HOST }}:${{ secrets.BUILDBOT_WORKER_PORT }}"
          docker exec -e PR_TITLE="$PR_TITLE" "${{env.CONTAINER_NAME}}" /bin/bash -c 'buildbot sendchange -W ${{ secrets.BUILDBOT_USER }} -a ${{secrets.BUILDBOT_USER}}:${{secrets.BUILDBOT_PWD}} --master="${{ secrets.BUILDBOT_HOST }}:${{ secrets.BUILDBOT_WORKER_PORT }}" --branch=${{ env.BASE_BRANCH }} --revision=${{ env.PR_SHA }} -p PR_NUMBER:${{ env.PR_NUMBER }} -p PR_TITLE:"$PR_TITLE" -p PR_URL:${{ env.PR_URL }} -p SHA:${{ env.PR_SHA }}'

      - name: Set Initial Status to Pending
        run: |
          docker exec -e PR_SHA=$PR_SHA -e GITHUB_TOKEN=$GITHUB_TOKEN "${{env.CONTAINER_NAME}}" /bin/bash -c "python3 -c \"
          import os
          import requests
          GITHUB_TOKEN = os.getenv('GITHUB_TOKEN')
          TARGET_SHA = os.getenv('PR_SHA')
          print('debug', TARGET_SHA)
          api_url = f'https://api.github.com/repos/AMD-Lightning-Internal/llvm-project/statuses/{TARGET_SHA}'
          headers = {
              'Authorization': f'token {GITHUB_TOKEN}',
              'Content-Type': 'application/json'
          }
          payload = {
              'state': 'pending',
              'context': 'buildbot',
              'description': 'Build is in queue'
          }
          response = requests.post(api_url, json=payload, headers=headers)
          if response.status_code == 201:
              print('Status set to pending successfully.')
          else:
              print(f'Failed to set status: {response.status_code} {response.text}')
          \""

      - name: Poll Buildbot build status
        run: |
          python3 -c "
          import os
          import time
          import requests
          GITHUB_TOKEN = os.getenv('GITHUB_TOKEN')
          BUILD_URL = 'http://${{ secrets.BUILDBOT_HOST }}:${{ secrets.BUILDBOT_MASTER_PORT }}/api/v2/builds'
          TARGET_SHA = os.getenv('PR_SHA')
          print('debug', TARGET_SHA)
          MAX_RETRIES = 10
          RETRY_INTERVAL = 30  # seconds

          def get_build_properties(build_id):
              build_properties_url = f'http://${{ secrets.BUILDBOT_HOST }}:${{ secrets.BUILDBOT_MASTER_PORT }}/api/v2/builds/{build_id}/properties'
              response = requests.get(build_properties_url, headers={'Accept': 'application/json', 'Authorization': f'token {GITHUB_TOKEN}'})
              return response.json()

          for i in range(MAX_RETRIES):
              response = requests.get(BUILD_URL, headers={'Accept': 'application/json'})
              response_json = response.json()
              print(f'Attempt {i + 1}: Buildbot response:', response_json)

              # Check if any build has the target SHA
              builds = response_json.get('builds', [])
              print(builds)
              build_with_sha = None
              for build in builds:
                  build_id = build['buildid']
                  properties = get_build_properties(build_id)
                  #print(properties)
                  #prop = properties.get('revision', [])

                  if 'properties' in properties:
                      print(properties['properties'])
                      if 'revision' in properties['properties'][0]:
                          print(properties['properties'][0])
                      if 'revision' in properties['properties'][0] and properties['properties'][0]['revision'][0] == TARGET_SHA:
                          build_with_sha = build
                          break

              if build_with_sha:
                  print('Build started successfully for SHA:', TARGET_SHA)
                  break
              else:
                  print('Build for SHA not started yet, retrying in', RETRY_INTERVAL, 'seconds')
                  time.sleep(RETRY_INTERVAL)
          else:
              print('Build did not start for SHA:', TARGET_SHA, 'after maximum retries')
              exit(1)
          "

      - name: Stop and remove container
        if: always()
        run: |
          docker stop "${{env.CONTAINER_NAME}}"
          docker rm "${{env.CONTAINER_NAME}}"
```
Check warning (Code scanning / CodeQL): Workflow does not contain permissions (Medium)

Copilot Autofix (AI, 15 days ago):
The fix is to add a `permissions:` block at the top level of the workflow (before or after `on:`), or at the job level if there is only one job, that explicitly grants only the minimum privileges required: specifically, `contents: read` (for accessing code and minimal repo info) and `statuses: write` (for setting commit status via the API), and nothing more. This block can be added directly after the `name:` and `on:` section. No changes to the logic of the steps are required, and no new imports are needed. Only the YAML workflow file is updated; no code is touched outside this file.
```diff
@@ -5,6 +5,9 @@
     branches: [amd-debug]
     types: [opened, reopened, synchronize, ready_for_review]
 
+permissions:
+  contents: read
+  statuses: write
 
 jobs:
   trigger-build:
```
```yaml
    runs-on:
      group: compiler-generic-runners

    steps:
      - name: Set environment variable for container image
        run: |
          echo "CONTAINER_IMAGE=${{ secrets.JENKINS_TRIGGER_DOCKER_IMAGE }}" >> $GITHUB_ENV
          echo "CONTAINER_NAME=my_container_${{ github.run_id }}" >> $GITHUB_ENV

      - name: Pull container image
        run: docker pull "${{env.CONTAINER_IMAGE}}"

      - name: Run container
        run: |
          docker run -d --name "${{env.CONTAINER_NAME}}" $CONTAINER_IMAGE sleep infinity
          docker exec "${{env.CONTAINER_NAME}}" /bin/bash -c "echo 'Running commands inside the container'"

      - name: Trigger compute-rocm-dkms-afar job
        run: |
          docker exec "${{env.CONTAINER_NAME}}" /bin/bash -c "python -c \"
          import requests
          import time
          from requests.auth import HTTPBasicAuth

          jenkins_user = '${{ secrets.CI_JENKINS_USER }}'
          jenkins_token = '${{ secrets.ROCM_JENKINS_CI_TOKEN }}'
          jenkins_host = '${{ secrets.ROCM_JENKINS_HOST }}'
          jenkins_job = '${{ secrets.ROCM_JENKINS_OSDB_JOB }}'

          jenkins_url = f'{jenkins_host}/job/{jenkins_job}/buildWithParameters'

          response = requests.post(jenkins_url, auth=HTTPBasicAuth(jenkins_user, jenkins_token))

          if response.status_code == 201:
              print('Jenkins job triggered successfully!')
              queue_url = response.headers.get('Location')
              if queue_url:
                  print(f'Queue URL: {queue_url}')
                  print(f'Getting build URL (max 5 attempts with 10 seconds interval)...')
                  # Poll the queue item to get the build number, limited to 5 attempts
                  max_attempts = 5
                  attempts = 0
                  while attempts < max_attempts:
                      queue_response = requests.get(queue_url + 'api/json', auth=HTTPBasicAuth(jenkins_user, jenkins_token))
                      queue_data = queue_response.json()
                      if 'executable' in queue_data:
                          build_number = queue_data['executable']['number']
                          build_url = f'{jenkins_host}/job/{jenkins_job}/{build_number}/'
                          print(f'Build URL: {build_url}')
                          break
                      attempts += 1
                      time.sleep(10)  # Wait for 10 seconds before polling again
                  else:
                      print('Exceeded maximum attempts to get the build URL. The trigger happened, so not failing the workflow')
              else:
                  print('Build URL not found in the response headers.')

          elif response.status_code == 200:
              print('Request was successful, but check the response content for details.')
              print(response.text)
          else:
              print(f'Failed to trigger Jenkins job. Status code: {response.status_code}')
          \""

      - name: Stop and remove container
        if: always()
        run: |
          docker stop "${{env.CONTAINER_NAME}}"
          docker rm "${{env.CONTAINER_NAME}}"
```
Check warning (Code scanning / CodeQL): Workflow does not contain permissions (Medium)

Copilot Autofix (AI, 15 days ago):
To fix the issue, set an explicit `permissions` block in the workflow YAML file. Since the workflow does not interact with the repository (it does not push commits, create releases, or modify issues or PRs), you can set the minimal permission that is recommended: `contents: read`. Place the block either at the top level (before the `jobs:` key, to apply to all jobs), or inside the `trigger_jenkins` job (to restrict just that job). The best practice is to set it at the root level unless you need different permissions for different jobs.

Specifically, in `.github/workflows/compute-rocm-dkmd-afar-trigger.yml`, insert:

```yaml
permissions:
  contents: read
```

between the `on:` block and the `jobs:` block (i.e. after line 8 and before line 9). No new methods or imports are required since this is a configuration change.
```diff
@@ -6,6 +6,10 @@
     - amd-staging
   workflow_dispatch: # This allows manual triggering of the workflow
 
+
+permissions:
+  contents: read
+
 jobs:
   trigger_jenkins:
     runs-on:
```
No description provided.