Update minimaxm2.5-fp8-mi300x-vllm vLLM image to v0.20.2 #1351

Klaud-Cold wants to merge 1 commit into
Conversation
…
Co-authored-by: Klaud Cold <Klaud-Cold@users.noreply.github.com>
Thanks for the contribution! For vLLM & SGLang, please ensure that your recipe is similar to the official vLLM recipes and/or the SGLang cookbook. If it is not, please create a PR there first before we can merge your single-node PR into the master branch. Let's ensure that the documentation is first class so that the entire ML community can benefit from your hard work! Thank you.

PR authors are responsible for ensuring that after merging, all GitHub Action jobs fully pass. Much of the time, failures are just flakes, and simply re-running the failed jobs will fix them. If re-running failed jobs is attempted, PR authors are responsible for ensuring they pass. See GitHub's docs on re-running failed jobs: https://docs.github.com/en/actions/how-tos/manage-workflow-runs/re-run-workflows-and-jobs#re-running-failed-jobs-in-a-workflow

As a rule of thumb, PR authors should request a review and get a PR approval from the respective company's CODEOWNERS before requesting a review from core maintainers. If additional help is needed, PR authors can reach out to core maintainers over Slack.
  pr-link: https://github.com/SemiAnalysisAI/InferenceX/pull/1310

+ - config-keys:
+   - minimaxm2.5-fp8-mi300x-vllm
🟡 The new perf-changelog.yaml entry has pr-link: https://github.com/SemiAnalysisAI/InferenceX/pull/XXX with a literal XXX placeholder rather than the actual PR number. Please replace XXX with 1351 before merging so the changelog link resolves to this PR instead of 404.
Extended reasoning...
What's wrong
The new entry added to perf-changelog.yaml at the end of the file contains an unresolved placeholder in the pr-link field:
- config-keys:
    - minimaxm2.5-fp8-mi300x-vllm
  description:
    - "Update vLLM ROCm image from v0.16.0 to v0.20.2"
  pr-link: https://github.com/SemiAnalysisAI/InferenceX/pull/XXX

The literal string XXX was left in place where the PR number should be.
Why this matters
Every other entry in perf-changelog.yaml references a real PR number (e.g. /pull/1303, /pull/1304, /pull/1308, /pull/1310). The pr-link is the changelog's way of tracing a config change back to its motivating PR, so consumers (humans browsing the changelog, or any tooling that follows these links) will be sent to https://github.com/SemiAnalysisAI/InferenceX/pull/XXX — a 404. The provenance for this image bump is effectively lost the moment the PR is merged and someone tries to follow the link.
Step-by-step proof
1. Open `perf-changelog.yaml` at line 2348 (the new block added by this PR).
2. Inspect the `pr-link:` value; it is the verbatim string `https://github.com/SemiAnalysisAI/InferenceX/pull/XXX`.
3. Scroll up to any prior entry, e.g. line 2345 (`pr-link: https://github.com/SemiAnalysisAI/InferenceX/pull/1310`) or earlier entries pointing to `/pull/1303`, `/pull/1304`, `/pull/1308`; all use real, numeric PR IDs.
4. This PR is "Update minimaxm2.5-fp8-mi300x-vllm vLLM image to v0.20.2" #1351 (per the PR metadata), so the intended value is `/pull/1351`.
5. As shipped, the link resolves to GitHub's "Page not found" rather than to this PR.
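The placeholder the steps above uncover is easy to catch mechanically. Below is a minimal sketch, assuming `perf-changelog.yaml` keeps each `pr-link:` on its own line as quoted in this review; the helper name `find_bad_pr_links` and the sample text are illustrative, not part of this repo's tooling:

```python
import re

# A well-formed pr-link must end in a numeric PR id.
PR_LINK_RE = re.compile(
    r"^https://github\.com/SemiAnalysisAI/InferenceX/pull/(\d+)$"
)

def find_bad_pr_links(yaml_text):
    """Return (line_number, value) pairs for pr-link lines whose value
    is not a numeric PR URL (e.g. a literal XXX placeholder)."""
    bad = []
    for lineno, line in enumerate(yaml_text.splitlines(), start=1):
        stripped = line.strip()
        if not stripped.startswith("pr-link:"):
            continue
        value = stripped[len("pr-link:"):].strip()
        if not PR_LINK_RE.match(value):
            bad.append((lineno, value))
    return bad

# The entry flagged in this review would be reported with its line number.
sample = """\
- config-keys:
    - minimaxm2.5-fp8-mi300x-vllm
  pr-link: https://github.com/SemiAnalysisAI/InferenceX/pull/1310
- config-keys:
    - minimaxm2.5-fp8-mi300x-vllm
  pr-link: https://github.com/SemiAnalysisAI/InferenceX/pull/XXX
"""
print(find_bad_pr_links(sample))
```

A check like this could run in CI so a placeholder never survives to merge.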
Fix
Replace the trailing XXX with 1351:
- pr-link: https://github.com/SemiAnalysisAI/InferenceX/pull/XXX
+ pr-link: https://github.com/SemiAnalysisAI/InferenceX/pull/1351

Severity
Calling this a nit: perf-changelog.yaml is documentation/metadata, so there is no runtime impact, no failing CI, and no broken benchmark. But it is a guaranteed-broken link added by this very PR, and the fix is a single-token substitution, so it is well worth catching before merge.
see unofficial run visualizer at https://inferencex.semianalysis.com/inference?unofficialRun=25763478614
Summary
Updates the `minimaxm2.5-fp8-mi300x-vllm` image from `vllm/vllm-openai-rocm:v0.16.0` to `vllm/vllm-openai-rocm:v0.20.2`.

Ref #1154
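For context, the change itself amounts to a single image tag bump in the recipe config. As an illustrative fragment (the exact key name in the config file is an assumption, not quoted from this PR's diff):

```diff
- image: vllm/vllm-openai-rocm:v0.16.0
+ image: vllm/vllm-openai-rocm:v0.20.2
```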
Generated with Claude Code