[AMD/ROCM] ATOM support for new models: Kimi-K2.5 FP4, GLM-5 FP8, and MiniMax-M2.5#963
Conversation
….5 FP8 Signed-off-by: seungrokj <seungrok.jung@amd.com>
Thanks for the contribution! For vLLM & SGLang, please ensure that your recipes are similar to the official vLLM recipes and/or the SGLang cookbook. If they are not, please create a PR first before we can merge your PR into the master branch. Let's ensure that the documentation is first class so that the entire ML community can benefit from your hard work! Thank you
```shell
    --model $MODEL \
    --server-port $PORT \
    -tp $TP \
    --kv_cache_dtype fp8 $CALCULATED_MAX_MODEL_LEN $EP \
    --trust-remote-code \
    > $SERVER_LOG 2>&1 &

SERVER_PID=$!

# Wait for server to be ready
```
🔴 All three new non-MTP ATOM benchmark scripts (glm5_fp8_mi355x_atom.sh, kimik2.5_fp4_mi355x_atom.sh, minimaxm2.5_fp8_mi355x_atom.sh) are missing the BLOCK_SIZE=${BLOCK_SIZE:-16} variable and --block-size $BLOCK_SIZE flag present in all other non-MTP ATOM scripts. Without this, the ATOM server uses its framework default block size, which may differ from the tuned value of 16 and could cause performance degradation or OOM at high concurrency.
Extended reasoning...
What the bug is and how it manifests
All three new non-MTP ATOM benchmark scripts introduced in this PR are missing the --block-size parameter when launching the ATOM inference server. The existing non-MTP ATOM scripts (dsr1_fp8_mi355x_atom.sh, dsr1_fp4_mi355x_atom.sh, gptoss_fp4_mi355x_atom.sh) all set BLOCK_SIZE=${BLOCK_SIZE:-16} and pass --block-size $BLOCK_SIZE to the server. The three new scripts omit this entirely, so the ATOM framework will use whatever block size it defaults to internally.
The specific code path that triggers it
In all three new scripts (e.g. `glm5_fp8_mi355x_atom.sh` lines 46–55), the server launch invocation is:

```shell
python3 -m atom.entrypoints.openai_server --model $MODEL --server-port $PORT -tp $TP --kv_cache_dtype fp8 $CALCULATED_MAX_MODEL_LEN $EP --trust-remote-code > $SERVER_LOG 2>&1 &
```

Compare with `dsr1_fp8_mi355x_atom.sh` (lines 45, 51):
```shell
BLOCK_SIZE=${BLOCK_SIZE:-16}
...
--block-size $BLOCK_SIZE > $SERVER_LOG 2>&1 &
```

Why existing code doesn't prevent it
There is no default --block-size guard in the shared benchmark_lib.sh — each individual script is responsible for passing this flag. The three new scripts simply don't include it.
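Since each script is on its own, one way to prevent this class of omission would be a shared guard in `benchmark_lib.sh`. The helper below is purely a suggestion; neither the function name nor its presence in the library is part of the repo today:

```shell
# Hypothetical guard for benchmark_lib.sh: fail fast if a non-MTP ATOM
# script forgets to set BLOCK_SIZE before launching the server.
require_block_size() {
  if [ -z "${BLOCK_SIZE:-}" ]; then
    echo "ERROR: BLOCK_SIZE is unset; non-MTP ATOM scripts must pass --block-size" >&2
    return 1
  fi
}
```

Each non-MTP ATOM script would call `require_block_size` right before the server launch line, turning a silent default into an immediate, explicit failure.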
Addressing the refutation
One verifier noted that the MTP ATOM scripts (dsr1_fp4_mi355x_atom_mtp.sh and dsr1_fp8_mi355x_atom_mtp.sh) added in PR #673 also omit --block-size. However, those scripts use --method mtp for speculative decoding, which is architecturally distinct from standard inference and may have different (or conflicting) block-size requirements. The new scripts in this PR are standard (non-MTP) inference scripts and should logically follow the non-MTP pattern, not the MTP pattern. The omission in MTP scripts appears intentional for MTP-specific reasons, not a general deprecation of the flag.
Impact
ATOM's block size directly controls KV cache page allocation. If the framework default differs from 16, it can cause increased memory fragmentation, reduced KV cache utilization at high concurrency (conc-end: 128 or 256 in these configs), or OOM errors during sweep benchmarks. Given that the concurrency ranges here (up to 256 for minimaxm2.5) exceed those in the original ATOM scripts, the risk is higher.
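To make the fragmentation point concrete, here is a back-of-the-envelope sketch. This is pure arithmetic, not ATOM code; the ISL comes from the config above, and the mid-decode token count and candidate default block sizes are illustrative:

```python
def kv_pages(tokens: int, block_size: int) -> tuple[int, int]:
    """Pages needed for one sequence's KV cache and unused slots in the last page."""
    pages = -(-tokens // block_size)  # ceiling division
    return pages, pages * block_size - tokens

# One sequence at ISL=1024, mid-decode after 4097 generated tokens.
tokens = 1024 + 4097
for bs in (16, 64):
    pages, slack = kv_pages(tokens, bs)
    print(f"block_size={bs}: {pages} pages, {slack} wasted slots")
```

Worst-case per-sequence waste is `block_size - 1` slots, so at CONC=128 a hypothetical default of 64 can strand up to 128 × 63 = 8064 token slots of KV cache versus 128 × 15 = 1920 with the tuned value of 16.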
Step-by-step proof
1. User runs the `glm5-fp8-mi355x-atom` benchmark with ISL=1024, OSL=8192, TP=8, CONC=128.
2. The harness calls `glm5_fp8_mi355x_atom.sh`; no `BLOCK_SIZE` variable is set.
3. The server launches without `--block-size`; ATOM uses its internal default (e.g. 32 or 64).
4. At CONC=128 with OSL=8192, KV cache demand is high; a larger block size leads to more wasted space per page and potentially OOM, while the tuned value of 16 was established specifically for MI355X ATOM.
5. For comparison, `dsr1_fp8_mi355x_atom.sh` on the same hardware would launch with `--block-size 16` and succeed.
How to fix
Add the following two lines to each new script, matching the pattern in the existing non-MTP ATOM scripts:
```shell
BLOCK_SIZE=${BLOCK_SIZE:-16}
```

And add `--block-size $BLOCK_SIZE` to the `python3 -m atom.entrypoints.openai_server` invocation in all three new scripts.
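Put together, the patched launch section would look roughly like this. This is a sketch, not the actual script: the default values below are illustrative stand-ins for values the real scripts already define, and `$CALCULATED_MAX_MODEL_LEN`/`$EP` are omitted for brevity:

```shell
# Illustrative defaults (the real glm5_fp8_mi355x_atom.sh sets these itself).
MODEL=${MODEL:-placeholder/GLM-5-FP8}   # hypothetical model id
PORT=${PORT:-8000}
TP=${TP:-8}

# The missing tuned default, matching the existing non-MTP ATOM scripts:
BLOCK_SIZE=${BLOCK_SIZE:-16}

# Assembled into a variable here so the sketch is inspectable without
# starting a server; the real script runs this in the background with
# `> $SERVER_LOG 2>&1 &`.
LAUNCH_CMD="python3 -m atom.entrypoints.openai_server \
  --model $MODEL --server-port $PORT -tp $TP \
  --kv_cache_dtype fp8 --trust-remote-code \
  --block-size $BLOCK_SIZE"
echo "$LAUNCH_CMD"
```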
```yaml
  description:
    - "New model support on ATOM framework"
    - "Kimi-K2.5 FP4, GLM-5 FP8, and MiniMax-M2.5 FP8 configs added for MI355X ATOM"
  pr-link: https://github.com/SemiAnalysisAI/InferenceX/pull/963
```
🟡 The perf-changelog.yaml entry added in this PR references the wrong PR number: pr-link points to pull/954, but this is PR #963. PR #954 is a separate merged PR ('kimik2.5 int4 changes from mi355x'). The link should be updated to https://github.com/SemiAnalysisAI/InferenceX/pull/963 to correctly attribute the three ATOM configs (kimik2.5-fp4-mi355x-atom, glm5-fp8-mi355x-atom, minimaxm2.5-fp8-mi355x-atom) to this PR.
Extended reasoning...
Bug description: The last entry added to perf-changelog.yaml in this PR uses an incorrect pr-link value. The link points to https://github.com/SemiAnalysisAI/InferenceX/pull/954, which is a different, already-merged PR ('mi325x: port kimik2.5 int4 changes from mi355x PR #950', commit cec542a). The current PR being submitted is #963.
Code path: In perf-changelog.yaml at the final entry (around line 1119), the newly added block reads:
```yaml
- config-keys:
    - kimik2.5-fp4-mi355x-atom
    - glm5-fp8-mi355x-atom
    - minimaxm2.5-fp8-mi355x-atom
  description:
    - "New model support on ATOM framework"
    - "Kimi-K2.5 FP4, GLM-5 FP8, and MiniMax-M2.5 FP8 configs added for MI355X ATOM"
  pr-link: https://github.com/SemiAnalysisAI/InferenceX/pull/954
```

Why existing code doesn't prevent it: The changelog is a static YAML file with no automated validation of PR numbers against the current PR context. There is no CI check that cross-references the pr-link value with the actual PR number being submitted, making copy-paste errors like this easy to introduce.
Impact: While this is a documentation-only error with no functional impact, the changelog serves as an important audit trail for which PR introduced which configuration. With the wrong link, a future developer investigating kimik2.5-fp4-mi355x-atom, glm5-fp8-mi355x-atom, or minimaxm2.5-fp8-mi355x-atom would be led to PR #954, which is about an entirely different set of changes (MI325X INT4 configs), creating false traceability.
Step-by-step proof:
1. This PR is "[AMD/ROCM] ATOM support for new models: Kimi-K2.5 FP4, GLM-5 FP8, and MiniMax-M2.5" #963, as confirmed by the PR metadata (`<pr number="963">`).
2. The git log shows commit `cec542a` with message 'mi325x: port kimik2.5 int4 changes from mi355x PR #950' ("[AMD/ROCm] kimik2.5 int4 mi355x: upgrade to vllm-openai-rocm:v0.18.0"); this is PR #954 ("mi325: update kimi int4").
3. The last `perf-changelog.yaml` entry uses `pr-link: https://github.com/SemiAnalysisAI/InferenceX/pull/954`.
4. The correct value should be `https://github.com/SemiAnalysisAI/InferenceX/pull/963`.
Fix: Change the pr-link in the final changelog entry from .../pull/954 to .../pull/963.
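A lightweight CI guard could catch this mismatch automatically. The sketch below is hypothetical; the `pr-link` layout matches `perf-changelog.yaml` as quoted above, but no such check exists in the repo, and in real CI the PR number would come from the CI event payload:

```python
import re

def check_last_pr_link(changelog_text: str, pr_number: int) -> bool:
    """Return True if the most recent pr-link in the changelog matches pr_number."""
    links = re.findall(r"pr-link:\s*\S*/pull/(\d+)", changelog_text)
    return bool(links) and int(links[-1]) == pr_number

# Minimal excerpt mimicking the entry added in this PR.
changelog = """\
- config-keys:
    - kimik2.5-fp4-mi355x-atom
  pr-link: https://github.com/SemiAnalysisAI/InferenceX/pull/954
"""

# With the current PR being #963, the stale link to #954 is flagged.
print(check_last_pr_link(changelog, 963))
```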
Hi @functionstackx @cquil11
This PR adds ATOM framework support for the models listed in the title.
Docker image will be ready shortly.
cc. @ChuanLi1101 @andyluo7 @chunfangamd
Regards,
Seungrok