Fix: broken/missing 8-bit inference in tiled QMV path #6
Merged
Geramy merged 1 commit into NripeshN:rocm-support on Apr 23, 2026
Conversation
Add explicit tiled QMV launch cases for 8-bit affine quantization in the ROCm quantized matmul path. This fixes 8-bit models being left off the tiled fast path and restores correct, faster decode behavior for tested Qwen 8-bit models.
Collaborator
I have merged this, looks good, thank you.
bong-water-water-bong added a commit to bong-water-water-bong/lemon-mlx-engine that referenced this pull request on Apr 23, 2026:
Extends the chat+HTTP smoke matrix with an 8-bit model so the shared quantized-matmul dispatch stays covered on every PR. Motivated by NripeshN/mlx#6, which fixed broken 8-bit tiled QMV launches on ROCm that produced garbage output on Qwen3.5/3.6/1.7B 8-bit -- a regression our existing 4-bit-only coverage would not have caught. GH free runners are CPU (Linux) + Metal (macOS); the ROCm hot-path still needs self-hosted hardware, but the shared dispatch code lives on both backends so this is worth running here. Also bumps the job timeout 15 -> 25 min to absorb the extra ~1.8 GB cold-cache download, and extends the HF cache key with the new model id so it gets warm-restored on subsequent runs.
Summary
I was trying different Qwen3/3.5/3.6 8-bit models and the output was garbage characters. I did a little digging and I think this simple code can be added. (Even with this, the dispatch still only covers the 4-bit and 8-bit paths.)
This adds the missing tiled 8-bit ROCm launches for bf16 and fp16 so 8-bit inference uses the correct compute path.
Without these, 8-bit models were not using the tiled fast path correctly and were producing garbage output - both in diagnose and chat. On my system this showed up most clearly with Qwen 8-bit models, where decode behavior was incorrect until the tiled 8-bit path was wired up.
So, before the fix:
- 8-bit was going into tiled dispatch
- but 8-bit tiled launches were not explicitly wired up
- result: bad behavior / garbage output
After the fix:
- 8-bit still goes into tiled dispatch
- but now there are actual 8-bit tiled launch cases there
- result: correct output
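The failure mode above can be sketched as a dispatch switch that only had explicit 4-bit cases, so 8-bit inputs fell through. This is a minimal illustrative sketch, not MLX's actual code: the function and kernel names (`select_tiled_qmv_kernel`, `qmv_tiled_*`) and the `Dtype` enum are hypothetical stand-ins for the real ROCm launch-selection logic.

```cpp
#include <cassert>
#include <stdexcept>
#include <string>

// Hypothetical stand-in for the half-precision dtypes the tiled path handles.
enum class Dtype { bf16, fp16 };

// Sketch of the dispatch pattern this PR fixes: before the fix, only the
// 4-bit cases existed, so 8-bit inputs reaching the tiled path had no
// matching launch and produced garbage. The fix adds the 8-bit cases.
std::string select_tiled_qmv_kernel(int bits, Dtype dtype) {
  switch (bits) {
    case 4:
      return dtype == Dtype::bf16 ? "qmv_tiled_4bit_bf16"
                                  : "qmv_tiled_4bit_fp16";
    case 8:  // the previously missing launch cases for 8-bit affine quant
      return dtype == Dtype::bf16 ? "qmv_tiled_8bit_bf16"
                                  : "qmv_tiled_8bit_fp16";
    default:
      throw std::invalid_argument("unsupported quantization width");
  }
}
```

The point is only that 8-bit now selects a real tiled launch for both bf16 and fp16 instead of falling through to an incorrect path.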
Tested models:
mlx-community/Qwen3.5-27B-8bit
mlx-community/Qwen3.6-27B-8bit
mlx-community/Qwen3-1.7B-8bit
I haven't had a chance to try other model families, though I can't see why these changes would be wrong for them.