
Fix: broken/missing 8-bit inference in tiled QMV path#6

Merged
Geramy merged 1 commit into NripeshN:rocm-support from soloish90:fix/rocm-qmv-tiled-8bit
Apr 23, 2026

Conversation

@soloish90

Summary

I was trying various Qwen3/3.5/3.6 8-bit models and the output was garbage characters. After a little digging, I think this simple code addition fixes it (even then, it still only covers the 4-bit and 8-bit paths).

This adds the missing tiled 8-bit ROCm launches for bf16 and fp16 so 8-bit inference uses the correct compute path.

Without these, 8-bit models were not using the tiled fast path correctly and were producing garbage output, both in diagnose and in chat. On my system this showed up most clearly with Qwen 8-bit models, where decode behavior was incorrect until the tiled 8-bit path was wired up.

Before the fix:
- 8-bit was routed into tiled dispatch
- but the 8-bit tiled launches were not explicitly wired up
- result: bad behavior / garbage output

After the fix:
- 8-bit still goes into tiled dispatch
- but there are now actual 8-bit tiled launch cases
- result: correct output

Tested on

  • Framework Desktop (Strix Halo)
  • CachyOS
  • ROCm 7.2.2
  • AMD Radeon 8060S / gfx1151

Tested models:

mlx-community/Qwen3.5-27B-8bit
mlx-community/Qwen3.6-27B-8bit
mlx-community/Qwen3-1.7B-8bit

I haven't had a chance to try other model families, though I can't see why these changes would be model-specific.

Add explicit tiled QMV launch cases for 8-bit affine quantization in the
ROCm quantized matmul path.

This fixes 8-bit models being left off the tiled fast path and restores
correct, faster decode behavior for tested Qwen 8-bit models.
@soloish90 soloish90 closed this Apr 23, 2026
@soloish90 soloish90 reopened this Apr 23, 2026
@soloish90 soloish90 closed this Apr 23, 2026
@soloish90 soloish90 reopened this Apr 23, 2026
@soloish90 soloish90 changed the base branch from main to rocm-support April 23, 2026 02:40
@Geramy Geramy merged commit 516b5a1 into NripeshN:rocm-support Apr 23, 2026
7 of 8 checks passed
@Geramy
Collaborator

Geramy commented Apr 23, 2026

I have merged this, looks good, thank you.

bong-water-water-bong added a commit to bong-water-water-bong/lemon-mlx-engine that referenced this pull request Apr 23, 2026
Extends the chat+HTTP smoke matrix with an 8-bit model so the shared
quantized-matmul dispatch stays covered on every PR. Motivated by
NripeshN/mlx#6, which fixed broken 8-bit tiled QMV launches on ROCm
that produced garbage output on Qwen3.5/3.6/1.7B 8-bit -- a regression
our existing 4-bit-only coverage would not have caught.

GH free runners are CPU (Linux) + Metal (macOS); the ROCm hot-path
still needs self-hosted hardware, but the shared dispatch code lives
on both backends so this is worth running here.

Also bumps the job timeout 15 -> 25 min to absorb the extra ~1.8 GB
cold-cache download, and extends the HF cache key with the new model
id so it gets warm-restored on subsequent runs.