
x/mlxrunner: apply config.json per-tensor quant overrides for mixed-precision MoE (#15744)

Closed
jodagreyhame wants to merge 2 commits into ollama:main from jodagreyhame:pr2/mlxrunner-quant-config-overrides

Conversation


@jodagreyhame jodagreyhame commented Apr 22, 2026

Summary

Fixes #15746: panic: runtime error: index out of range [0] with length 0 in SparseMoE.Forward when running mlx-lm mixed-precision NVFP4 MoE models imported via ollama create --experimental. Concretely: Qwen 3.6 35B-A3B NVFP4 crashes on the first token without this change.

Root cause: mlx-lm stores per-path quantisation overrides in config.json's quantization block, not in the tensor blob __metadata__. Ollama's MLX runner only reads blob metadata, which ollama create fills from the global quant params. The MoE router gate (stored as affine 8-bit, BF16 scales+biases, group_size 64) is therefore fed to the NVFP4 dequant kernel at the global group_size 16, producing a zero-shape output; Argpartition on the zero-shape tensor panics.

This PR makes the runner read config.json's quant block, apply per-path overrides to tensorQuant, and route each linear/embedding layer through the correct dequant kernel.

Stacking and prior art

Depends on the naming-recognition PR (load-time plural aux acceptance). Reviewers who want to look at this PR in isolation can use the base branch; the CI run and the regression guard below both require the naming PR's changes to be present first.

Depends on: #15743

Extends #15409 ("mlx: mixed-precision quant and capability detection improvements"). That PR put the per-tensor-quant-metadata machinery in place on the assumption that overrides live in the blob __metadata__. For mlx-lm-produced imports, overrides actually live in config.json's quantization block and are not round-tripped into blob __metadata__, so the override path introduced by #15409 is never populated for those models. This PR plugs config.json in as the second source.

This PR does not fix #15632 (qwen3.6:35b-a3b-nvfp4 fails to load: layer 0 missing linear attention projections) — that failure is on a different code path (attention-tensor loading, before the quant-metadata code runs).

What changed

TensorQuantInfo carries explicit bits and mode

x/mlxrunner/model/root.go

type TensorQuantInfo struct {
    QuantType string
    GroupSize int
    Bits      int    // NEW — 0 = inherit from QuantType lookup
    Mode      string // NEW — "" = inherit from QuantType lookup
}

Zero-value of the new fields preserves existing behaviour: callers that set only QuantType/GroupSize get the same result as before.

Resolver honours the new fields

x/mlxrunner/model/quant.go — three lines added to TensorQuantParams. When tq.Bits != 0, use it instead of QuantizationParams(tq.QuantType).bits. Same for tq.Mode.
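The lookup-with-override behaviour can be sketched as follows. Names are reconstructed from the PR text, not copied from the actual source, and quantizationParams stands in for the real QuantizationParams lookup with illustrative values:

```go
package main

import "fmt"

// TensorQuantInfo mirrors the struct shown above.
type TensorQuantInfo struct {
	QuantType string
	GroupSize int
	Bits      int    // 0 = inherit from QuantType lookup
	Mode      string // "" = inherit from QuantType lookup
}

// quantizationParams stands in for the existing QuantizationParams lookup
// (the values here are illustrative, not the real table).
func quantizationParams(quantType string) (bits, groupSize int, mode string) {
	if quantType == "NVFP4" {
		return 4, 16, "nvfp4"
	}
	return 4, 64, "affine"
}

// tensorQuantParams resolves the effective dequant parameters: explicit
// non-zero Bits/Mode/GroupSize win over the QuantType-derived defaults,
// and zero values fall through to the old lookup behaviour.
func tensorQuantParams(tq TensorQuantInfo) (bits, groupSize int, mode string) {
	bits, groupSize, mode = quantizationParams(tq.QuantType)
	if tq.GroupSize != 0 {
		groupSize = tq.GroupSize
	}
	if tq.Bits != 0 { // NEW field
		bits = tq.Bits
	}
	if tq.Mode != "" { // NEW field
		mode = tq.Mode
	}
	return
}

func main() {
	// The MoE gate case: global NVFP4 default, per-tensor affine 8-bit g64.
	b, g, m := tensorQuantParams(TensorQuantInfo{QuantType: "NVFP4", GroupSize: 64, Bits: 8, Mode: "affine"})
	fmt.Println(b, g, m) // 8 64 affine
}
```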

New: config.json quant override parser

x/mlxrunner/model/config_quant.go (new)

func readConfigQuantOverrides(m *manifest.ModelManifest) (
    TensorQuantInfo,             // globals
    map[string]*TensorQuantInfo, // per-path overrides, keyed by <path>.weight
    error,
)
  • Reads config.json via manifest.ReadConfig. Returns zero values + nil error on missing/malformed config (silent fallback).
  • Accepts both "quantization" and "quantization_config" as the top-level key — mlx-lm writes both, depending on version.
  • Scalar children of the block → globals.
  • Object children whose keys are dotted module paths → per-path overrides keyed as <path>.weight.
  • Mode rule: if override specifies mode, use it; if omitted, use "affine". This matches mlx.nn.Linear.to_quantized's default (verified by reading the MLX source).
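The rules above can be sketched as follows. The type and function names are illustrative (not the PR's actual identifiers), and only the behaviours listed above are modelled:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// quantOverride and parseQuantBlock are illustrative names.
type quantOverride struct {
	GroupSize int
	Bits      int
	Mode      string
}

// parseQuantBlock splits config.json's quantization block into scalar
// globals and per-path overrides keyed "<path>.weight".
func parseQuantBlock(raw []byte) (globals quantOverride, perPath map[string]quantOverride, err error) {
	var cfg map[string]json.RawMessage
	if err = json.Unmarshal(raw, &cfg); err != nil {
		return
	}
	block, ok := cfg["quantization"] // mlx-lm writes either key
	if !ok {
		block, ok = cfg["quantization_config"]
	}
	if !ok {
		return // no quant block: silent fallback to zero values
	}
	var entries map[string]json.RawMessage
	if err = json.Unmarshal(block, &entries); err != nil {
		return
	}
	perPath = map[string]quantOverride{}
	for key, v := range entries {
		if len(v) > 0 && v[0] == '{' { // object child: dotted module path
			var o struct {
				GroupSize int     `json:"group_size"`
				Bits      int     `json:"bits"`
				Mode      *string `json:"mode"`
			}
			if json.Unmarshal(v, &o) != nil {
				continue
			}
			ov := quantOverride{GroupSize: o.GroupSize, Bits: o.Bits, Mode: "affine"}
			if o.Mode != nil { // omitted mode defaults to "affine"
				ov.Mode = *o.Mode
			}
			perPath[key+".weight"] = ov
			continue
		}
		switch key { // scalar children feed the globals
		case "group_size":
			json.Unmarshal(v, &globals.GroupSize)
		case "bits":
			json.Unmarshal(v, &globals.Bits)
		case "mode":
			json.Unmarshal(v, &globals.Mode)
		}
	}
	return
}

func main() {
	sample := []byte(`{"quantization": {"group_size": 16, "bits": 4, "mode": "nvfp4",
		"model.layers.0.mlp.gate": {"group_size": 64, "bits": 8}}}`)
	g, per, _ := parseQuantBlock(sample)
	fmt.Println(g, per["model.layers.0.mlp.gate.weight"])
}
```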

Root.Open merges overrides over blob metadata

x/mlxrunner/model/root.go

The blob scan populates tensorQuant from __metadata__ as before. Afterwards, readConfigQuantOverrides runs and the per-path entries overwrite blob entries for the paths config.json specifies.

This direction is deliberate: ollama create fills every blob's __metadata__ from the config.json globals, not from per-path overrides. So for any path config.json overrides, the blob metadata entry is definitionally wrong. Paths not overridden by config.json still come from blob metadata (which is correct for those paths). Ollama-registry-published models don't ship a quantization block, so the override map is empty for them and their behaviour is untouched.
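A minimal sketch of that merge direction (types and names illustrative, not the actual root.go code):

```go
package main

import "fmt"

// TensorQuantInfo mirrors the struct described earlier in the PR.
type TensorQuantInfo struct {
	QuantType string
	GroupSize int
	Bits      int
	Mode      string
}

// mergeOverrides overwrites blob-metadata entries with config.json per-path
// overrides: for any path config.json specifies, the blob entry is wrong by
// construction, so the override wins. Other paths keep their blob entry.
func mergeOverrides(tensorQuant, overrides map[string]TensorQuantInfo) {
	for path, tq := range overrides {
		tensorQuant[path] = tq
	}
}

func main() {
	// Blob scan result: every path carries the config.json globals.
	tensorQuant := map[string]TensorQuantInfo{
		"model.layers.0.mlp.gate.weight":         {QuantType: "NVFP4", GroupSize: 16},
		"model.layers.0.self_attn.q_proj.weight": {QuantType: "NVFP4", GroupSize: 16},
	}
	// config.json per-path override for the MoE router gate.
	overrides := map[string]TensorQuantInfo{
		"model.layers.0.mlp.gate.weight": {GroupSize: 64, Bits: 8, Mode: "affine"},
	}
	mergeOverrides(tensorQuant, overrides)
	fmt.Println(tensorQuant["model.layers.0.mlp.gate.weight"].GroupSize) // 64
}
```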

Open(modelName) keeps its existing public signature; internal work moves to openFromManifest(m) so tests can inject a fake manifest without touching the filesystem model store.

A note on model vs architecture names

The repro model is Qwen 3.6 35B-A3B (the user-facing release name). Its Python/HF architecture class is still Qwen3_5MoeForConditionalGeneration, inherited unchanged from Qwen 3.5 — which is why Ollama's source for it sits in x/models/qwen3_5/. The crash in SparseMoE.Forward at x/models/qwen3_5/qwen3_5.go:~1320 is the symptom; this PR fixes it upstream in x/mlxrunner/model so no x/models/qwen3_5/ code is touched.

Tests

All new tests live in x/mlxrunner/model/:

In config_quant_test.go (new file):

  • TestReadConfigQuantOverrides_NoConfig — manifest without config.json → zero values.
  • TestReadConfigQuantOverrides_NoQuantizationBlock — config.json without quantization block → zero values.
  • TestReadConfigQuantOverrides_FlatQuantization — globals populated, no per-path.
  • TestReadConfigQuantOverrides_PerPathOverrideWithExplicitMode — explicit mode respected.
  • TestReadConfigQuantOverrides_PerPathOverrideOmittedModeIsAffine — the Qwen 3.6 path; override without mode coerces to "affine".
  • TestReadConfigQuantOverrides_QuantizationConfigAliasAccepted — both top-level keys recognised.
  • TestReadConfigQuantOverrides_MultipleOverrides — several paths captured.
  • TestReadConfigQuantOverrides_MalformedJSON — silent fallback.
  • TestRoot_OpenPopulatesFromConfigAndBlobs — regression guard for the metadata/override path that caused the panic: fake manifest with a global NVFP4 default plus a per-path affine-g64-b8 gate override; root.TensorQuant("…mlp.gate.weight") returns the override, not the global. (End-to-end "model runs without panic" verification is the manual step in the test plan.)

In quant_test.go:

  • TestTensorQuantParams_ExplicitBitsMode — new fields override QuantizationParams lookup.
  • TestResolveLinearQuantParams_PerTensorOverridesGlobalViaBitsMode — end-to-end resolver with the new fields.
  • TestResolveLinearQuantParams_InferenceSkippedWhenAffineFromTensorWithValidParams — guards against shape inference overriding a valid per-tensor entry.

Test plan

  • go test ./x/mlxrunner/model/... on macOS arm64 — all green (targeted + regression).
  • go test ./x/mlxrunner/model/... on Linux / CI — MLX cases skip, pure-Go cases pass.
  • Manual: ollama run on a vanilla Ollama-registry-published NVFP4 model — no regression.
  • Manual: ollama create --experimental + ollama run on an mlx-lm mixed-precision NVFP4 MoE model (Qwen 3.6 35B-A3B NVFP4 is a straightforward repro) — tokens generated without panic.

Files touched

x/mlxrunner/model/config_quant.go         new
x/mlxrunner/model/config_quant_test.go    new
x/mlxrunner/model/quant.go                +6 lines
x/mlxrunner/model/quant_test.go           +~100 lines (3 new tests)
x/mlxrunner/model/root.go                 +~30 lines (struct fields, Open refactor, merge loop)

Risk

  • Ollama-registry-published models — readConfigQuantOverrides returns empty when config.json has no quantization block. Their behaviour is unchanged. Covered by TestReadConfigQuantOverrides_NoQuantizationBlock.
  • Merge direction — overrides win for the paths they specify, which is the intended correction. Non-overridden paths still come from blob metadata.
  • MLX kernel support — mlx.QuantizedMatmul(mode="affine", gs=64, b=8) with BF16 scales+biases was verified against the MLX version linked in the build with a fabricated tensor of the exact on-disk shape; no C-side changes needed.

Known follow-ups

  • Vision tower wiring for Qwen3_5MoeForConditionalGeneration is still absent in the runner (images go through mlx-vlm only). Separate, larger work.
  • The override parser currently only supports dotted-path string keys at the top level of the quantisation block. Future mlx-lm versions that emit truly nested dict structures would need a parser extension; the flat form handled here remains readable alongside such an extension, so this is forward-compatible.

Commit messages

Accept the sibling-plural aux naming (<module>.scales / <module>.biases)
that mlx-lm, LM Studio, and any tool using mx.nn.quantize emit natively,
alongside Ollama's existing dot-child singular naming (<weight>.scale /
<weight>.bias). Without this change, safetensors imported via "ollama
create --experimental" that retain the plural form silently fall through
to dense *nn.Linear and load U32-packed weights as raw float matrices.

Changes are strictly about load-time name recognition:

  - MakeLinearLayer / MakeEmbeddingLayer try the singular key first,
    fall back to the plural sibling form.
  - mainTensorNames filter excludes both singular and plural aux
    suffixes so plural keys aren't misclassified as main tensors.
  - normaliseAuxNames (extracted from loadTensorsFromManifest) remaps
    both conventions to the canonical <base>.weight_scale /
    <base>.weight_qbias form so downstream consumers see a single
    naming convention regardless of the source blob.

Quant-parameter resolution and forward-pass code are unchanged.

Fix SparseMoE.Forward panic on mlx-lm mixed-precision NVFP4 MoE models
(e.g. Qwen 3.6 35B-A3B NVFP4) imported via "ollama create --experimental".

Root cause: mlx-lm stores per-path quantisation overrides in config.json's
"quantization" block, not in the tensor blob __metadata__. The runner only
read blob metadata, which ollama create fills from the config.json
*globals*. The MoE router gate (affine 8-bit, BF16 scales+biases,
group_size 64) was therefore fed to the NVFP4 dequant kernel at the global
group_size 16, producing a zero-shape output; Argpartition panicked.

Changes:

  - TensorQuantInfo gains Bits and Mode fields. Zero values preserve the
    previous QuantType-lookup behaviour.

  - TensorQuantParams honours the new fields when non-zero.

  - New readConfigQuantOverrides parses config.json's quantization or
    quantization_config block. Per-path overrides omitting "mode" default
    to "affine" (matching mx.nn.Linear.to_quantized).

  - Root.Open (refactored into openFromManifest for testability) merges
    config.json overrides over blob metadata. Config-specified paths win
    because their blob __metadata__ is known to be wrong for them; paths
    not overridden still come from blob metadata. Ollama-published models
    ship no quantization block, so their behaviour is unchanged.

Includes a regression guard (TestRoot_OpenPopulatesFromConfigAndBlobs)
plus coverage for the new TensorQuantInfo field semantics and the
config.json parser (8 cases: missing config, no quant block, flat
config, explicit-mode override, omitted-mode override, alias key,
multiple overrides, malformed JSON).

@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 0b94171a2f


Comment on lines +76 to +77
if defaults.Mode != "" {
defaults.QuantType = strings.ToUpper(defaults.Mode)

P1: Derive global quant type from bits+mode, not mode alone

Setting defaults.QuantType from defaults.Mode alone collapses affine Q4 and Q8 into the same value (AFFINE). Downstream code derives runtime defaults via QuantizationParams(root.QuantType()), and QuantizationParams("AFFINE") falls back to (32, 8, "affine"), so a config with global {"mode":"affine","bits":4,"group_size":64} loses its 4-bit intent. That makes tensors without per-tensor metadata (notably imports that only expose plural aux names) resolve to the wrong dequantization params and can produce incorrect outputs or shape mismatches.
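The collapse the reviewer describes can be illustrated with a small sketch. Both helpers below are hypothetical (not code from the PR), and the "Q4"/"Q8" type strings are placeholder names for whatever identifiers the real quant-type table uses:

```go
package main

import (
	"fmt"
	"strings"
)

// fromModeOnly mirrors the flagged code shape: the derived type is just the
// upper-cased mode, so affine 4-bit and affine 8-bit become the same string.
func fromModeOnly(mode string) string {
	return strings.ToUpper(mode)
}

// fromModeAndBits folds bits into the derived type, keeping affine widths
// distinguishable. "Q4"/"Q8" are placeholder type names.
func fromModeAndBits(mode string, bits int) string {
	if mode == "affine" && bits != 0 {
		return fmt.Sprintf("Q%d", bits)
	}
	return strings.ToUpper(mode)
}

func main() {
	// Mode alone: 4-bit and 8-bit affine configs are indistinguishable.
	fmt.Println(fromModeOnly("affine"), fromModeOnly("affine")) // AFFINE AFFINE
	// Mode+bits: the 4-bit intent survives.
	fmt.Println(fromModeAndBits("affine", 4), fromModeAndBits("affine", 8)) // Q4 Q8
}
```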


@jodagreyhame (Author)

Closed by accident — same fork-cleanup story as #15743. Continued at #15760. Same branch (now restored), with the Codex P1 review on config_quant.go:77 (derive QuantType from mode+bits, not mode alone, so affine Q4 doesn't collapse to "AFFINE" and lose bit info) addressed in the new PR.
