x/mlxrunner: apply config.json per-tensor quant overrides for mixed-precision MoE#15744
jodagreyhame wants to merge 2 commits into ollama:main
Conversation
Accept the sibling-plural aux naming (<module>.scales / <module>.biases)
that mlx-lm, LM Studio, and any tool using mx.nn.quantize emit natively,
alongside Ollama's existing dot-child singular naming (<weight>.scale /
<weight>.bias). Without this change, safetensors imported via "ollama
create --experimental" that retain the plural form silently fall through
to dense *nn.Linear and load U32-packed weights as raw float matrices.
Changes are strictly about load-time name recognition:
- MakeLinearLayer / MakeEmbeddingLayer try the singular key first,
fall back to the plural sibling form.
- mainTensorNames filter excludes both singular and plural aux
suffixes so plural keys aren't misclassified as main tensors.
- normaliseAuxNames (extracted from loadTensorsFromManifest) remaps
both conventions to the canonical <base>.weight_scale /
<base>.weight_qbias form so downstream consumers see a single
naming convention regardless of the source blob.
Quant-parameter resolution and forward-pass code are unchanged.
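The remapping this commit describes can be sketched as a pure name transform. A minimal illustration: `normaliseAuxName` below is a hypothetical single-name helper built from the suffixes named in the commit message, whereas the real `normaliseAuxNames` operates over the whole tensor map and may differ in detail.

```go
package main

import (
	"fmt"
	"strings"
)

// normaliseAuxName collapses both aux-naming conventions to the canonical
// form: dot-child singular ("<base>.weight.scale") and sibling plural
// ("<base>.scales") both become "<base>.weight_scale"; likewise for the
// bias aux tensors. Hypothetical sketch, not the runner's actual code.
func normaliseAuxName(name string) string {
	switch {
	case strings.HasSuffix(name, ".weight.scale"):
		return strings.TrimSuffix(name, ".weight.scale") + ".weight_scale"
	case strings.HasSuffix(name, ".scales"):
		return strings.TrimSuffix(name, ".scales") + ".weight_scale"
	case strings.HasSuffix(name, ".weight.bias"):
		return strings.TrimSuffix(name, ".weight.bias") + ".weight_qbias"
	case strings.HasSuffix(name, ".biases"):
		return strings.TrimSuffix(name, ".biases") + ".weight_qbias"
	}
	return name // main tensors and already-canonical names pass through
}

func main() {
	for _, n := range []string{
		"model.layers.0.mlp.gate.scales",       // mlx-lm / LM Studio plural
		"model.layers.0.mlp.gate.weight.scale", // dot-child singular
		"model.layers.0.mlp.gate.weight",       // main tensor, untouched
	} {
		fmt.Println(n, "->", normaliseAuxName(n))
	}
}
```

Note that the plural `.biases` suffix cannot collide with a real linear bias, which is named `<base>.bias` (singular, no `.weight` parent).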
Fix SparseMoE.Forward panic on mlx-lm mixed-precision NVFP4 MoE models
(e.g. Qwen 3.6 35B-A3B NVFP4) imported via "ollama create --experimental".
Root cause: mlx-lm stores per-path quantisation overrides in config.json's
"quantization" block, not in the tensor blob __metadata__. The runner only
read blob metadata, which ollama create fills from the config.json
*globals*. The MoE router gate (affine 8-bit, BF16 scales+biases,
group_size 64) was therefore fed to the NVFP4 dequant kernel at the global
group_size 16, producing a zero-shape output; Argpartition panicked.
Changes:
- TensorQuantInfo gains Bits and Mode fields. Zero values preserve the
previous QuantType-lookup behaviour.
- TensorQuantParams honours the new fields when non-zero.
- New readConfigQuantOverrides parses config.json's quantization or
quantization_config block. Per-path overrides omitting "mode" default
to "affine" (matching mx.nn.Linear.to_quantized).
- Root.Open (refactored into openFromManifest for testability) merges
config.json overrides over blob metadata. Config-specified paths win
because their blob __metadata__ is known to be wrong for them; paths
not overridden still come from blob metadata. Ollama-published models
ship no quantization block, so their behaviour is unchanged.
Includes a regression guard (TestRoot_OpenPopulatesFromConfigAndBlobs)
plus coverage for the new TensorQuantInfo field semantics and the
config.json parser (8 cases: missing config, no quant block, flat
config, explicit-mode override, omitted-mode override, alias key,
multiple overrides, malformed JSON).
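The parsing behaviour this commit describes can be sketched as follows. This is a minimal illustration assuming an mlx-lm-style `quantization` block whose scalar keys are the globals and whose object-valued keys are per-path overrides; the `quantOverride` struct and `parseQuantOverrides` helper are hypothetical stand-ins, not Ollama's actual `readConfigQuantOverrides`.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// quantOverride mirrors the fields the commit message names: group_size,
// bits, and mode. Illustrative shape only.
type quantOverride struct {
	GroupSize int    `json:"group_size"`
	Bits      int    `json:"bits"`
	Mode      string `json:"mode"`
}

func parseQuantOverrides(raw []byte) (global quantOverride, perPath map[string]quantOverride) {
	var cfg map[string]json.RawMessage
	if json.Unmarshal(raw, &cfg) != nil {
		return // malformed config: silent fallback to zero values
	}
	qRaw, ok := cfg["quantization"]
	if !ok {
		qRaw, ok = cfg["quantization_config"] // version-dependent alias key
	}
	if !ok {
		return // no quant block: zero values
	}
	var entries map[string]json.RawMessage
	if json.Unmarshal(qRaw, &entries) != nil {
		return
	}
	perPath = map[string]quantOverride{}
	for k, v := range entries {
		var o quantOverride
		if json.Unmarshal(v, &o) == nil && (o.Bits != 0 || o.GroupSize != 0 || o.Mode != "") {
			// Object-valued entry: a per-path override. An omitted mode
			// defaults to "affine", matching mx.nn.Linear.to_quantized.
			if o.Mode == "" {
				o.Mode = "affine"
			}
			perPath[k] = o
			continue
		}
		// Scalar entries belong to the global quant parameters.
		switch k {
		case "group_size":
			json.Unmarshal(v, &global.GroupSize)
		case "bits":
			json.Unmarshal(v, &global.Bits)
		case "mode":
			json.Unmarshal(v, &global.Mode)
		}
	}
	return
}

func main() {
	cfg := []byte(`{"quantization":{
		"group_size": 16, "bits": 4, "mode": "nvfp4",
		"model.layers.0.mlp.gate": {"group_size": 64, "bits": 8}}}`)
	global, perPath := parseQuantOverrides(cfg)
	fmt.Printf("global: %+v\n", global)
	fmt.Printf("gate:   %+v\n", perPath["model.layers.0.mlp.gate"])
}
```

On the repro config above, the router gate resolves to affine 8-bit with group_size 64 while the globals stay NVFP4-style (group_size 16, 4-bit), which is exactly the split that the blob metadata alone cannot express.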
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 0b94171a2f
```go
if defaults.Mode != "" {
	defaults.QuantType = strings.ToUpper(defaults.Mode)
}
```
Derive global quant type from bits+mode, not mode alone
Setting `defaults.QuantType` from `defaults.Mode` alone collapses affine Q4 and Q8 into the same value (`AFFINE`). Downstream code derives runtime defaults via `QuantizationParams(root.QuantType())`, and `QuantizationParams("AFFINE")` falls back to `(32, 8, "affine")`, so a config with global `{"mode":"affine","bits":4,"group_size":64}` loses its 4-bit intent. That makes tensors without per-tensor metadata (notably imports that only expose plural aux names) resolve to the wrong dequantization params and can produce incorrect outputs or shape mismatches.
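The concern in this review comment can be made concrete with a small sketch. The helper below is hypothetical (`quantTypeFromConfig` and the `Q4`/`Q8` type names are illustrative, not Ollama's actual API): deriving the type from mode alone returns `AFFINE` for both 4-bit and 8-bit configs, while folding bits in keeps them distinct.

```go
package main

import (
	"fmt"
	"strings"
)

// quantTypeFromConfig derives a quant-type string from bits+mode rather
// than mode alone, so affine configs with different bit widths stay
// distinguishable. Illustrative only; the real QuantType values may differ.
func quantTypeFromConfig(mode string, bits int) string {
	if strings.EqualFold(mode, "affine") && bits > 0 {
		return fmt.Sprintf("Q%d", bits) // affine 4-bit -> Q4, 8-bit -> Q8
	}
	return strings.ToUpper(mode) // e.g. NVFP4 already implies the width
}

func main() {
	fmt.Println(quantTypeFromConfig("affine", 4)) // Q4
	fmt.Println(quantTypeFromConfig("affine", 8)) // Q8
	fmt.Println(quantTypeFromConfig("nvfp4", 4))  // NVFP4
}
```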
Summary
Fixes #15746 — `panic: runtime error: index out of range [0] with length 0` in `SparseMoE.Forward` when running mlx-lm mixed-precision NVFP4 MoE models imported via `ollama create --experimental`. Concretely: Qwen 3.6 35B-A3B NVFP4 crashes on the first token without this change.

Root cause: mlx-lm stores per-path quantisation overrides in `config.json`'s `quantization` block, not in the tensor blob `__metadata__`. Ollama's MLX runner only reads blob metadata, which `ollama create` fills from the global quant params. The MoE router gate (stored as affine 8-bit, BF16 scales+biases, group_size 64) is therefore fed to the NVFP4 dequant kernel at the global group_size 16, producing a zero-shape output; `Argpartition` on the zero-shape tensor panics.

This PR makes the runner read `config.json`'s quant block, apply per-path overrides to `tensorQuant`, and route each linear/embedding layer through the correct dequant kernel.

Stacking and prior art
Depends on the naming-recognition PR (load-time plural aux acceptance). Reviewers who want to look at this PR in isolation can use the base branch; the CI run and the regression guard below both require the naming PR's changes to be present first.
Depends on: #15743
Extends #15409 ("mlx: mixed-precision quant and capability detection improvements"). That PR put the per-tensor-quant-metadata machinery in place on the assumption that overrides live in the blob `__metadata__`. For mlx-lm-produced imports, overrides actually live in `config.json`'s `quantization` block and are not round-tripped into blob `__metadata__`, so the override path introduced by #15409 is never populated for those models. This PR plugs `config.json` in as the second source.

This PR does not fix #15632 (`qwen3.6:35b-a3b-nvfp4` fails to load: layer 0 missing linear attention projections) — that failure is on a different code path (attention-tensor loading, before the quant-metadata code runs).

What changed
`TensorQuantInfo` carries explicit bits and mode
`x/mlxrunner/model/root.go`. Zero-value of the new fields preserves existing behaviour: callers that set only `QuantType`/`GroupSize` get the same result as before.

Resolver honours the new fields
`x/mlxrunner/model/quant.go` — three lines added to `TensorQuantParams`. When `tq.Bits != 0`, use it instead of `QuantizationParams(tq.QuantType)`.bits. Same for `tq.Mode`.

New: `config.json` quant override parser
`x/mlxrunner/model/config_quant.go` (new).
- Reads `config.json` via `manifest.ReadConfig`. Returns zero values + `nil` error on missing/malformed config (silent fallback).
- Accepts both `"quantization"` and `"quantization_config"` as the top-level key — mlx-lm writes both, depending on version.
- If a per-path override for `<path>.weight` specifies `mode`, use it; if omitted, use `"affine"`. This matches `mlx.nn.Linear.to_quantized`'s default (verified by reading the MLX source).

`Root.Open` merges overrides over blob metadata
`x/mlxrunner/model/root.go`. The blob scan populates `tensorQuant` from `__metadata__` as before. Afterwards, `readConfigQuantOverrides` runs and the per-path entries overwrite blob entries for the paths config.json specifies.

This direction is deliberate: `ollama create` fills every blob's `__metadata__` from the config.json globals, not from per-path overrides. So for any path config.json overrides, the blob metadata entry is definitionally wrong. Paths not overridden by config.json still come from blob metadata (which is correct for those paths). Ollama-registry-published models don't ship a `quantization` block, so the override map is empty for them and their behaviour is untouched.

`Open(modelName)` keeps its existing public signature; internal work moves to `openFromManifest(m)` so tests can inject a fake manifest without touching the filesystem model store.

A note on model vs architecture names
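The precedence the resolver change establishes (existing `QuantType` lookup first, explicit per-tensor `Bits`/`Mode` winning when non-zero) can be sketched as follows. The types and the stubbed lookup table are illustrative, not the real `x/mlxrunner/model` code; the `AFFINE` fallback of `(32, 8, "affine")` is taken from the review comment above, and the NVFP4 entry from the globals in the repro config.

```go
package main

import "fmt"

// tensorQuantInfo mirrors the described fields: the new Bits and Mode use
// Go zero values (0, "") to mean "unset, fall back to the lookup".
type tensorQuantInfo struct {
	QuantType string
	GroupSize int
	Bits      int    // new field
	Mode      string // new field
}

// quantizationParams stands in for the existing QuantType lookup (stub).
func quantizationParams(quantType string) (groupSize, bits int, mode string) {
	if quantType == "NVFP4" {
		return 16, 4, "nvfp4"
	}
	return 32, 8, "affine" // AFFINE fallback noted in the review comment
}

func tensorQuantParams(tq tensorQuantInfo) (groupSize, bits int, mode string) {
	groupSize, bits, mode = quantizationParams(tq.QuantType)
	if tq.GroupSize != 0 {
		groupSize = tq.GroupSize
	}
	if tq.Bits != 0 { // explicit per-tensor bits win over the lookup
		bits = tq.Bits
	}
	if tq.Mode != "" { // same for mode
		mode = tq.Mode
	}
	return
}

func main() {
	// The Qwen 3.6 router gate: affine 8-bit, group_size 64, even though
	// the model's global quant type is NVFP4.
	gs, b, m := tensorQuantParams(tensorQuantInfo{
		QuantType: "NVFP4", GroupSize: 64, Bits: 8, Mode: "affine",
	})
	fmt.Println(gs, b, m) // 64 8 affine
}
```

A zero-valued `tensorQuantInfo{QuantType: "NVFP4"}` still resolves to the pure lookup result, which is the backwards-compatibility property the PR claims for existing callers.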
The repro model is Qwen 3.6 35B-A3B (the user-facing release name). Its Python/HF architecture class is still `Qwen3_5MoeForConditionalGeneration`, inherited unchanged from Qwen 3.5 — which is why Ollama's source for it sits in `x/models/qwen3_5/`. The crash in `SparseMoE.Forward` at `x/models/qwen3_5/qwen3_5.go:~1320` is the symptom; this PR fixes it upstream in `x/mlxrunner/model`, so no `x/models/qwen3_5/` code is touched.

Tests
All new tests live in `x/mlxrunner/model/`:

In `config_quant_test.go` (new file):
- `TestReadConfigQuantOverrides_NoConfig` — manifest without `config.json` → zero values.
- `TestReadConfigQuantOverrides_NoQuantizationBlock` — config.json without quantization block → zero values.
- `TestReadConfigQuantOverrides_FlatQuantization` — globals populated, no per-path.
- `TestReadConfigQuantOverrides_PerPathOverrideWithExplicitMode` — explicit `mode` respected.
- `TestReadConfigQuantOverrides_PerPathOverrideOmittedModeIsAffine` — the Qwen 3.6 path; override without `mode` coerces to `"affine"`.
- `TestReadConfigQuantOverrides_QuantizationConfigAliasAccepted` — both top-level keys recognised.
- `TestReadConfigQuantOverrides_MultipleOverrides` — several paths captured.
- `TestReadConfigQuantOverrides_MalformedJSON` — silent fallback.
- `TestRoot_OpenPopulatesFromConfigAndBlobs` — regression guard for the metadata/override path that caused the panic: fake manifest with a global NVFP4 default plus a per-path affine-g64-b8 gate override; `root.TensorQuant("…mlp.gate.weight")` returns the override, not the global. (End-to-end "model runs without panic" verification is the manual step in the test plan.)

In `quant_test.go`:
- `TestTensorQuantParams_ExplicitBitsMode` — new fields override `QuantizationParams` lookup.
- `TestResolveLinearQuantParams_PerTensorOverridesGlobalViaBitsMode` — end-to-end resolver with the new fields.
- `TestResolveLinearQuantParams_InferenceSkippedWhenAffineFromTensorWithValidParams` — guards against shape inference overriding a valid per-tensor entry.

Test plan
- `go test ./x/mlxrunner/model/...` on macOS arm64 — all green (targeted + regression).
- `go test ./x/mlxrunner/model/...` on Linux / CI — MLX cases skip, pure-Go cases pass.
- `ollama run` on a vanilla Ollama-registry-published NVFP4 model — no regression.
- `ollama create --experimental` + `ollama run` on an mlx-lm mixed-precision NVFP4 MoE model (Qwen 3.6 35B-A3B NVFP4 is a straightforward repro) — tokens generated without panic.

Files touched
Risk
- `readConfigQuantOverrides` returns empty when `config.json` has no `quantization` block, so Ollama-registry-published models' behaviour is unchanged. Covered by `TestReadConfigQuantOverrides_NoQuantizationBlock`.
- `mlx.QuantizedMatmul(mode="affine", gs=64, b=8)` with BF16 scales+biases was verified against the MLX version linked in the build with a fabricated tensor of the exact on-disk shape; no C-side changes needed.

Known follow-ups
- `Qwen3_5MoeForConditionalGeneration` is still absent in the runner (images go through `mlx-vlm` only). Separate, larger work.