Merged
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/3525
Note: Links to docs will display an error until the docs builds have been completed.
✅ You can merge normally! (1 Unrelated Failure)
As of commit ece0c5e with merge base 428bbcf.
BROKEN TRUNK - The following job failed but was also present on the merge base:
👉 Rebase onto the `viable/strict` branch to avoid these failures.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
drisspg reviewed Dec 22, 2025
    # since bias is not quantized
    should_add_bias_separately = (scale_result is not None) and (bias is not None)
    #
    # (2) RuntimeError: Bias is not supported when out_dtype is set to Float32
Contributor:
Okay, this is what I thought would happen.
Contributor (Author):
Yeah, but this only happens if per_tensor_scale=None (by default it is not), so users generally won't run into the _scaled_mm error. Either way, this PR fixes that case.
Contributor:
Can you add a note somewhere on the conversion / casting path explaining these cases?
drisspg approved these changes Dec 22, 2025
vkuzo approved these changes Dec 22, 2025
Force-pushed from 83c92e3 to 8191d3b
**Summary:** Today we hit this error with fp32 inputs + bias:

```
RuntimeError: Bias is not supported when module weight is in fp32 (out_dtype=Float32). Please use bfloat16 or float16 weights, or remove the bias from the linear layer.
```

This is thrown by `NVFP4DynamicActivationNVFP4WeightConfig`, but it's trying to guard against this underlying `_scaled_mm` error:

```
RuntimeError: Bias is not supported when out_dtype is set to Float32
```

This commit works around these errors by adding the bias separately in this case, similar to what float8 does.

**Test Plan:**

```
pytest test/prototype/mx_formats/test_inference_workflow.py -k test_inference_workflow_nvfp4
```
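The workaround described above can be sketched as follows. This is an illustrative sketch, not torchao's actual code: `torch.mm` stands in for the scaled matmul kernel (`torch._scaled_mm` itself requires fp8 CUDA inputs), and the function name and structure are assumptions made for the example.

```python
import torch

def scaled_mm_sketch(a, b, bias=None, out_dtype=torch.bfloat16):
    """Sketch of the workaround: the fused kernel rejects a bias when
    out_dtype is float32, so in that case add the bias after the matmul."""
    add_bias_separately = bias is not None and out_dtype == torch.float32
    # torch.mm stands in for torch._scaled_mm so the sketch runs anywhere
    out = torch.mm(a, b)
    if bias is not None and not add_bias_separately:
        # fused path: the kernel supports bias for bf16/fp16 outputs
        out = out + bias
    out = out.to(out_dtype)
    if add_bias_separately:
        # fp32 path: bias added outside the kernel, since bias is not quantized
        out = out + bias.to(out_dtype)
    return out
```

The key point is that both paths compute the same result; only where the bias addition happens changes, which sidesteps the kernel's fp32-output restriction.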
Force-pushed from 8191d3b to ece0c5e