
Conversation

Contributor

@jcaip jcaip commented Nov 11, 2025


Summary:

As titled, this adds a `__str__` method to `FqnToConfig` so that printing is more legible.

For some config:
```python
config = FqnToConfig({
    "model.layers.fig.1.1": Float8DynamicActivationFloat8WeightConfig(
        granularity=PerRow(),
    ),
    "model.layers.fig.1.3": Float8DynamicActivationFloat8WeightConfig(
        granularity=PerRow(),
    ),
    "model.layers.fig.8.3": Float8DynamicActivationFloat8WeightConfig(
        granularity=PerRow(),
    ),
})
```

the output will be:
```
FqnToConfig({
    'model.layers.fig.1.1':
        Float8DynamicActivationFloat8WeightConfig(activation_dtype=torch.float8_e4m3fn, weight_dtype=torch.float8_e4m3fn, granularity=[PerRow(dim=-1), PerRow(dim=-1)], mm_config=Float8MMConfig(emulate=False, use_fast_accum=True, pad_inner_dim=False), activation_value_lb=None, activation_value_ub=None, kernel_preference=<KernelPreference.AUTO: 'auto'>, set_inductor_config=True, version=2),
    'model.layers.fig.1.3':
        Float8DynamicActivationFloat8WeightConfig(activation_dtype=torch.float8_e4m3fn, weight_dtype=torch.float8_e4m3fn, granularity=[PerRow(dim=-1), PerRow(dim=-1)], mm_config=Float8MMConfig(emulate=False, use_fast_accum=True, pad_inner_dim=False), activation_value_lb=None, activation_value_ub=None, kernel_preference=<KernelPreference.AUTO: 'auto'>, set_inductor_config=True, version=2),
    'model.layers.fig.8.3':
        Float8DynamicActivationFloat8WeightConfig(activation_dtype=torch.float8_e4m3fn, weight_dtype=torch.float8_e4m3fn, granularity=[PerRow(dim=-1), PerRow(dim=-1)], mm_config=Float8MMConfig(emulate=False, use_fast_accum=True, pad_inner_dim=False), activation_value_lb=None, activation_value_ub=None, kernel_preference=<KernelPreference.AUTO: 'auto'>, set_inductor_config=True, version=2),
})
```
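For reference, a minimal sketch of how such a `__str__` could be written. This is illustrative only and assumes the mapping is stored in a `fqn_to_config` dict attribute; the actual implementation in this PR may differ.

```python
# Illustrative sketch only -- not the exact torchao implementation.
# Assumes the mapping is stored in a `fqn_to_config` dict attribute.
class _FqnToConfigSketch:
    def __init__(self, fqn_to_config):
        self.fqn_to_config = fqn_to_config

    def __str__(self):
        # One fqn per line, with its (potentially very long) config repr
        # indented on the following line, instead of one flat dict repr.
        lines = ["FqnToConfig({"]
        for fqn, config in self.fqn_to_config.items():
            lines.append(f"    '{fqn}':")
            lines.append(f"        {config!r},")
        lines.append("})")
        return "\n".join(lines)
```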

Also adds a check (with a corresponding test) that raises an error if both `fqn_to_config` and `module_fqn_to_config` are specified but are not equal.
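
A rough sketch of how such a check could be wired up, assuming `FqnToConfig` is a dataclass with `fqn_to_config` and `module_fqn_to_config` dict fields; the surrounding structure here is a guess, and the actual diff is quoted in the review thread below.

```python
# Hypothetical sketch of the validation; the error message follows the PR,
# but the exact structure of FqnToConfig may differ.
from dataclasses import dataclass, field
from typing import Any, Dict


@dataclass
class _FqnToConfigValidationSketch:
    fqn_to_config: Dict[str, Any] = field(default_factory=dict)
    module_fqn_to_config: Dict[str, Any] = field(default_factory=dict)

    def __post_init__(self):
        # Reject the ambiguous case: both mappings given, with different contents.
        if (
            len(self.fqn_to_config) > 0
            and len(self.module_fqn_to_config) > 0
            and self.fqn_to_config != self.module_fqn_to_config
        ):
            raise ValueError(
                "`fqn_to_config` and `module_fqn_to_config` are both specified and are not equal!"
            )
```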

Test Plan:
```
pytest test/quantization/test_quant_api.py -k test_fqn_config_module_config_and_fqn_config_both_specified
```
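
A minimal sketch of what that test could look like; the import paths and constructor arguments are assumptions, and the real test in `test/quantization/test_quant_api.py` may be structured differently.

```python
# Hypothetical sketch of the new test; import paths are assumed.
import pytest

from torchao.quantization import (
    Float8DynamicActivationFloat8WeightConfig,
    FqnToConfig,
    PerRow,
)


def test_fqn_config_module_config_and_fqn_config_both_specified():
    fqn_cfg = {
        "model.layers.0": Float8DynamicActivationFloat8WeightConfig(granularity=PerRow())
    }
    module_cfg = {
        "model.layers.0": Float8DynamicActivationFloat8WeightConfig()
    }
    # Specifying both mappings with differing contents should raise.
    with pytest.raises(ValueError):
        FqnToConfig(fqn_to_config=fqn_cfg, module_fqn_to_config=module_cfg)
```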


pytorch-bot bot commented Nov 11, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/3323

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (4 Unrelated Failures)

As of commit c6de0f1 with merge base 9e93ab1:

BROKEN TRUNK - The following jobs failed but were present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla meta-cla bot added the CLA Signed label Nov 11, 2025
@jcaip jcaip changed the title from "Adds __str__ to FqnToConfig to make printing more readable" to "Add __str__ to FqnToConfig to make printing more readable" Nov 11, 2025
@jcaip jcaip added the topic: improvement label Nov 11, 2025
@jcaip jcaip requested review from jerryzh168 and vkuzo November 11, 2025 16:26
Contributor

@jerryzh168 jerryzh168 left a comment


should we just update all ModuleFqnToConfig usages in published models: https://huggingface.co/pytorch, update all internal callsites, and deprecate ModuleFqnToConfig before next release?

```python
    and self.fqn_to_config != self.module_fqn_to_config
):
    raise ValueError(
        "`fqn_to_config` and `module_fqn_to_config` are both specified and are not equal!"
```
Contributor


also want to check, does torchao config support equality comparison?

Contributor Author


Yeah, since all the configs are dataclasses, equality is the same as comparing tuples of the member attributes.
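
For illustration (using a made-up `ExampleConfig`, not an actual torchao config), Python dataclasses generate an `__eq__` that compares instances field by field:

```python
# Illustrative only: dataclass-generated __eq__ compares instances
# field by field, as if comparing tuples of their attributes.
from dataclasses import dataclass


@dataclass
class ExampleConfig:
    dtype: str = "float8_e4m3fn"
    version: int = 2


assert ExampleConfig() == ExampleConfig()           # same field values -> equal
assert ExampleConfig(version=1) != ExampleConfig()  # any differing field -> not equal
```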

Contributor Author

jcaip commented Nov 11, 2025

> should we just update all ModuleFqnToConfig usages in published models: https://huggingface.co/pytorch, update all internal callsites, and deprecate ModuleFqnToConfig before next release?

I am not opposed, but I think this technically doesn't follow our BC policy, since the next release will be when FqnToConfig is made available (it's not in 0.14).

I think we're supposed to have one release of warnings before deprecation?

@jcaip jcaip merged commit 43a1f46 into main Nov 12, 2025
14 of 19 checks passed
jainapurva pushed a commit that referenced this pull request Nov 13, 2025
* Adds __str__ to FqnToConfig to make printing more readable


* fix ruff check