Add __str__ to FqnToConfig to make printing more readable #3323
Summary:
As titled: adds a `__str__` method to `FqnToConfig` so that printed configs are more legible.
For some config:
```python
# Import paths assumed here: FqnToConfig, Float8DynamicActivationFloat8WeightConfig,
# and PerRow are part of torchao's quantization API.
from torchao.quantization import (
    Float8DynamicActivationFloat8WeightConfig,
    FqnToConfig,
    PerRow,
)

config = FqnToConfig({
    "model.layers.fig.1.1": Float8DynamicActivationFloat8WeightConfig(
        granularity=PerRow(),
    ),
    "model.layers.fig.1.3": Float8DynamicActivationFloat8WeightConfig(
        granularity=PerRow(),
    ),
    "model.layers.fig.8.3": Float8DynamicActivationFloat8WeightConfig(
        granularity=PerRow(),
    ),
})
```
the output will be:
```
FqnToConfig({
'model.layers.fig.1.1':
Float8DynamicActivationFloat8WeightConfig(activation_dtype=torch.float8_e4m3fn, weight_dtype=torch.float8_e4m3fn, granularity=[PerRow(dim=-1), PerRow(dim=-1)], mm_config=Float8MMConfig(emulate=False, use_fast_accum=True, pad_inner_dim=False), activation_value_lb=None, activation_value_ub=None, kernel_preference=<KernelPreference.AUTO: 'auto'>, set_inductor_config=True, version=2),
'model.layers.fig.1.3':
Float8DynamicActivationFloat8WeightConfig(activation_dtype=torch.float8_e4m3fn, weight_dtype=torch.float8_e4m3fn, granularity=[PerRow(dim=-1), PerRow(dim=-1)], mm_config=Float8MMConfig(emulate=False, use_fast_accum=True, pad_inner_dim=False), activation_value_lb=None, activation_value_ub=None, kernel_preference=<KernelPreference.AUTO: 'auto'>, set_inductor_config=True, version=2),
'model.layers.fig.8.3':
Float8DynamicActivationFloat8WeightConfig(activation_dtype=torch.float8_e4m3fn, weight_dtype=torch.float8_e4m3fn, granularity=[PerRow(dim=-1), PerRow(dim=-1)], mm_config=Float8MMConfig(emulate=False, use_fast_accum=True, pad_inner_dim=False), activation_value_lb=None, activation_value_ub=None, kernel_preference=<KernelPreference.AUTO: 'auto'>, set_inductor_config=True, version=2),
})
```
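A minimal sketch of the kind of `__str__` that produces this layout, assuming the mapping is stored in a `fqn_to_config` dict attribute. This is only an illustration of the output format above, not the exact code from the PR:

```python
# Illustration only: print each fqn on its own line, followed by its config's
# repr on the next line, roughly matching the output shown above.
class FqnToConfigSketch:
    def __init__(self, fqn_to_config):
        self.fqn_to_config = dict(fqn_to_config)

    def __str__(self):
        entries = "\n".join(
            f"    '{fqn}':\n        {config!r},"
            for fqn, config in self.fqn_to_config.items()
        )
        return f"FqnToConfig({{\n{entries}\n}})"

print(FqnToConfigSketch({"model.layers.fig.1.1": "float8_config_placeholder"}))
```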
Also adds a check (with a test) so that specifying both `fqn_to_config` and
`module_fqn_to_config` raises an error unless the two are equal.
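To illustrate the intended behavior, here is a hedged stand-in that mirrors the new check; `DemoFqnToConfig` is invented for this example and the real `FqnToConfig` constructor may differ:

```python
# Stand-in mirroring the new check, not the real torchao class.
class DemoFqnToConfig:
    def __init__(self, fqn_to_config=None, module_fqn_to_config=None):
        self.fqn_to_config = fqn_to_config or {}
        self.module_fqn_to_config = module_fqn_to_config or {}
        if (
            self.fqn_to_config
            and self.module_fqn_to_config
            and self.fqn_to_config != self.module_fqn_to_config
        ):
            raise ValueError(
                "`fqn_to_config` and `module_fqn_to_config` are both specified and are not equal!"
            )

# Equal contents are allowed; differing contents raise.
DemoFqnToConfig(fqn_to_config={"linear1": "a"}, module_fqn_to_config={"linear1": "a"})  # ok
try:
    DemoFqnToConfig(fqn_to_config={"linear1": "a"}, module_fqn_to_config={"linear1": "b"})
except ValueError as e:
    print(e)
```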
Test Plan:
```
pytest test/quantization/test_quant_api.py -k test_fqn_config_module_config_and_fqn_config_both_specified
```
Reviewers:
Subscribers:
Tasks:
Tags:
jerryzh168 left a comment:
should we just update all ModuleFqnToConfig usages in published models: https://huggingface.co/pytorch, update all internal callsites, and deprecate ModuleFqnToConfig before next release?
Inline review comment on the new validation check:

        and self.fqn_to_config != self.module_fqn_to_config
    ):
        raise ValueError(
            "`fqn_to_config` and `module_fqn_to_config` are both specified and are not equal!"
also want to check, does torchao config support equality comparison?
Yeah, since all the configs are dataclasses, it's the same as creating a tuple of the member attributes.
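For reference, a quick illustration of dataclass equality semantics (generic Python behavior, not torchao-specific code):

```python
from dataclasses import dataclass

@dataclass
class DemoConfig:
    dtype: str = "float8_e4m3fn"
    version: int = 2

# The generated __eq__ compares fields as if packed into a tuple, so two
# instances with identical field values compare equal.
assert DemoConfig() == DemoConfig()
assert DemoConfig(version=1) != DemoConfig(version=2)
```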
I am not opposed, but I think this technically doesn't follow our BC policy, since I believe we're supposed to have one release of warnings before deprecating, which would make the next release the warnings release?
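A sketch of what that one-release-of-warnings path could look like. This is hypothetical: `FqnToConfig` below is a stand-in, and the real deprecation shim in torchao may differ:

```python
import warnings

class FqnToConfig:  # stand-in for the real torchao class
    def __init__(self, fqn_to_config=None):
        self.fqn_to_config = fqn_to_config or {}

class ModuleFqnToConfig(FqnToConfig):
    """Deprecated alias kept around for one release, warning on use."""
    def __init__(self, *args, **kwargs):
        warnings.warn(
            "ModuleFqnToConfig is deprecated; use FqnToConfig instead.",
            DeprecationWarning,
            stacklevel=2,
        )
        super().__init__(*args, **kwargs)
```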
* Adds __str__ to FqnToConfig to make printing more readable
* fix ruff check