add bias handling for a_1_128_w_128_128 float8 scaling #3259
Conversation
See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/3259
```diff
 class ToyLinearModel(torch.nn.Module):
-    def __init__(self, in_features, out_features):
+    def __init__(self, in_features, out_features, bias):
```
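Following the reviewer's suggestion below, a minimal sketch of how the test model might look with a `has_bias` flag instead of `bias` (the `has_bias` name and the model body are assumptions for illustration, not the merged code):

```python
import torch

class ToyLinearModel(torch.nn.Module):
    # Hypothetical variant of the test model from this PR: the flag is
    # named has_bias so it cannot be confused with an nn.Linear bias Tensor.
    def __init__(self, in_features, out_features, has_bias):
        super().__init__()
        self.linear = torch.nn.Linear(in_features, out_features, bias=has_bias)

    def forward(self, x):
        return self.linear(x)

# Usage: the flag only controls whether nn.Linear allocates a bias parameter.
m = ToyLinearModel(128, 64, has_bias=True)
y = m(torch.randn(4, 128))
```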
nit: `bias` feels a bit confusing here, since it can be either a flag or a Tensor. Even though the name is used officially in `nn.Linear`, maybe use `has_bias` as the other tests do?
sure, let me do that in a future PR
Summary:
As titled, adds bias support for the a_1_128_w_128_128 float8 scaling recipe, along with a unit test.