Support cosine operator on XNNPACK #15431
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/15431
Note: Links to docs will display an error until the docs builds have been completed.
❌ 1 New Failure, 1 Unrelated Failure as of commit 585fd19 with merge base 82e37df: one job has a new failure, and one job is marked unstable, possibly due to flakiness on trunk.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@GregoryComer has exported this pull request. If you are a Meta employee, you can view the originating Diff in D83623619.
Summary: Wire up the unary cosine operator in XNNPACK for fp32 and fp16.
Reviewed By: digantdesai
Differential Revision: D83623619
Pulled By: GregoryComer
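For context, a minimal sketch of exercising the new path end to end: export a module that calls torch.cos and lower it through the XNNPACK partitioner. The module and tensor shapes are made up for illustration, and the exact API surface may differ by ExecuTorch version.

```python
import torch

from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner
from executorch.exir import to_edge_transform_and_lower


class CosModule(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.cos(x)


# Export an fp32 graph containing aten.cos; fp16 inputs should also be delegated.
exported = torch.export.export(CosModule(), (torch.randn(4, dtype=torch.float32),))

# With this change, the cos node should be claimed by the XNNPACK delegate.
program = to_edge_transform_and_lower(
    exported, partitioner=[XnnpackPartitioner()]
).to_executorch()
```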
@GregoryComer has imported this pull request. If you are a Meta employee, you can view this in D83623619.
input_id = vals_to_ids[get_input_node(node, 0)]

# output
output_id = vals_to_ids[node]
Nit: assert dtype?
Will take as a follow-up, if that's okay. This pattern is shared between many ops, so I might do a larger refactor.
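For illustration, roughly the kind of check the nit is asking for inside the cos visitor. The meta["val"] access follows standard torch.fx export conventions; the helper name and its placement are assumptions for this sketch, not the actual ExecuTorch code.

```python
import torch
from torch.fx import Node

_SUPPORTED_DTYPES = {torch.float32, torch.float16}


def _assert_supported_dtype(node: Node) -> None:
    # Hypothetical helper: export stores a FakeTensor for each node in
    # node.meta["val"], so its dtype reflects the op's output dtype.
    dtype = node.meta["val"].dtype
    assert dtype in _SUPPORTED_DTYPES, (
        f"XNNPACK cos only supports fp32/fp16, got {dtype} for node {node.name}"
    )
```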
target_name = "cos.default"

def supported_precision_types(self) -> List[ConfigPrecisionType]:
    return [ConfigPrecisionType.FP32]
We need to add FP16, and later BF16, here.
Yeah. The way the partitioner is currently written, FP32 implies FP16. I'll likely refactor this a little bit when we add BF16 support.
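As a hedged sketch of that follow-up: once the partitioner stops inferring FP16 from FP32, the config could enumerate precisions explicitly. The base-class name, import path, and any ConfigPrecisionType members beyond FP32 are assumptions here, not the current ExecuTorch API.

```python
from typing import List

# Import path mirrors the existing XNNPACK partitioner configs; treat it as an
# assumption for this sketch.
from executorch.backends.xnnpack.partition.config.xnnpack_config import (
    ConfigPrecisionType,
    XNNPartitionerConfig,
)


class CosConfig(XNNPartitionerConfig):
    target_name = "cos.default"

    def supported_precision_types(self) -> List[ConfigPrecisionType]:
        # FP16 listed explicitly (BF16 would follow later); this enum member is
        # hypothetical until the refactor mentioned above lands.
        return [ConfigPrecisionType.FP32, ConfigPrecisionType.FP16]
```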