
Conversation

@GregoryComer (Member)

Summary: Wire up the unary cosine operator in xnnpack for fp32 and fp16.

Differential Revision: D83623619

pytorch-bot bot commented Oct 28, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/15431

Note: Links to docs will display an error until the docs builds have been completed.

❌ 1 New Failure, 1 Unrelated Failure

As of commit 585fd19 with merge base 82e37df:

NEW FAILURE - The following job has failed:

UNSTABLE - The following job is marked as unstable, possibly due to flakiness on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla meta-cla bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Oct 28, 2025
meta-codesync bot commented Oct 28, 2025

@GregoryComer has exported this pull request. If you are a Meta employee, you can view the originating Diff in D83623619.

@GregoryComer GregoryComer added the release notes: xnnpack Changes to the XNNPack backend delegate label Oct 28, 2025
GregoryComer added two commits to GregoryComer/executorch that referenced this pull request Dec 15, 2025

Summary: Wire up the unary cosine operator in xnnpack for fp32 and fp16.

Differential Revision: D83623619
meta-codesync bot commented Dec 15, 2025

@GregoryComer has imported this pull request. If you are a Meta employee, you can view this in D83623619.

# input
input_id = vals_to_ids[get_input_node(node, 0)]

# output
output_id = vals_to_ids[node]
Contributor:
Nit: assert dtype?

Member Author:

Will take as a follow-up, if that's okay. This pattern is shared between many ops, so I might do a larger refactor.
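A dtype guard of the kind suggested above might look like the following minimal sketch. The helper name, the string-based dtype representation, and the supported set are all assumptions for illustration; the actual follow-up refactor in ExecuTorch would check `torch.dtype` values on the node and may be structured differently.

```python
def check_dtype_or_raise(dtype: str, supported=("float32", "float16")) -> None:
    """Hypothetical dtype guard; the real refactor shared across ops may differ.

    Raises AssertionError if `dtype` is not one of the supported names.
    """
    if dtype not in supported:
        raise AssertionError(f"cos.default: unsupported dtype {dtype!r}")

# fp32 and fp16 pass silently; anything else raises.
check_dtype_or_raise("float32")
check_dtype_or_raise("float16")
```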

target_name = "cos.default"

def supported_precision_types(self) -> List[ConfigPrecisionType]:
return [ConfigPrecisionType.FP32]
Contributor:

We need to add FP16 and later BF16 here.

Member Author:

Yeah. The way the partitioner is currently written, FP32 implies FP16. I'll likely refactor this a little bit when we add BF16 support.
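The "FP32 implies FP16" behavior described above can be sketched as follows. This is a hypothetical illustration, not the actual partitioner code: the `ConfigPrecisionType` enum members beyond FP32, the `CosConfig` class shape, and the string-based `supports_dtype` helper are all assumptions made for the example.

```python
from enum import Enum
from typing import List


class ConfigPrecisionType(Enum):
    # Illustrative enum; member names beyond FP32 are assumptions.
    FP32 = 1
    STATIC_QUANT = 2
    DYNAMIC_QUANT = 3


class CosConfig:
    """Hypothetical sketch of the cos partitioner config discussed above."""

    target_name = "cos.default"

    def supported_precision_types(self) -> List[ConfigPrecisionType]:
        # Only FP32 is listed, matching the snippet under review.
        return [ConfigPrecisionType.FP32]

    def supports_dtype(self, dtype: str) -> bool:
        # Under the current partitioner behavior, declaring FP32 support
        # also admits FP16 inputs, so listing FP32 alone covers both
        # float widths (dtype passed as a string name for illustration).
        if ConfigPrecisionType.FP32 in self.supported_precision_types():
            return dtype in ("float32", "float16")
        return False
```

With this sketch, `CosConfig().supports_dtype("float16")` returns `True` even though only `FP32` appears in `supported_precision_types()`, which is why BF16 will need an explicit refactor rather than another implied width.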

GregoryComer added a commit to GregoryComer/executorch that referenced this pull request Dec 16, 2025
Summary:
Wire up the unary cosine operator in xnnpack for fp32 and fp16.


Reviewed By: digantdesai

Differential Revision: D83623619

Pulled By: GregoryComer
@GregoryComer GregoryComer merged commit 97483f0 into pytorch:main Dec 16, 2025
142 of 145 checks passed
xingguo01 pushed a commit to xingguo01/executorch that referenced this pull request Dec 18, 2025
Summary: Wire up the unary cosine operator in xnnpack for fp32 and fp16.

Differential Revision: D83623619
jirioc pushed a commit to nxp-upstream/executorch that referenced this pull request Dec 19, 2025
Summary: Wire up the unary cosine operator in xnnpack for fp32 and fp16.

Differential Revision: D83623619

Labels

CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. fb-exported meta-exported release notes: xnnpack Changes to the XNNPack backend delegate
