
Conversation

@chandanjain999
Contributor

Summary:

Apply the -O2 compiler flag to make FLLM autocorrect faster.

Spreadsheet measuring the latency impact of applying -O2 at different scopes:
https://docs.google.com/spreadsheets/d/1ZnxoRwCIiMz3hXyhgb6sm_2UThBUyq-mpp8RzksfJMQ/edit?usp=sharing

Latency: see the measurements in the linked sheet.

Reviewed By: BlakeLucchesi

Differential Revision: D85152814
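
For readers outside Meta's build system, here is a minimal sketch of how an optimization flag like -O2 can be scoped to a single target rather than applied globally. The CMake form, the target name `fllm_autocorrect`, and the file `autocorrect.cpp` are illustrative assumptions, not taken from the diff; the actual change may set the flag on Buck targets instead.

```cmake
# Hypothetical sketch (not from this diff): scope -O2 to one target so its
# latency impact can be measured in isolation instead of changing global flags.
cmake_minimum_required(VERSION 3.19)
project(fllm_autocorrect_example CXX)

add_library(fllm_autocorrect STATIC autocorrect.cpp)

# target_compile_options limits the flag to this target; PRIVATE keeps it from
# propagating to dependents. -O2 assumes a GCC/Clang-style compiler.
target_compile_options(fllm_autocorrect PRIVATE -O2)
```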

@pytorch-bot

pytorch-bot bot commented Nov 4, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/15583

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (5 Unrelated Failures)

As of commit 7def74c with merge base f4e1bd0:

FLAKY - The following job failed but was likely due to flakiness present on trunk:

BROKEN TRUNK - The following jobs failed but were already failing on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-codesync

meta-codesync bot commented Nov 4, 2025

@chandanjain999 has exported this pull request. If you are a Meta employee, you can view the originating Diff in D85152814.

@meta-cla bot added the CLA Signed label Nov 4, 2025
@github-actions

github-actions bot commented Nov 4, 2025

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

Summary:
Pull Request resolved: pytorch#15583

Apply the -O2 compiler flag to make FLLM autocorrect faster.

Spreadsheet measuring the latency impact of applying -O2 at different scopes:
https://docs.google.com/spreadsheets/d/1ZnxoRwCIiMz3hXyhgb6sm_2UThBUyq-mpp8RzksfJMQ/edit?usp=sharing

Latency: see the measurements in the linked sheet.

Reviewed By: BlakeLucchesi, lucylq

Differential Revision: D85152814
@meta-codesync meta-codesync bot merged commit f460594 into pytorch:main Nov 5, 2025
138 of 144 checks passed
abhinaykukkadapu pushed a commit to abhinaykukkadapu/executorch that referenced this pull request Nov 6, 2025
Differential Revision: D85152814

Pull Request resolved: pytorch#15583
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

CLA Signed · fb-exported · meta-exported
