torch.export()-only export Llama arg #6695
Conversation
Should we check whether we want the module or the exported module? Asking because I think QAT works on the module, not the exported module; we can double-check with @navsud.
QAT works on torch.export.export(...).module().
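A minimal sketch of what "working on the module" means here, with an illustrative model that is not from this PR: `torch.export.export()` produces an `ExportedProgram`, and `.module()` recovers a callable module that QAT-style flows can then operate on.

```python
import torch
from torch.export import export

class TinyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(4, 4)

    def forward(self, x):
        return self.linear(x)

example_inputs = (torch.randn(1, 4),)

# export() returns an ExportedProgram.
exported_program = export(TinyModel(), example_inputs)

# QAT operates on the module recovered from the ExportedProgram,
# not on the ExportedProgram itself.
graph_module = exported_program.module()
print(type(graph_module))  # typically an fx-based module
```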
Not sure if you are referring to whether the …
Are we not importing this stuff to Phabricator and making sure it is not breaking anything?
@kimishpatel the way difftrain works on these is that a diff is created in Phabricator post-merge; if there are internal tests that break, then they are forward-fixed in a separate diff and exported, or the change is reverted.
But that means the diff train cannot land, right? If that's the case, that's OK. I just think that we should not land breaking changes internally.
Yup, so if internal tests break then the diff train cannot land until the change is reverted or a forward-fix diff is stacked on top of it.
Summary
Add an option to save only the torch.export()-ed model and skip the to_edge and to_executorch passes.
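A rough sketch of the intended control flow, assuming hypothetical names (`export_llama_model`, `export_only`, `output_path`, `build_model`); the actual flag and helpers in the Llama export script may differ.

```python
import torch
from torch.export import export, save

def export_llama_model(model, example_inputs, export_only: bool, output_path: str):
    # First stage: capture the model with torch.export().
    exported_program = export(model, example_inputs)

    if export_only:
        # Save the raw torch.export() artifact and stop here,
        # skipping the to_edge and to_executorch lowering passes.
        save(exported_program, output_path)
        return exported_program

    # Default path: lower to the Edge dialect and then to an
    # ExecuTorch program.
    from executorch.exir import to_edge

    edge_manager = to_edge(exported_program)
    et_program = edge_manager.to_executorch()
    with open(output_path, "wb") as f:
        f.write(et_program.buffer)
    return et_program
```

The sketch only illustrates the branch that the new argument introduces: with the flag set, the flow ends after `torch.export()` and serialization; without it, the usual lowering passes run.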
Test plan