
Conversation

@jackzhxng jackzhxng commented Nov 6, 2024

Summary

Adds an option to save only the torch.export()ed model and skip the to_edge and to_executorch passes.

Test plan

python -m examples.models.llama.export_llama --checkpoint /tmp/Llama-3.2-1B-Instruct/original/consolidated.00.pth  --params /tmp/Llama-3.2-1B-Instruct/original/params.json  --metadata '{"append_eos_to_prompt": 0, "get_bos_id":128000, "get_eos_ids":[128009, 128001], "get_n_bos": 0, "get_n_eos": 0}' --output_name="llama3_2_1B.pt2" --export_only

@jackzhxng jackzhxng added the release notes: examples Changes to any of our example LLMs integrations, such as Llama3 and Llava label Nov 6, 2024
pytorch-bot bot commented Nov 6, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/6695

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit a6a7162 with merge base 8f9fb7e (image):
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot facebook-github-bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Nov 6, 2024
@jackzhxng jackzhxng requested a review from cccclai November 7, 2024 01:02
@cccclai cccclai left a comment


Should we check whether we want the module or the exported_module? Asking because I think QAT works on the module, not the exported_module; we can double-check with @navsud


navsud commented Nov 11, 2024

Should we check whether we want the module or the exported_module? Asking because I think QAT works on the module, not the exported_module; we can double-check with @navsud

QAT works on torch.export.export(...).module().

@jackzhxng
Contributor Author

Not sure if you are referring to whether the torch.export()ed model vs. the eager model is needed, or whether the exported_program vs. exported_program.module() should be saved. If the latter: you can only save an ExportedProgram; after you load it, you can call .module() on it.

@jackzhxng jackzhxng merged commit 2f6d64f into main Nov 12, 2024
39 checks passed
@jackzhxng jackzhxng deleted the jz/export-only branch November 12, 2024 16:48
@kimishpatel
Contributor

Are we not importing this to Phabricator to make sure it is not breaking anything?


jackzhxng commented Nov 13, 2024

@kimishpatel the way the diff train works on these is that a diff is created in Phabricator post-merge. If internal tests break, they are forward-fixed in a separate diff and exported, or the change is reverted.

@kimishpatel
Contributor

But that means the diff train cannot land, right? If that's the case, that's ok. I just think that we should not land changes that break things internally.

@jackzhxng
Contributor Author

Yup, if internal tests break then the diff train cannot land until the change is reverted or a forward-fix diff is stacked on top of it.
