[llama] Build the runner with tiktoken by default #4921
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/4921
Note: Links to docs will display an error until the docs builds have been completed.
❌ 2 New Failures, 1 Cancelled Job as of commit dedef9c with merge base 959bb1b. The failing jobs are new on this PR; the cancelled job should be retried.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@larryliu0820 has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
Force-pushed from 3999038 to fe6f4e9.
Force-pushed from 9faf929 to a231da6.
Summary: As titled. We want to get rid of the preprocessor flag `EXECUTORCH_USE_TIKTOKEN` and build the tiktoken tokenizer by default. At model load time, the runner will try to read the artifact with each of the tokenizers.

Test Plan: All CI jobs pass.

Reviewed By: JacobSzwejbka

Differential Revision: D61830302

Pulled By: larryliu0820
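For illustration, here is a minimal C++ sketch of the load-time fallback this enables. The names (`Tokenizer`, `Tiktoken`, `BPETokenizer`, `load_tokenizer`) are hypothetical stand-ins, not the actual ExecuTorch runner API: with tiktoken always compiled in, the runner can probe the artifact at runtime instead of picking a tokenizer at build time.

```cpp
// Hypothetical sketch of runtime tokenizer selection; the real runner's
// class names and load signatures may differ.
#include <memory>
#include <string>

// Illustrative base class standing in for the runner's tokenizer interface.
class Tokenizer {
 public:
  virtual ~Tokenizer() = default;
  // Returns true if the artifact at `path` parsed in this tokenizer's format.
  virtual bool load(const std::string& path) = 0;
};

class Tiktoken : public Tokenizer {
 public:
  bool load(const std::string& path) override {
    // A real implementation would parse the tiktoken ranks file here.
    (void)path;
    return false;  // stub: pretend the artifact is not tiktoken
  }
};

class BPETokenizer : public Tokenizer {
 public:
  bool load(const std::string& path) override {
    // A real implementation would parse the legacy BPE artifact here.
    (void)path;
    return true;  // stub: pretend the artifact is BPE
  }
};

// At model load time, probe the artifact with each tokenizer in turn
// instead of choosing one at compile time with EXECUTORCH_USE_TIKTOKEN.
std::unique_ptr<Tokenizer> load_tokenizer(const std::string& path) {
  if (auto tk = std::make_unique<Tiktoken>(); tk->load(path)) {
    return tk;
  }
  if (auto bpe = std::make_unique<BPETokenizer>(); bpe->load(path)) {
    return bpe;
  }
  return nullptr;  // neither format matched the artifact
}
```

The design point is that the format decision moves from a compile-time `#ifdef` to a runtime probe of the artifact, so a single runner binary can serve both BPE-tokenized models (e.g. Llama 2) and tiktoken-tokenized models (e.g. Llama 3).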
Force-pushed from d45a87c to d34e569.
This pull request was exported from Phabricator. Differential Revision: D61830302
Force-pushed from d34e569 to 37976b0.
Force-pushed from 37976b0 to cd69d34.
Force-pushed from cd69d34 to a0fc553.
Force-pushed from a0fc553 to 74e4bc6.
Force-pushed from 74e4bc6 to dedef9c.