Enable VBE support on CPU #3174
Conversation
This pull request was exported from Phabricator. Differential Revision: D63410944
Summary: X-link: facebookresearch/FBGEMM#286. Pull Request resolved: pytorch#3174. Previously, VBE on CPU was enabled in lookup_{{ optimizer }}.py. To support MTIA ops, VBE should be done after torch.ops.fbgemm.{{ mdesc }}_embedding_codegen_lookup_{{ optimizer }}_function_pt2. This diff follows the same implementation but enables it in C++ so that the call goes through the same PT2 pipeline (i.e., lookup -> VBE autograd -> CPU wrapper (do VBE here) -> CPU kernel). Reviewed By: q10. Differential Revision: D63410944
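For readers unfamiliar with VBE (variable batch-size embeddings): each embedding feature may use its own batch size, so the lookup result is returned as a flat 1-D tensor rather than a fixed [B, total_D] matrix. The snippet below is a minimal, hypothetical sketch of the kind of output reassembly that the "do VBE here" step in the CPU wrapper refers to; the function name, arguments, and layout are illustrative assumptions, not FBGEMM's actual API.

```python
import torch

def vbe_output_from_dense(dense_output: torch.Tensor,
                          batch_sizes: list[int],
                          dims: list[int]) -> torch.Tensor:
    # Hypothetical illustration only (not FBGEMM's API): take a dense
    # lookup output of shape [max_B, total_D] and, for each feature t,
    # keep only its real batch_sizes[t] rows of its dims[t]-wide column
    # slice, concatenating everything into a flat 1-D VBE-style output.
    chunks = []
    col = 0
    for b_t, d_t in zip(batch_sizes, dims):
        chunks.append(dense_output[:b_t, col:col + d_t].reshape(-1))
        col += d_t
    return torch.cat(chunks)

# Tiny usage example: two features with embedding dims 4 and 8 and
# per-feature batch sizes 2 and 3 (max_B = 3, total_D = 12).
dense = torch.randn(3, 12)
vbe_out = vbe_output_from_dense(dense, batch_sizes=[2, 3], dims=[4, 8])
print(vbe_out.shape)  # torch.Size([32]) == 2 * 4 + 3 * 8
```

Doing this reassembly inside the C++ CPU wrapper, rather than in lookup_{{ optimizer }}.py, keeps the CPU path on the same lookup -> VBE autograd -> wrapper -> kernel pipeline described above.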
This pull request has been merged in f9de209.