[pytorch][ao] force weight observer/fake_quant to be on the same device as the weight tensor #106755
Conversation
This pull request was exported from Phabricator. Differential Revision: D48141494
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
[pytorch][ao] force weight observer/fake_quant to be on the same device as the weight tensor (pytorch#106755)

Summary:
As title. There's a corner case where both CPU and GPU are available: although the model is moved to CPU, the newly created PTQ weight observer is still on GPU. Therefore, during convert, this line will fail: https://fburl.com/4rhipfvb

Test Plan: CI

Differential Revision: D48141494

Pull Request resolved: pytorch#106755
Approved by: https://github.com/jerryzh168
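The device mismatch described above can be illustrated with a minimal sketch. This is not the actual PR code; the tensor and observer names below are illustrative, and it only assumes the standard `torch.ao.quantization` observer API. The idea is to move the freshly created observer onto the weight tensor's device before it records statistics, so convert does not hit a cross-device error:

```python
import torch
from torch.ao.quantization.observer import MinMaxObserver

# Illustrative weight tensor; in the corner case above the model (and its
# weights) have been moved to CPU while a GPU is still visible.
weight = torch.randn(8, 4)

# A newly created observer may default to a different device than the weight.
observer = MinMaxObserver()

# The fix, in spirit: force the observer onto the weight's device before use.
observer = observer.to(weight.device)

# Record min/max statistics and compute quantization parameters; both now
# run with observer buffers and weight on the same device.
observer(weight)
scale, zero_point = observer.calculate_qparams()
```

Since observers are `nn.Module`s, `observer.to(weight.device)` moves their internal buffers (`min_val`/`max_val`) along with them, which is what keeps the subsequent convert step from failing.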