[pytorch][ao] force weight observer/fake_quant to be on the same device as the weight tensor (#106755)

Summary: Pull Request resolved: #106755

As title. There is a corner case where both CPU and GPU are available: even though the model has been moved to CPU, the newly created PTQ weight observer is still on GPU. As a result, this line fails during convert: https://fburl.com/4rhipfvb

Test Plan: CI

Differential Revision: D48141494

fbshipit-source-id: 8736e84a6242e18edde862408f11c9d3f8c5b4d3
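A minimal sketch of the device-mismatch scenario and the fix described above. The helper `attach_weight_observer` is hypothetical (not the actual code changed by this PR); it illustrates moving a freshly created PTQ observer onto the weight's device before observing, so that convert does not hit a CPU/GPU mismatch:

```python
import torch
# MinMaxObserver is the standard PTQ observer in torch.ao.quantization
from torch.ao.quantization.observer import MinMaxObserver


def attach_weight_observer(module: torch.nn.Module) -> MinMaxObserver:
    """Create a weight observer on the same device as module.weight.

    Hypothetical helper for illustration only. Without the .to(...)
    call below, a newly constructed observer can end up on a different
    device than the weight (e.g. GPU observer, CPU weight) when both
    devices are available, and convert fails with a device mismatch.
    """
    observer = MinMaxObserver(
        dtype=torch.qint8,
        qscheme=torch.per_tensor_symmetric,
    )
    # The fix in spirit: force the observer onto the weight's device.
    observer.to(module.weight.device)
    # Observe the weight to populate min/max statistics.
    observer(module.weight.detach())
    return observer


# Usage: a CPU model gets a CPU observer, regardless of GPU availability.
linear = torch.nn.Linear(4, 2)  # weight lives on CPU here
obs = attach_weight_observer(linear)
scale, zero_point = obs.calculate_qparams()
```

The same principle applies to fake-quant modules, which wrap an observer: the wrapper must follow the observed tensor's device, not the default device at construction time.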