Quantized model using boolean_dispatch not picklable
#60210
Labels
low priority
oncall: quantization
triaged
🐛 Bug
Follow-up bug from #57352. A model quantized using prepare_fx is not picklable due to this boolean_dispatch function.

To Reproduce
Same steps as in #57352; use a recent PyTorch nightly.
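For context, the failure mode is the general one where pickle cannot serialize a nested function, since it looks functions up by their qualified name. The sketch below uses a hypothetical stand-in for boolean_dispatch (the name and behavior of boolean_dispatch_like are assumptions for illustration, not PyTorch's actual implementation): it returns an inner function, which pickle then fails to resolve.

```python
import pickle

def boolean_dispatch_like(if_true, if_false):
    # Hypothetical stand-in: returns a nested function that dispatches
    # on a boolean flag, similar in spirit to boolean_dispatch.
    def fn(x, flag=True):
        return if_true(x) if flag else if_false(x)
    return fn

dispatched = boolean_dispatch_like(lambda x: x + 1, lambda x: x - 1)

try:
    pickle.dumps(dispatched)
    picklable = True
except Exception:
    # pickle fails: fn is a local object and cannot be found
    # via its qualified name (boolean_dispatch_like.<locals>.fn)
    picklable = False

print("picklable:", picklable)
```

If a quantized model holds a reference to such a nested function, pickling the whole model fails the same way.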
Version:
Expected behavior
Empty output; pickle.dumps should succeed without raising.
Environment
Additional context
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel