support LSTM for quantization aware training #42594
Labels
low priority
We're unlikely to get around to doing this in the near future
oncall: quantization
Quantization support in PyTorch
triaged
This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
🐛 Bug
An LSTM network cannot be evaluated after being prepared for quantization aware training. The same warning does not appear if the model is evaluated before preparing.
To Reproduce
Steps to reproduce the behavior:
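The original reproduction code was not preserved in this copy of the issue. Below is a minimal sketch of the kind of setup described; the `LSTMModel` class, layer sizes, and input shape are hypothetical, not from the original report, and the prepare/evaluate step is guarded because the exact failure mode varies across PyTorch versions.

```python
import torch
import torch.nn as nn

class LSTMModel(nn.Module):
    # Hypothetical stand-in model; the original repro code is not preserved here.
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=4, hidden_size=8, batch_first=True)
        self.fc = nn.Linear(8, 2)

    def forward(self, x):
        out, _ = self.lstm(x)          # nn.LSTM returns (output, (h_n, c_n))
        return self.fc(out[:, -1, :])  # predict from the last time step

model = LSTMModel()
x = torch.randn(1, 5, 4)

# Evaluation before preparing works as expected.
out = model(x)
assert out.shape == (1, 2)

# Prepare for quantization aware training with the eager-mode API,
# then try to evaluate: this is where the reported failure occurs.
model.train()
model.qconfig = torch.quantization.get_default_qat_qconfig("fbgemm")
try:
    qat_model = torch.quantization.prepare_qat(model)
    qat_model.eval()
    qat_model(x)
    print("prepared model evaluated without error")
except Exception as e:
    print("preparing/evaluating failed:", type(e).__name__)
```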
Error:
Expected behavior
The model should output a tensor for the tensor fed to it.
Environment
Additional context
The error can also be produced if these lines are used instead:
qat_model.qconfig = torch.quantization.default_qconfig
qat_model = torch.quantization.prepare(qat_model)
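For completeness, a runnable sketch of that alternative post-training prepare path. The bare `nn.LSTM` module and its sizes are hypothetical stand-ins for the original `qat_model`, and the call is guarded since the exact failure varies by PyTorch version.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for qat_model; the original model definition is not preserved here.
qat_model = nn.LSTM(input_size=4, hidden_size=8, batch_first=True)
qat_model.eval()

# The alternative lines from the report: eager-mode post-training prepare.
qat_model.qconfig = torch.quantization.default_qconfig
try:
    qat_model = torch.quantization.prepare(qat_model)
    qat_model(torch.randn(1, 5, 4))
    print("prepared model evaluated without error")
except Exception as e:
    print("preparing/evaluating failed:", type(e).__name__)
```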
cc @jerryzh168 @jianyuh @dzhulgakov @raghuramank100 @jamesr66a @vkuzo