Quantized _out functions don't follow same conventions as other out functions in the codebase #36508
Labels:
- better-engineering: Relatively self-contained tasks for better-engineering contributors
- low priority: We're unlikely to get around to doing this in the near future
- oncall: quantization: Quantization support in PyTorch
- triaged: This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
Currently they're defined like:
However, a standard out function looks like this:
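The issue body's original code snippets did not survive extraction, so as a hedged illustration of the convention being referenced: a standard out variant takes `out` as its final, keyword-only argument, writes the result into that pre-allocated tensor in place, and returns the same object. The `add_out` name and pure-Python body below are a hypothetical mimic of that calling convention, not the actual ATen C++ implementation:

```python
# Hedged sketch of the standard "out" convention, mimicked in pure Python.
# In the real codebase these are C++ ATen kernels; `add_out` here is a
# hypothetical stand-in used only to show the argument ordering.

def add_out(a, b, *, out):
    """Standard convention: `out` comes last and is keyword-only; the
    result is written into it in place, and the same object is returned."""
    for i in range(len(out)):
        out[i] = a[i] + b[i]
    return out  # returning `out` allows chained calls

# Usage: the caller supplies pre-allocated storage.
result = [0, 0, 0]
returned = add_out([1, 2, 3], [4, 5, 6], out=result)
assert returned is result  # same object, mutated in place
```

The complaint in this issue is that the quantized `_out` variants deviate from this ordering/return convention, which matters for the alias analysis annotations mentioned below.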
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @dzhulgakov @kevinbchen (who added the alias analysis annotations to these functions) and @z-a-f (who appears to have originally added these out variants at #23971 )