🚀 The feature, motivation and pitch
In PyTorch 2.0.1 (conda pytorch py3.8_cuda11.8_cudnn8.7.0_0), using vmap with torch.greater produces the following warning:

UserWarning: There is a performance drop because we have not yet implemented the batching rule for aten::greater.Scalar. Please file us an issue on GitHub so that we can prioritize its implementation. (Triggered internally at /opt/conda/conda-bld/pytorch_1682343998658/work/aten/src/ATen/functorch/BatchedFallback.cpp:82.)
MWE
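The original snippet was not preserved here; below is a minimal sketch of what such an MWE could look like, assuming a Python scalar as the second argument to torch.greater (which dispatches to aten::greater.Scalar, the overload named in the warning):

```python
import torch

def f(x):
    # Comparing against a Python scalar dispatches to aten::greater.Scalar,
    # which has no functorch batching rule, so vmap falls back to a loop.
    return torch.greater(x, 0.5)

x = torch.rand(4, 3)
out = torch.vmap(f)(x)  # emits the UserWarning about the missing batching rule
print(out)
```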
Alternatives
The alternative is simply to accept the performance drop: the computation still completes correctly, just via the slow fallback path.
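A possible workaround, not from the original report but an assumption on my part: pass the threshold as a 0-dim tensor so the comparison dispatches to the Tensor overload, which, to my understanding, is covered by functorch's binary pointwise batching rules:

```python
import torch

def f(x):
    # Using a tensor "other" dispatches to the Tensor overload of greater,
    # which should hit a pointwise batching rule instead of the fallback.
    return torch.greater(x, torch.tensor(0.5))

out = torch.vmap(f)(torch.rand(4, 3))  # no fallback warning expected
```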
Additional context
Related: pytorch/functorch#1080
cc @zou3519 @Chillee @samdow @soumith @kshitij12345 @srossross