
UserWarning: There is a performance drop because we have not yet implemented the batching rule for aten::greater.Scalar #107240

Closed
jonathan-macart opened this issue Aug 15, 2023 · 2 comments

Comments

@jonathan-macart

jonathan-macart commented Aug 15, 2023

🚀 The feature, motivation and pitch

In PyTorch 2.0.1 (conda pytorch py3.8_cuda11.8_cudnn8.7.0_0), when using vmap with torch.greater, I get the following warning:

UserWarning: There is a performance drop because we have not yet implemented the batching rule for aten::greater.Scalar. Please file us an issue on GitHub so that we can prioritize its implementation. (Triggered internally at /opt/conda/conda-bld/pytorch_1682343998658/work/aten/src/ATen/functorch/BatchedFallback.cpp:82.)

MWE

import torch

def foo(x):
    return torch.greater(x, 1.0)

vmap_foo = torch.vmap(foo)
bar = torch.linspace(0, 10, 100)

assert torch.sum(vmap_foo(bar) ^ foo(bar)) == 0

Alternatives

The alternative is to experience a performance drop. (The computation does complete correctly.)
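As a sketch of another alternative (not from the original report): because torch.greater is elementwise and already broadcasts over the leading dimension, the vmap wrapper can simply be dropped for this case, which sidesteps the fallback path and its warning entirely. The names here mirror the MWE above.

```python
import torch

def foo(x):
    return torch.greater(x, 1.0)

bar = torch.linspace(0, 10, 100)

# Elementwise comparisons broadcast over the batch dimension on
# their own, so calling foo directly avoids the vmap fallback
# (and its warning) while producing the same result.
direct = foo(bar)               # plain broadcast, no warning
batched = torch.vmap(foo)(bar)  # goes through the batching machinery

assert torch.equal(direct, batched)
```

This only works because the function is a pure elementwise op; functions that genuinely need per-sample semantics still require vmap.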

Additional context

Related: pytorch/functorch#1080

cc @zou3519 @Chillee @samdow @soumith @kshitij12345 @srossross

@zou3519 zou3519 transferred this issue from pytorch/functorch Aug 15, 2023
@kshitij12345
Collaborator

This was fixed in #96744. If you try the nightly build, you shouldn't see the warning there. Thanks!

@jonathan-macart
Author

Awesome; I can confirm the warning does not occur in the nightly. Thank you!
