
[jit] torch.isfinite is broken #29340

Closed
driazati opened this issue Nov 7, 2019 · 8 comments
Labels
oncall: jit Add this issue/PR to JIT oncall triage queue

Comments


driazati commented Nov 7, 2019

```python
import torch

def fn(x):
    print(torch.isfinite(x))


s = torch.jit.script(fn)
fn(torch.randn(2, 2))
s(torch.randn(2, 2))
```

results in

```
tensor([[True, True],
        [True, True]])
graph(%x.1 : Tensor):
  %4 : None = prim::Constant() # ../test.py:14:0
  %2 : float = prim::ImplicitTensorToNum(%x.1) # ../test.py:15:10
  %3 : bool = aten::isfinite(%2) # ../test.py:15:10
   = prim::Print(%3) # ../test.py:15:4
  return (%4)
Traceback (most recent call last):
  File "../test.py", line 21, in <module>
    s(torch.randn(2, 2))
RuntimeError: Cannot input a tensor of dimension other than 0 as a scalar argument
The above operation failed in interpreter, with the following stack trace:
at ../test.py:15:10
def fn(x):
    print(torch.isfinite(x))
          ~~~~~~~~~~~~~~ <--- HERE
```

It crashes at runtime, and the signature is wrong because the actual Tensor op is never bound in; instead, isfinite is registered as a scalar-only op here:

```cpp
DEFINE_UNARY_FLOAT_OP(aten::isfinite, std::isfinite(a), bool),
```



cc @suo
@facebook-github-bot added the oncall: jit label on Nov 7, 2019

eellison commented Nov 7, 2019

This is an example of a general source of bugs in TorchScript: if an operator that should be overloaded for both Scalar and Tensor arguments is registered only for the Scalar case, it leads to bad implicit casting and a runtime error. The fix is to expose the op that takes a Tensor input.

See also: #27512

#20664
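The failure mode described above can be sketched outside PyTorch. The dict-based "tensor" and the function names below are illustrative stand-ins, not PyTorch internals: only a scalar overload exists, so the dispatcher must implicitly convert the tensor argument to a number, which only succeeds for 0-dimensional inputs.

```python
import math

def implicit_tensor_to_num(tensor):
    # Stand-in for prim::ImplicitTensorToNum: only a 0-dim "tensor" can
    # be converted to a scalar; anything else raises, as in the report.
    if tensor["dim"] != 0:
        raise RuntimeError(
            "Cannot input a tensor of dimension other than 0 as a scalar argument")
    return tensor["value"]

def isfinite_scalar_only(tensor):
    # Only the Scalar overload is registered, so every call implicitly
    # casts the tensor argument first, then applies the scalar op.
    return math.isfinite(implicit_tensor_to_num(tensor))

scalar = {"dim": 0, "value": 1.5}
matrix = {"dim": 2, "value": [[1.0, 2.0], [3.0, 4.0]]}

print(isfinite_scalar_only(scalar))  # prints True: 0-dim input casts fine
# isfinite_scalar_only(matrix)       # raises RuntimeError, as in the traceback
```

Binding the real Tensor overload instead removes the implicit cast entirely, which is the fix described above.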

@zetyquickly

> This is an example of a general source of bugs in TorchScript: if an operator that should be overloaded for both Scalar and Tensor arguments is registered only for the Scalar case, it leads to bad implicit casting and a runtime error. The fix is to expose the op that takes a Tensor input.
>
> See also: #27512
>
> #20664

Thanks a lot for your reply.
Does "expose" mean copy-pasting the function's code, or does it mean specifying the input types at the call site?


eellison commented Nov 7, 2019

Hmm, actually it looks like we're not correctly resolving torch.isfinite to torch.functional.isfinite.

A workaround is to call torch.functional.isfinite(x)


zetyquickly commented Nov 7, 2019

> Hmm, actually it looks like we're not correctly resolving torch.isfinite to torch.functional.isfinite.
>
> A workaround is to call torch.functional.isfinite(x)

I've tried that; it doesn't work. torch.functional.isfinite(x) returns a bool while scripting, but a Tensor in Python:

```
Variable 'bool_tensor' is annotated with type Tensor but is being assigned to a value of type bool:
at box_regression.py:84:8
        """
            deltas (Tensor): some tensor
        """
        bool_tensor : torch.Tensor = torch.functional.isfinite(deltas)
        ~~~~~~~~~~~ <--- HERE
```
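On versions affected by this bug, one manual workaround (my suggestion, not something proposed in this thread) is to express isfinite through elementwise ops that do script: a value is finite iff it equals itself (not NaN) and its absolute value is not infinity. In TorchScript that would be `(x == x) & (x.abs() != float('inf'))`; here is the same identity sketched in plain Python:

```python
# Sketch of the "finite iff x == x and |x| != inf" identity that a manual
# TorchScript-compatible isfinite would rely on, shown for plain floats.
def isfinite_manual(x: float) -> bool:
    # NaN is the only value unequal to itself; +/-inf fail the abs() check.
    return x == x and abs(x) != float("inf")

vals = [1.0, float("nan"), float("inf"), -float("inf")]
print([isfinite_manual(v) for v in vals])  # [True, False, False, False]
```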


eellison commented Nov 8, 2019

Hmm, actually you're right: none of the functions in torch/functional.py work in the JIT unless they are a 1-1 mapping with the C++ functions. Functions defined only in torch/tensor.py don't work either...

@zetyquickly

> Hmm, actually you're right: none of the functions in torch/functional.py work in the JIT unless they are a 1-1 mapping with the C++ functions. Functions defined only in torch/tensor.py don't work either...

How can we solve this? What kind of changes could we propose to the repository?

@eellison

FYI: #28918

@eellison

This should be fixed now.
