Bad overload order for zeros_like #19685
Why are there two overloads in the first place? We should strive for only one overload, as this makes error messages much more consistent with other Python functions.
That's true. In reality, all the TensorOptions arguments need to be optional, and we should merge the TensorOptions passed in with the input tensor's options as the fallback values. Right now the logic to do that is in the binding code. Who is the right person to work on this, @gchanan? I can help, but I don't know how it fits into the ongoing schema unification work.
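The merge-with-fallback behavior described above could be sketched roughly as follows. This is a hypothetical illustration only: the `merge_options` helper and its dict-based representation of TensorOptions are made up for this example and are not the actual binding code.

```python
def merge_options(input_options, dtype=None, layout=None, device=None):
    # Hypothetical helper: an explicitly passed option wins; any option
    # left as None falls back to the input tensor's own options.
    return {
        "dtype": dtype if dtype is not None else input_options["dtype"],
        "layout": layout if layout is not None else input_options["layout"],
        "device": device if device is not None else input_options["device"],
    }
```

With optional arguments merged this way, `zeros_like(t, device="cuda")` would keep the input's dtype and layout while overriding only the device, instead of relying on splatted defaults baked into one overload's signature.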
The "frontend" schema unification part is complete, in that all function signatures in native match their corresponding signatures in JIT. The backends are still super complicated, because they do things like splatting / reverse splatting TensorOptions for no good reason. I believe @VitalyFedyunin has been looking a bit into TensorOptions sanity, but I don't know exactly what he has planned in the short term.
I've put the TensorOptions work on hold while @smessmer is working on the codegen unification, as it might introduce more complexity into the code base.
In the pytorch_torch_functions.cpp generated code, the zeros_like binding declares two overloads. In this case, the second overload will never get hit, because all the arguments in the first overload have defaults. This is causing a bug where tracing will "constant-ify" device instead of correctly deriving the device at runtime from the input tensor (see #19637).
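The shadowing problem can be sketched with a hypothetical first-match dispatcher in Python. The `dispatch` helper and both `zeros_like_*` functions are illustrative stand-ins, not PyTorch's actual argument parser:

```python
def dispatch(overloads, *args, **kwargs):
    # Try each overload in declaration order; the first one whose
    # arguments all bind successfully wins.
    for fn in overloads:
        try:
            return fn(*args, **kwargs)
        except TypeError:
            continue
    raise TypeError("no matching overload")

# Overload 1: every argument after `input` has a default, so it binds
# any call that passes just an input.
def zeros_like_full(input, dtype=None, layout=None, device=None,
                    requires_grad=False):
    return "full overload"

# Overload 2: unreachable, because overload 1 already binds `input` alone.
def zeros_like_short(input):
    return "short overload"
```

Calling `dispatch([zeros_like_full, zeros_like_short], some_tensor)` always selects the first overload, so the second is dead code under this ordering.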
I tracked the overload sorting code to here but I don't understand it. My desired behavior is to swap the order of the overloads in cases like the above.
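One way the desired reordering could work is to sort overloads so that signatures with fewer defaulted parameters are tried first, ensuring a fully-defaulted signature cannot shadow a more specific one. This is a hypothetical ordering rule sketched in Python, not the actual sorting logic in the codegen:

```python
import inspect

def sort_overloads(overloads):
    # Hypothetical rule: try the overload with the fewest defaulted
    # parameters first, so a signature whose arguments all have
    # defaults is tried last. `sorted` is stable, so ties keep their
    # original declaration order.
    def num_defaulted(fn):
        params = inspect.signature(fn).parameters.values()
        return sum(1 for p in params
                   if p.default is not inspect.Parameter.empty)
    return sorted(overloads, key=num_defaulted)
```

Under this rule, an overload taking only `input` would be tried before one where every option has a default, which matches the swap described above for cases like `zeros_like`.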
Can anyone help me out? cc. @gchanan, @cpuhrsch, @ezyang