Implicit float conversion in function calls #1908
Conversation
Implicit casts to float were disabled intentionally. For instance, …
The (expected) order here is:
This should lead to a sensible overload resolution.
This can never happen with a custom Python type. Also, we do not really perform overload resolution prioritization like that. E.g. the …
(force-pushed from 0230e58 to 159e0dd)
Can you please provide a test case that you expect to fail with this PR? I'm fine with working on the overload resolution, but I think making …
I described the test case above.
See it pick the overload based on the weather. This is because we actually do not perform any overload resolution, but this behavior prevents the user from forcing an exact type match with codecs when the type on the Python side has … If somebody wants this behavior, they can register a codec for types implementing … What we might want is for …
(force-pushed from 882ccb6 to c4325dd)
@filmor I think we still should not just enable …
That effectively boils down to just using the "real" …
Just to recap: this will now check, before attempting a conversion, whether the object identifies as a …
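The "identifies as" check described here can be sketched in plain Python (the names below are illustrative, not the actual pythonnet implementation):

```python
def identifies_as_float(obj):
    # Only types that explicitly implement __float__ are treated as
    # float-like. Looking the method up on the type, not the instance,
    # mirrors how CPython resolves special methods.
    return hasattr(type(obj), "__float__")


class Celsius:
    """Illustrative user type that opts into float conversion."""

    def __init__(self, degrees):
        self.degrees = degrees

    def __float__(self):
        return float(self.degrees)


assert identifies_as_float(Celsius(21.5))
assert identifies_as_float(3)          # int defines __float__
assert not identifies_as_float("21.5")  # str does not
```

Strings are the important negative case: a converter guarded this way will not silently parse `"21.5"` into a `Double` parameter.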
(force-pushed from 0b05129 to 8799f82)
@lostmsu If this is mergeable for you, I think it would solve the reported issue sufficiently that we can continue with the releases without having to touch method resolution right now (which I also really don't want to). PyTorch's …
My problem with this approach is that it does not allow us to do:

```csharp
string Foo(float x) => "float32";
string Foo(double x) => "float64";
```

```python
Foo(numpy.float32(42.0))
```
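A minimal sketch of why this overload cannot be picked via `__float__` alone. `Float32` here is a hypothetical stand-in for `numpy.float32` (NumPy itself is not used):

```python
import struct

class Float32:
    """Stand-in for numpy.float32: value is round-tripped through
    IEEE binary32 on construction."""

    def __init__(self, value):
        self.value = struct.unpack("f", struct.pack("f", value))[0]

    def __float__(self):
        # __float__ must return a Python float, which is 64-bit, so
        # the 32-bit origin of the value is erased at this point.
        return self.value


x = float(Float32(42.0))
# A converter that only consults __float__ sees a plain 64-bit float
# and has no basis to prefer Foo(float) over Foo(double).
assert type(x) is float
assert x == 42.0
```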
It does, you just need to be explicit. But I get your point; I just think that the reasonable default is more important than optimising for this overload. We can document this particular issue.
Thanks guys!
I think we can do better than document it, and actually special-case ctypes, NumPy, PyTorch, and TensorFlow types that have exact precision. Preferably also leave room for users to special-case other precise types. It is not that hard, and if we don't do it now, we'll have to wait until 4.0 to make another breaking change.
I'm strongly against special-casing particular library types. If you want to allow proper …
(force-pushed from 8799f82 to 2194d6c)
That's exactly what I want to do in a week or two.
I'll merge this in the meantime. Feel free to revert or adjust the behaviour when you attempt your refactoring. In this state, I'd be fine with publishing it.
What does this implement/fix? Explain your changes.
Uses `__float__` when available if a Python object is passed to a function with a `Double` or `Single` parameter.

Does this close any currently open issues?
#1833
Any other comments?
This is not yet supported for integer types, which would use `__int__` or `__index__` (for Python >= 3.10), as the respective logic is for some reason only implemented for `AsLongLong` and `AsLong`, not for the unsigned counterparts and also not for the `AsSize_t` variants that we are using.
Checklist
Check all those that are applicable and complete.
AUTHORS
CHANGELOG