`tf.cast` does not preserve requested precision for Python types of float64/int64/complex128 etc. #64548
Comments
Hi @jonas-eschle, thank you!
Hi @jonas-eschle, I have tested the given code snippet with TF 2.15. For the first case (casting float64 to float32 and checking whether both are the same), the assertion fails. For the second case (casting a float64 value to float64 and checking whether both are the same), the assertion also fails, but IMO it should not. Attached gist for reference.
True, the equal fails. The equal was meant to show that the numbers are actually the same; I adjusted the script to include the conversion so that the equal works (note that the …)
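(Roughly, the adjustment amounts to converting the reference value explicitly so that both sides of the comparison have the same dtype; the value below is illustrative, since the adjusted gist itself is not shown here:)

```python
import tensorflow as tf

x = 1.0 + 1e-12  # illustrative: a Python float not exactly representable in float32

out = tf.cast(x, tf.float64)
ref = tf.constant(x, dtype=tf.float64)  # explicit conversion so dtypes match

# The equality check itself now works (both sides are float64 tensors), but it
# still evaluates to False because tf.cast already dropped the extra precision.
print(bool(out == ref))
```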
Maybe, just to be clear @SuryanarayanaY: this isn't a 2.15-only issue. It appears in all versions up to the nightlies and has probably existed for years.
Hi @jonas-eschle, I have checked the documentation and found a note stating that casting Python floats to higher precision can lead to a loss of precision. This is expected behaviour, and it is documented.
I see, this is true, although it is highly unexpected behavior IMHO. I do not quite understand why https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/math_ops.py?rgh-link-date=2024-03-26T22%3A03%3A17Z#L1019 doesn't just take the `dtype` argument; the comment there is not clear to me. I think this is unexpected and should either a) fail if a Python type is given, or b) convert to the requested tensor type first (the comment suggests that things could go wrong, but then, what does a user expect? It will obviously be converted to a tensor). Doing things silently "wrong" seems not ideal, especially since the conversion to float32 is somewhat arbitrary (i.e. why 32? It could be 64, or 16). Feel free to close or keep open for a fix; I would regard it as a design issue, and there is a "fix" comment there, so why not do it right?
Hi @jonas-eschle, if you want to modify the comment in the docs, please feel free to create a PR. The source is here.
This issue is stale because it has been open for 7 days with no activity. It will be closed if no further activity occurs. Thank you.
This issue was closed because it has been inactive for 7 days since being marked as stale. Please reopen if you'd like to work on this further.
Issue type
Bug
Have you reproduced the bug with TensorFlow Nightly?
Yes
Source
binary
TensorFlow version
any new version, up to develop
Custom code
No
Current behavior?
`tf.cast` truncates the precision of non-TF (Python, NumPy, ...) types where the requested conversion is higher than the default of `convert_to_tensor`, i.e. `float64`, `int64`, `complex128`.

Example: casting a NumPy float64 array (or a Python float) with `tf.cast` returns a tf.Tensor of dtype float64 with effective float32 precision (see example below).

The reason is in https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/math_ops.py#L1006: if the input is not a TF type, it is converted to the default type returned by `convert_to_tensor`, which for floats (as an example) is float32. Later on, it up-casts (https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/math_ops.py#L1019), but the precision was already lost at that point.
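For reference, the code path in question looks roughly like this (paraphrased from `cast()` in math_ops.py; the exact surrounding code varies by version):

```python
# Paraphrased sketch of the relevant lines in math_ops.cast():
x = ops.convert_to_tensor(x, name="x")  # a Python float becomes float32 here
if x.dtype != base_type:  # base_type is the requested dtype, e.g. float64
  x = gen_math_ops.cast(x, base_type, name=name)  # up-cast: precision already gone
```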
Proposed solution

The comment suggests that providing `dtype` could convert things that are otherwise inconvertible. However, isn't that happening already? And wouldn't such a conversion fail later anyway, in the cast at https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/math_ops.py#L1019? I do not see any downside to using `dtype=dtype`. If you agree, I can make the change and see if the tests still pass.
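Concretely, the proposed change would amount to roughly the following one-liner (a sketch, using the names from math_ops.py):

```python
# Pass the requested dtype straight through, so non-TF inputs are materialized
# at the target precision instead of the default (e.g. float32 for Python floats):
x = ops.convert_to_tensor(x, dtype=base_type, name="x")
```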
Standalone code to reproduce the issue
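The original snippet is not reproduced in this thread; a minimal sketch of the described failure (the value `1.0 + 1e-12` is illustrative) might look like:

```python
import tensorflow as tf

# A Python float carrying more precision than float32 can represent.
x = 1.0 + 1e-12

casted = tf.cast(x, tf.float64)              # internally goes via float32 first
expected = tf.constant(x, dtype=tf.float64)  # converts directly to float64

print(casted.numpy())    # 1.0 -- the 1e-12 part was truncated
print(expected.numpy())  # 1.000000000001
assert casted == expected  # fails on affected versions
```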