To improve UX and make the code less error-prone, allow users to provide different dtypes in binary arithmetic ops (add/sub/mul/div/...) and matmul, just like in numpy.
The dtype of the result is upcast, e.g. matMul(float32, int32) => float32.
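The upcasting rule can be sketched as a small helper. This is a minimal, self-contained illustration of numpy-style type promotion for binary ops; the name `upcastType` and the three-dtype lattice are assumptions for this sketch, not necessarily the library's actual API.

```typescript
// Sketch of numpy-style dtype upcasting for binary ops (assumed helper,
// simplified to three dtypes ordered bool < int32 < float32).
type DType = 'bool' | 'int32' | 'float32';

const DTYPE_RANK: Record<DType, number> = { bool: 0, int32: 1, float32: 2 };

// Return the wider of the two dtypes; the result of add/sub/mul/div/matmul
// on mixed inputs takes this dtype.
function upcastType(a: DType, b: DType): DType {
  return DTYPE_RANK[a] >= DTYPE_RANK[b] ? a : b;
}

console.log(upcastType('float32', 'int32')); // float32
console.log(upcastType('bool', 'int32'));    // int32
```

With such a rule, matMul(float32, int32) produces a float32 result rather than throwing on the dtype mismatch.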
This will go out in patch release 0.14.1, which fixes the breakage in 0.14.0 caused by #1408: with the improved dtype inference, tensor(new Int32Array()) is now inferred as int32, whereas it was previously float32.
Fixes tensorflow/tfjs#934, tensorflow/tfjs#966