Keras fails to load saved model / properly infer dtypes in tf.math.maximum
#47161
Labels:
- comp:ops (OPs related issues)
- stat:awaiting tensorflower (Status - Awaiting response from tensorflower)
- TF 2.9 (Issues found in the TF 2.9 release (or RCs))
- type:bug (Bug)
System information
Describe the current behavior
Keras fails to load a saved model due to problems with inferring data types in tf.math.maximum. In particular: take the input to the network to be float32, then cast the tensor to float64 and feed it into a maximum layer (with the second input to that layer being a float64 constant tensor). Creating this model succeeds. Save that model (below I used the SavedModel format, but the .h5 case is similar) and try to load it: an error is raised.
Moreover, the error is raised only when the constant tensor is passed as the first input to tf.math.maximum; it does not occur when it is passed as the second input. See the Colab notebook I attached below.
I believe this is strictly Keras-related, as I was able to successfully convert this failing-to-load model to a .tflite version (with the proper converter options set, using the from_saved_model method), and that tflite model works and its data types are correct.
Describe the expected behavior
The model should load properly.
Standalone code to reproduce the issue
This code snippet should reproduce the issue:
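The original snippet did not survive extraction; below is a hypothetical reconstruction assembled from the description above, written against the tf.keras API as it existed at the time of the report (TF 2.x). The input shape, constant value, and save path are illustrative assumptions, not values from the original report.

```python
import tensorflow as tf

# float32 network input, cast to float64 inside the model
inp = tf.keras.Input(shape=(4,), dtype=tf.float32)
x = tf.cast(inp, tf.float64)

# float64 constant tensor, passed as the FIRST argument to tf.maximum:
# per the report, this is the argument order that fails to load.
a = tf.constant(0.5, dtype=tf.float64)
out = tf.maximum(a, x)

# Building the model succeeds.
model = tf.keras.Model(inp, out)

# Save in the SavedModel format, then try to reload it.
model.save("/tmp/maximum_repro")
try:
    reloaded = tf.keras.models.load_model("/tmp/maximum_repro")
    print("model reloaded without error")
except Exception as err:
    # On affected versions, a dtype-inference error is raised here.
    print("load failed:", err)
```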
BTW: when using x = tf.maximum(x, a) instead of x = tf.maximum(a, x) in the example above, the error is not raised! See also the slightly more elaborate Colab notebook.
Other info / logs