tf.keras.layers.BatchNormalization() throws TypeError: Incompatible types: <dtype: 'resource'> vs. int64. Value is 0 #31894
Comments
I could reproduce the issue with TensorFlow 1.14.0 and tf-nightly. Here is the gist.
@gadagashwini I am using 1.14.0. However, if you don't wrap BatchNormalization in a TimeDistributed layer, it works.
@iamnotahumanbecauseiamabot, Thanks for the update.
@robieta I have updated the issue; please see the update.
I am seeing the same thing when using MirroredStrategy. The model works fine when executing normally, and we don't have any TimeDistributed layers. If you want another ticket I am happy to make one, but I probably won't be able to produce a repro case, as the model is quite large.
@sseveran It would be good if you created another issue for the MirroredStrategy case. It would be even better if you could provide simple standalone code. Thanks!
I'm getting the same error on TensorFlow 1.15 when using batch normalization inside a custom RNNCell + RNN layer. The error only appears inside the RNN's while loop. The problem seems to have been resolved in TensorFlow 2; it would be great if the fix could be backported to version 1, if that is possible. (I tried adapting control_flow_ops.py accordingly, but ran into more errors that I don't understand.)
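The custom-cell scenario above can be sketched as follows. The reporter's actual cell was not posted, so the cell name, sizes, and wiring below are illustrative assumptions; the key point is that BatchNormalization is invoked inside the cell's step function, which runs inside the RNN's while loop.

```python
# Hedged sketch, assuming a minimal custom RNN cell; names and
# shapes are illustrative, not from the original report.
import numpy as np
import tensorflow as tf

class BNCell(tf.keras.layers.Layer):
    """A minimal custom RNN cell that applies BatchNormalization
    inside the step function, i.e. inside the RNN's while loop."""
    def __init__(self, units, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.state_size = units          # required by tf.keras.layers.RNN
        self.dense = tf.keras.layers.Dense(units)
        self.bn = tf.keras.layers.BatchNormalization()

    def call(self, inputs, states):
        h = self.dense(inputs) + states[0]
        h = self.bn(h)                   # the call that failed on TF 1.15
        return h, [h]

# On TF 1.15 this pattern raised the resource-vs-int64 TypeError
# inside the while loop; on TF 2.x it runs.
layer = tf.keras.layers.RNN(BNCell(4))
x = np.random.rand(2, 6, 3).astype("float32")
y = layer(x)
print(y.shape)  # (2, 4) — last-step output of the 4-unit cell
```

The RNN wrapper drives the cell once per timestep, so any variable-reading op inside `call` (such as batch norm's moving statistics) ends up inside the generated while loop, which is where the TF 1.x control-flow code mishandled resource variables.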
@akloss Closing this issue as it was resolved in TF version 2. Thanks!
Throws error:

UPDATE: this works, but if you set the trainable boolean to True, it throws the same error.
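The failing combination named in the title can be sketched as follows. The original snippet was not preserved in this scrape, so the input shapes and model wiring are illustrative assumptions; only the `TimeDistributed(BatchNormalization())` pairing comes from the report.

```python
# Hedged sketch, assuming any input with a time axis triggers the bug;
# shapes below are illustrative, not from the original report.
import numpy as np
import tensorflow as tf

# Wrapping BatchNormalization in TimeDistributed raised
# "TypeError: Incompatible types: <dtype: 'resource'> vs. int64. Value is 0"
# on TF 1.14; on TF 2.x the same model builds and runs.
inp = tf.keras.Input(shape=(5, 8))  # (time, features)
out = tf.keras.layers.TimeDistributed(
    tf.keras.layers.BatchNormalization())(inp)
model = tf.keras.Model(inp, out)

x = np.random.rand(2, 5, 8).astype("float32")
y = model(x, training=True)
print(y.shape)  # (2, 5, 8) — shape is preserved, only normalization applied
```

Per the update above, on TF 1.14 the error only appears with `trainable=True` (or `training=True`), since that path reads and updates batch norm's resource variables inside generated control flow.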