Connecting to invalid output 163 of source node GRU_1/while which has 163 outputs. #39908
Comments
I tried in colab with TF 2.1.0 and I am not seeing any issue. However, I am able to reproduce the issue with TF 2.2.0 and the nightly version (2.3.0-dev20200527). Please find the gist here. Thanks!
I was talking about this. The bottom line is that in the new "stable" version 2.2.0, the code does not work. This is either a bug or some kind of change that I do not understand. I want to use the new version 2.2.0 in all my work code, but this one doesn't work, so I reported it as a bug.
Thanks for reporting the issue. I think there are several issues we need to address here:
Btw, I think the issue is probably related to #38906, where we have the same finding for tensorflow.python.framework.errors_impl.AlreadyExistsError. Also, disable_eager_execution() will probably cause some side effects in your code base, since it falls back to legacy behavior, which is not recommended for current users. Do you really need eager mode turned off, or are you just trying that to see if it works around the issue?
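For context, a minimal sketch of why disabling eager execution is more than a local tweak (the call and its process-wide effect are real TF APIs; the comments are illustrative):

```python
import tensorflow as tf

# disable_eager_execution() is process-wide and should be called before
# any graphs, ops, or tensors are created. Everything afterwards runs in
# TF1-style graph mode, which is why it can have side effects across an
# entire code base rather than just one model.
tf.compat.v1.disable_eager_execution()

print(tf.executing_eagerly())  # False: the whole process is now in graph mode
```

There is no per-model toggle; once disabled, every subsequent Keras model in the process is built and trained through the legacy graph path.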
No, I don't need eager mode off; I just tried different setups to get my code working, but without success.
Ok, I guess this might be a regression, since we refactored the training logic a bit between 2.1 and 2.2. Currently the code expects each output of the model to have a matching label.
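A minimal sketch of what "each output needs a matching label" means in practice (the model and data here are made up for illustration):

```python
import numpy as np
from tensorflow import keras

# Hypothetical two-output model: under the refactored 2.2 training logic,
# fit() expects one label array per model output.
inp = keras.Input(shape=(4,))
out_a = keras.layers.Dense(1, name="a")(inp)
out_b = keras.layers.Dense(1, name="b")(inp)
model = keras.Model(inp, [out_a, out_b])
model.compile(optimizer="sgd", loss="mse")

x = np.random.rand(8, 4).astype("float32")
y_a = np.random.rand(8, 1).astype("float32")
y_b = np.random.rand(8, 1).astype("float32")

# One label per output; passing a single label for a two-output model
# is the mismatch the training code now rejects.
model.fit(x, [y_a, y_b], epochs=1, verbose=0)
```

So if a model exposes an intermediate output you don't train on, either remove it from the model's outputs or supply a label (and loss) for it.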
Now I removed these outputs, and eager mode is enabled, but:
Code on colab:
The model.fit is failing the same way as #38906, and we will send a fix very soon.
Btw, 80a9367 should fix the issue for training. Let me verify it when we have a new nightly PIP.
I tested your colab with the latest nightly, and it is working now. Closing this issue.
@qlzh727 I still get an error when using tf-nightly (2.4.0-dev20200722)
Do you have a colab to repro the issue?
@qlzh727 here https://colab.research.google.com/drive/1XuQWSLa41BFcHlAD2S25cKa6OdxNEuAf?usp=sharing
It seems that your code disables eager execution, and I think the code will work if eager is enabled. Is there any reason you disable eager execution?
@qlzh727 It's a long story. I want to use an attention model to extract attention scores, but I can't find any TF2 API for that, so I considered training the model in TF2 mode and saving it in TF1 mode. I have been searching the net for a long time, with no luck. I reproduced the issue on colab. If you have good suggestions, please give me some advice. Thank you. https://colab.research.google.com/drive/1hq4WWM481pcKH8JoO43gFciZrFeKe-3_?usp=sharing Edit: I've found a solution to save the model in TF2 mode, but it still causes an error when eager execution is disabled.
Why does disabling eager execution cause an error? Can the issue be fixed?
System information
Describe the current behavior
My code works great on 2.1.1 but does not work on 2.2.0. (Error log №1 below)
Empirically, I found that the problem appears if dropout or recurrent_dropout is used in the GRU layers.
I also tried changing GRU to LSTM; same problem.
I tried tf.compat.v1.experimental.output_all_intermediates() with both True and False; it has no effect.
On 2.2.0 it works ONLY if I remove the dropout and recurrent_dropout options from the GRU layers AND disable eager execution with the tf.compat.v1.disable_eager_execution() command.
But if I remove the dropouts and eager is enabled, I get another error (Error log №2 below).
Standalone code to reproduce the issue
Test case with this problem:
https://colab.research.google.com/drive/1HUayaLsHNZ30JaBlxvLyQz7Evf1FnsD5?usp=sharing
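Beyond the colab above, a minimal sketch of the kind of model described in the report (layer sizes and data are illustrative; on TF builds that include the fix, this combination of dropout and recurrent_dropout should train without error):

```python
import numpy as np
from tensorflow import keras

# Illustrative GRU model using both dropout and recurrent_dropout,
# the combination reported to trigger the error on TF 2.2.0.
model = keras.Sequential([
    keras.layers.GRU(16, dropout=0.2, recurrent_dropout=0.2,
                     input_shape=(10, 3)),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Dummy data: 8 sequences of length 10 with 3 features each.
x = np.random.rand(8, 10, 3).astype("float32")
y = np.random.rand(8, 1).astype("float32")
history = model.fit(x, y, epochs=1, verbose=0)
```

Note that a nonzero recurrent_dropout also disables the cuDNN fast path for GRU/LSTM, so the layer falls back to the generic implementation; that is the code path the bug lived in.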
Other info / logs
Error log №1:
Error log №2: