Bidirectional LSTM example throws shape error #818
Comments
Please try installing TensorFlow 1.0 or 1.1; the example is not currently compatible with TensorFlow 1.2.
I can confirm this does work fine with TensorFlow 1.1.0. That means there was a breaking change in
I found the problem: removing the import for
* fix bidirectional_rnn working with TF 1.2 (resolves #818)
* fix bidirectional RNN, ensure backward compatibility
Hi @colinskow, could you please revisit this problem one more time? I am still getting this error with TensorFlow 1.4. Other people seem to be seeing this problem as well with more recent TensorFlow versions (issue #988). Thank you for your concern. Best,
I'm getting the same error using the following packages on a Linux GPU machine, in a conda environment with Python 2.7 (the same code gives no error on my MacBook with TensorFlow CPU): tensorflow 1.3.0. I was initially using Keras together with the same tensorflow-gpu package and got a segmentation fault trying to add any type of recurrent layer, and then moved to tflearn but am again having problems with this type of network. Please fix the problem in all TensorFlow distributions. I wonder why the GPU version is always one step behind the CPU version. Thank you!
Oh, I have this problem too! It seems this problem hasn't been fixed so far; neither the TensorFlow nor the tflearn team wants to do the job! But I'd like to know what this error actually means.
Same problem here, some insight would be nice. |
same problem found! I am working with tensorflow_gpu 1.8.0, please revisit the issue again! |
Oh, I've found the same problem with tf.version
aye, here too... tensorflow 1.8.0 |
Same issue again. Please revisit this! |
same error... tensorflow 1.9.0 |
same error here!... |
d01a0b9#diff-5885c0e8432a40cf7a3e4b2448302e3a |
same error here ....... tensorflow 1.14 |
Getting the same error in Tensorflow 1.15! Please revisit! |
FWIW, I hit this issue today too, and it turned out a different root cause led to the same error message. Given that there was a comment saying it was fixed in TF >= 1.2, I'd bet that the original issue was indeed fixed and all the later errors were caused by other problems. It was so frustrating initially to see all the comments here complaining about an issue from 5 years ago without answers, so I figured I'd shed some light. The error message basically says the code expects a 3-dimensional input, i.e. (128, ?, ?) instead of (128, ?). For my specific error, it was caused by an upstream bug where one dimension was dropped, i.e. the shape should have been (128, 256, ?).
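To make the comment above concrete, here is a minimal sketch of the rank mismatch using plain NumPy (the function name, shapes, and fix are hypothetical illustrations, not code from the tflearn example): recurrent layers expect a rank-3 tensor of shape (batch, timesteps, features), so a rank-2 array like (128, 256) triggers exactly this kind of "must have rank at least 3" error, and adding the missing axis resolves it.

```python
import numpy as np

def check_rnn_input_rank(x, min_rank=3):
    # Mimics the shape check behind the reported ValueError:
    # recurrent layers expect (batch, timesteps, features).
    if x.ndim < min_rank:
        raise ValueError(
            "Shape (%s) must have rank at least %d"
            % (", ".join(str(d) for d in x.shape), min_rank)
        )
    return x

bad = np.zeros((128, 256))       # rank 2: missing the feature axis, would raise
good = bad[:, :, np.newaxis]     # add a trailing axis -> shape (128, 256, 1)
check_rnn_input_rank(good)       # rank 3: passes the check
```

Whether the right fix is `np.newaxis` (or `tf.expand_dims`) or repairing an upstream step that dropped a dimension depends on where the extra axis was lost, as the comment above describes.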
Running the following file throws an error:
https://github.com/tflearn/tflearn/blob/master/examples/nlp/bidirectional_lstm.py
ValueError: Shape (128, ?) must have rank at least 3
Setup: