
Bidirectional LSTM example throws shape error #818

Closed
colinskow opened this issue Jun 29, 2017 · 17 comments

Comments

@colinskow
Contributor

Running the following file throws an error:
https://github.com/tflearn/tflearn/blob/master/examples/nlp/bidirectional_lstm.py

ValueError: Shape (128, ?) must have rank at least 3

Setup:

  • macOS Sierra (10.12)
  • Python 2.7
  • Tensorflow v1.2.0
  • TFLearn v0.3.2
Traceback (most recent call last):
  File "bidirectional_lstm.py", line 47, in <module>
    net = bidirectional_rnn(net, BasicLSTMCell(128), BasicLSTMCell(128))
  File "/Users/colin/tensorflow/lib/python2.7/site-packages/tflearn/layers/recurrent.py", line 374, in bidirectional_rnn
    dtype=tf.float32)
  File "/Users/colin/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/rnn.py", line 375, in bidirectional_dynamic_rnn
    time_major=time_major, scope=fw_scope)
  File "/Users/colin/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/rnn.py", line 574, in dynamic_rnn
    dtype=dtype)
  File "/Users/colin/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/rnn.py", line 637, in _dynamic_rnn_loop
    for input_ in flat_input)
  File "/Users/colin/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/rnn.py", line 637, in <genexpr>
    for input_ in flat_input)
  File "/Users/colin/tensorflow/lib/python2.7/site-packages/tensorflow/python/framework/tensor_shape.py", line 649, in with_rank_at_least
    raise ValueError("Shape %s must have rank at least %d" % (self, rank))
ValueError: Shape (128, ?) must have rank at least 3
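For context on the rank complaint: TensorFlow's dynamic RNN API expects a single rank-3 tensor of shape [batch, time, features], while the older static API takes a Python list of rank-2 [batch, features] tensors, one per time step. A minimal NumPy sketch of the two layouts (all sizes here are illustrative, not taken from the example script):

```python
import numpy as np

batch, time_steps, features = 32, 200, 128  # illustrative sizes

# dynamic_rnn layout: one rank-3 tensor [batch, time, features]
dynamic_input = np.zeros((batch, time_steps, features))
print(dynamic_input.ndim)    # 3 -- satisfies a with_rank_at_least(3) check

# static_rnn layout: a list of rank-2 tensors, one per time step
static_input = [np.zeros((batch, features)) for _ in range(time_steps)]
print(static_input[0].ndim)  # 2 -- the kind of shape the rank-3 check rejects
```

Feeding per-timestep rank-2 tensors into the dynamic API is exactly the mismatch the traceback reports.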
@willduan
Contributor

Please try installing TensorFlow 1.0 or 1.1; TFLearn is not currently compatible with TensorFlow 1.2.

@colinskow
Contributor Author

I can confirm this does work fine with TensorFlow 1.1.0. That means there was a breaking change in tensorflow.python.ops.rnn.bidirectional_dynamic_rnn. I would like to get this working in the latest version since I am relying on new 1.2.0 features.

colinskow added a commit to colinskow/tflearn that referenced this issue Jul 6, 2017
@colinskow
Contributor Author

colinskow commented Jul 6, 2017

I found the problem:
On TF <= 1.1.0 recurrent.py imports static_bidirectional_rnn which works.
On TF >= 1.2.0 recurrent.py imports bidirectional_dynamic_rnn which is not compatible with the current implementation.

Removing the import for bidirectional_dynamic_rnn solves it. PR inbound.

aymericdamien pushed a commit that referenced this issue Jul 10, 2017
* fix bidirectional_rnn working with TF 1.2 (resolves #818)

* fix bidirectional RNN, ensure backward compatibility
@stepsma

stepsma commented Jan 11, 2018

Hi @colinskow

Could you please revisit this problem one more time? I am still getting this error with TensorFlow 1.4, and other people seem to be seeing it with more recent TensorFlow versions as well (issue #988). Thank you for your concern.

Best,

@FTAsr

FTAsr commented Jan 17, 2018

I'm getting the same error with the following packages on a Linux GPU machine, in a conda environment with Python 2.7 (the same code gives no error on my MacBook with CPU TensorFlow):

tensorflow 1.3.0 0
tensorflow-base 1.3.0 py27h0dbb4d0_1
tensorflow-gpu 1.3.0 0
tensorflow-gpu-base 1.3.0 py27cuda8.0cudnn6.0_1
tensorflow-tensorboard 0.1.5 py27_0
tflearn 0.3.2

I was initially using Keras with the same tensorflow-gpu package and got a segmentation fault when adding any type of recurrent layer, so I moved to TFLearn, but I am again having problems with this type of network. Please fix the problem in all TensorFlow distributions. I wonder why the GPU version is always one step behind the CPU version. Thank you!

@MrKZZ

MrKZZ commented Jan 22, 2018

Oh, I have this problem too! It seems this problem hasn't been fixed so far, and neither the TensorFlow nor the TFLearn team wants to do the job! But I would like to know what this error means:
ValueError: Shape (128, ?) must have rank at least 3

@fiorinin

fiorinin commented Feb 1, 2018

Same problem here, some insight would be nice.

@thomas-young-2013

Same problem found! I am working with tensorflow_gpu 1.8.0; please revisit the issue!

@maxiaomu

maxiaomu commented Jul 2, 2018

Oh, I've found the same problem with tf.__version__ '1.6.0-rc0'.

@signalprime

aye, here too... tensorflow 1.8.0

@g-laz77

g-laz77 commented Oct 10, 2018

Same issue again. Please revisit this!

@sandove

sandove commented Nov 5, 2018

same error... tensorflow 1.9.0

@dyq0811

dyq0811 commented Nov 8, 2018

same error here!...

@zhvankh

zhvankh commented Feb 17, 2019

d01a0b9#diff-5885c0e8432a40cf7a3e4b2448302e3a
This is the best answer to this problem.

@Vani261196

same error here ....... tensorflow 1.14

@Syauri

Syauri commented Apr 5, 2020

Getting the same error in Tensorflow 1.15! Please revisit!

@yunfanye

yunfanye commented Jun 3, 2022

FWIW, I met this issue today too and it turned out it was a different root cause that led to the same error message. Given that there was a comment saying it was fixed in TF >= 1.2, I'd bet that the issue was indeed fixed and all the later errors were caused by other issues.

It was so frustrating initially to see all the comments here complaining about an issue from 5 years ago without answers, so I figured I'd shed some light.

The error message basically says that the code expects a 3-dimensional input, i.e. (128, ?, ?) instead of (128, ?). My specific error was caused by an upstream bug where one dimension was dropped, i.e. the shape should have been (128, 256, ?).
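In other words, once you find where the dimension was lost, re-inserting it explicitly makes the rank check pass. A NumPy sketch of the idea (the axis position is an assumption; it depends on where your pipeline dropped the dimension):

```python
import numpy as np

x = np.zeros((128, 256))        # rank 2: the kind of shape the check rejects
print(x.ndim)                   # 2

# Re-insert the missing axis. Which position is correct depends on where
# the upstream code dropped it; axis=2 here is purely illustrative.
x3 = np.expand_dims(x, axis=2)  # shape (128, 256, 1)
print(x3.ndim)                  # 3
```

The real fix, of course, is to repair the upstream code so the dimension is never lost, rather than papering over it with a reshape.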
