
Keras/TF compatibility issues #3

Open
JackMedley opened this issue Dec 6, 2016 · 7 comments

Comments

@JackMedley

Hi Yarin,

Could I ask which version of TF/Keras you used to run this? I'm having problems running it with TensorFlow 0.11.0 and Keras 1.1.2; I see the following error:

Exception: Error when checking model target: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 1 arrays but instead got the following list of 10496 arrays: [-0.64800038508682045, -0.64800038508682045, 1.1463771479409459, 1.1463771479409459, 1.1463771479409459, -0.48487515481156884, -0.48487515481156884, 0.058875612772602504, 0.058875612772602504, -0.4848...

Cheers,
Jack

@goodmansasha

goodmansasha commented Dec 13, 2016

I've gotten this error also (Keras 1.1.2, Theano 0.8.2, TensorFlow 0.10.0, NumPy 1.11.2). I tried the model.fit call with both the Theano and TensorFlow backends.

Traceback (most recent call last):
  File "sentiment_lstm_regression.py", line 84, in <module>
    callbacks=[modeltest_1, modeltest_2])
  File "/usr/local/lib/python2.7/dist-packages/keras/models.py", line 652, in fit
    sample_weight=sample_weight)
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 1038, in fit
    batch_size=batch_size)
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 967, in _standardize_user_data
    exception_prefix='model target')
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 54, in standardize_input_data
    '...')
Exception: Error when checking model target: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 1 arrays but instead got the following list of 10620 arrays: [-0.64800038508682045, -0.64800038508682045, 1.1463771479409459, 1.1463771479409459, 1.1463771479409459, -0.48487515481156884, -0.48487515481156884, 0.058875612772602504, 0.058875612772602504, -0.4848...
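For what it's worth, this particular Keras error usually means the targets were passed as a plain Python list of scalars, which `standardize_input_data` interprets as a list of many separate arrays (one per model output) instead of one array of samples. A minimal sketch of the usual fix, with illustrative values (the variable name `y_train` is hypothetical, not taken from the script):

```python
import numpy as np

# Targets as a plain Python list of floats -- Keras misreads this as a
# list of 10k+ separate target arrays, producing the error above.
y_train = [-0.648, -0.648, 1.146, 1.146]

# Convert to a single NumPy array with shape (num_samples, 1).
y_train = np.asarray(y_train, dtype="float32").reshape(-1, 1)
print(y_train.shape)  # (4, 1)
```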

@yaringal
Owner

yaringal commented Dec 14, 2016

Thanks for opening an issue. TF / Theano / Keras keep changing, and I don't have the resources to keep the repo up to date - it's mostly for demonstration purposes. The main code in the repo has since been implemented in Keras, TensorFlow, and Torch - please have a look at those.

@goodmansasha

Will do! Thank you for the feedback; it's more helpful than you think. I have a follow-up question related to this and would appreciate being pointed in the right direction: what is the simplest way to measure prediction uncertainty using an RNN and dropout? For the IMDB sentiment task, for example, I imagine making K number of forward passes in Keras, with the Dropout layer being active in a different way each time, and getting a large set of alternative predictions of the sentiment around the main prediction without dropout (then I would look at the quantiles of those alternate predictions, similar to bootstrapping). Is there a switch in Keras, TensorFlow or Torch to do the forward passes with dropout engaged properly in each layer? And are we supposed to also sample from the input data when measuring the uncertainty?

@goodmansasha

I'm also curious if mxnet people are in the loop.

@yaringal
Owner

yaringal commented Dec 18, 2016

making K number of forward passes in Keras, with the Dropout layer being active in a different way each time, and getting a large set of alternative predictions of the sentiment around the main prediction without dropout (then I would look at the quantiles of those alternate predictions, similar to bootstrapping)

Yes, you want to use multiple forward passes, but you should look at the sample mean of those passes rather than the output of the model with no dropout.

Is there a switch in Keras, Tensorflow or Torch to do the forward passes with dropout engaged properly in each layer?

In Keras you can use K.function(model.inputs + [K.learning_phase()], model.outputs) (or something like this) to compile a function that will use dropout at test time. In TF you can just do a forward pass in training mode.

are we supposed to also sample from the input data when measuring the uncertainty?

No.

@goodmansasha

Thanks! I mentioned your implementation here: apache/mxnet#3930

@yaringal
Owner

Re-opening for people to see the answers above
