
[Error]: ValueError: Cannot feed value of shape (1, 3, 224, 224) for Tensor u'input_1_1:0', which has shape '(?, ?, ?, 3)' #90

cwzat opened this issue Jan 7, 2018 · 4 comments

cwzat commented Jan 7, 2018

I am running this on my own network:

import matplotlib.pyplot as plt
from keras import activations
from keras.models import load_model
from vis.utils import utils
from vis.visualization import visualize_saliency

# Load the trained model and swap the final activation for a linear one.
model = load_model('./model_single_frame_cnn/model_allframe_101_ori.h5')
name = model.layers[-1].name

layer_idx = utils.find_layer_idx(model, name)
model.layers[layer_idx].activation = activations.linear
model = utils.apply_modifications(model)

# Load a test image and plot its saliency map.
img1 = utils.load_img('test-a.jpg', target_size=(224, 224))
f, ax = plt.subplots(1, 2)

for i, img in enumerate([img1]):
    grads = visualize_saliency(model, layer_idx, filter_indices=0, seed_input=img)
    ax[i].imshow(grads, cmap='jet')

But I get this error:

Traceback (most recent call last):
  File "/Users/qiu/Documents/cwzpaper/video_classification/video_classification/single_frame_cnn/visualization.py", line 43, in
    grads = visualize_saliency(model, layer_idx, filter_indices=0, seed_input=img)
  File "/anaconda/lib/python2.7/site-packages/vis/visualization/saliency.py", line 125, in visualize_saliency
    return visualize_saliency_with_losses(model.input, losses, seed_input, grad_modifier)
  File "/anaconda/lib/python2.7/site-packages/vis/visualization/saliency.py", line 73, in visualize_saliency_with_losses
    grads = opt.minimize(seed_input=seed_input, max_iter=1, grad_modifier=grad_modifier, verbose=False)[1]
  File "/anaconda/lib/python2.7/site-packages/vis/optimizer.py", line 143, in minimize
    computed_values = self.compute_fn([seed_input, 0])
  File "/anaconda/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py", line 2273, in __call__
    **self.session_kwargs)
  File "/anaconda/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 895, in run
    run_metadata_ptr)
  File "/anaconda/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1100, in _run
    % (np_val.shape, subfeed_t.name, str(subfeed_t.get_shape())))
ValueError: Cannot feed value of shape (1, 3, 224, 224) for Tensor u'input_1_1:0', which has shape '(?, ?, ?, 3)'

My image data format is channels_last. How can I solve this? Thank you!

cwzat commented Jan 7, 2018

I looked at _get_seed_input in optimizer.py: desired_shape is (1, None, None, 3) and seed_input_shape is (1, 229, 229, 3), so the code enters the branch if seed_input.shape != desired_shape: and runs seed_input = np.moveaxis(seed_input, -1, 1).
Could you fix this problem?
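
For illustration, a minimal sketch of that comparison using the shapes reported above: a shape containing None can never equal a fully defined shape, so the branch is always taken and the channel axis gets moved.

import numpy as np

# Shapes taken from the values reported above.
desired_shape = (1, None, None, 3)     # model input shape with undefined spatial dims
seed_input_shape = (1, 229, 229, 3)    # actual image batch shape

print(seed_input_shape != desired_shape)  # True, because None != 229

seed_input = np.zeros(seed_input_shape)
seed_input = np.moveaxis(seed_input, -1, 1)
print(seed_input.shape)  # (1, 3, 229, 229): a channels_first layout fed to a channels_last input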

cwzat commented Jan 7, 2018

Also, when I was using visualize_cam I ran into a related problem:
penultimate_layer.output has shape (None, None, None, 192), so output_dims ends up as None.
I think these are similar problems. Could you fix this as well?
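
A quick way to see this is to print the penultimate layer's output shape; with an undefined model input shape, the spatial dimensions come back as None (the layer name below is hypothetical, only to show the check):

from vis.utils import utils

# Hypothetical layer name, just to illustrate the check.
penultimate_layer_idx = utils.find_layer_idx(model, 'conv_penultimate')
print(model.layers[penultimate_layer_idx].output_shape)
# e.g. (None, None, None, 192) when the model's input shape is undefined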

4OH4 commented Jan 21, 2018

I had similar problems when modifying the example code for Inception V3. My issue was solved by defining the input_shape of the network, so that all of its layers had defined sizes:

from keras.applications import InceptionV3
model = InceptionV3(weights='imagenet', include_top=True, input_shape=(299, 299, 3))

and then changing the image load size to (299, 299):

img1 = utils.load_img('test-a.jpg', target_size=(299, 299))
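
For a custom saved model like the one in the original post, the same idea might look roughly like this (a sketch, assuming the network expects 224 x 224 RGB input):

from keras.layers import Input
from keras.models import Model, load_model

# Wrap the loaded model with a fixed-shape Input so its input shape is fully
# defined instead of (None, None, None, 3). Assumes 224 x 224 RGB images.
base = load_model('./model_single_frame_cnn/model_allframe_101_ori.h5')
fixed_input = Input(shape=(224, 224, 3))
model = Model(inputs=fixed_input, outputs=base(fixed_input))
print(model.input_shape)  # (None, 224, 224, 3)

Note that the original layers end up nested inside the wrapped sub-model, so another option is to rebuild the architecture with a fixed input_shape and load the saved weights into it.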

@adrianmfi

I had the same problem when using variable-sized inputs.
The problem comes from vis.optimizer.Optimizer._get_seed_input(self, seed_input), in

if seed_input.shape != desired_shape:
    seed_input = np.moveaxis(seed_input, -1, 1)

The problem is that in Keras, variable-sized dimensions of the model input are represented as None, and None compared to an integer is never equal.

For example, the model input could have shape (None, None, 224, 224, 3) when using 224 x 224 RGB videos as input, while the input data has shape (1, 10, 224, 224, 3) for a video of length 10. This makes seed_input.shape != desired_shape evaluate to True, which results in swapping the seed input's channel axis with np.moveaxis(seed_input, -1, 1).

My quick fix was just to comment out those two lines in vis.optimizer:

# if seed_input.shape != desired_shape:
#     seed_input = np.moveaxis(seed_input, -1, 1)
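
A less drastic alternative (just a sketch, not part of the library) would be to treat the shapes as mismatched only when a defined dimension actually differs, so that None entries are ignored:

import numpy as np

def shape_matches(actual, desired):
    # True if every defined (non-None) entry of desired equals the
    # corresponding entry of actual.
    return len(actual) == len(desired) and all(
        d is None or a == d for a, d in zip(actual, desired)
    )

# Only move the channel axis when the seed input is genuinely incompatible
# with the model's expected input shape.
if not shape_matches(seed_input.shape, desired_shape):
    seed_input = np.moveaxis(seed_input, -1, 1)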
