
input and filter must have the same depth: 4 vs 3 #5

Closed

TD-101 opened this issue Dec 20, 2016 · 16 comments

TD-101 commented Dec 20, 2016

Hi,

I get this error/exception, and while it is being handled, the same error/exception occurs again.
I am fairly new to this, so it could be any of a number of problems, from the way I have set up Python, TensorFlow, etc., to unsuitable hardware, but I thought I would put it up here in case someone has an easy fix!

Traceback (most recent call last):
File "/usr/local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 965, in _do_call
return fn(*args)
File "/usr/local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 947, in _run_fn
status, run_metadata)
File "/usr/local/Cellar/python3/3.5.2_1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/contextlib.py", line 66, in exit
next(self.gen)
File "/usr/local/lib/python3.5/site-packages/tensorflow/python/framework/errors.py", line 450, in raise_exception_on_not_ok_status
pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors.InvalidArgumentError: input and filter must have the same depth: 4 vs 3
[[Node: import/conv2d0_pre_relu/conv = Conv2D[T=DT_FLOAT, data_format="NHWC", padding="SAME", strides=[1, 2, 2, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/cpu:0"](ExpandDims, import/conv2d0_w)]]

ghost commented Jan 24, 2017

@tomdawson91 Have you solved it? I had the same problem.

@gonzalolc

Same issue here! Has anyone solved it?

ghost commented Apr 18, 2017

img0 = np.float32(img0)[:,:,:3]

@gonzalolc

It works! Thanks @dattranx

TD-101 (Author) commented Apr 30, 2017

@dattranx Thanks. As I understand it, this just cuts out the alpha channel.
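
For reference, a minimal sketch of that workaround, assuming the input was loaded from a 4-channel (RGBA) PNG with PIL; the file name and variable names are hypothetical:

import numpy as np
import PIL.Image

# An RGBA PNG loads as shape (H, W, 4); hypothetical file name.
img0 = np.float32(PIL.Image.open("input.png"))

# Keep only the first three channels (R, G, B), dropping alpha,
# so the input depth matches the network's 3-channel filters.
img0 = img0[:, :, :3]
print(img0.shape)  # (H, W, 3)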

@TD-101 TD-101 closed this as completed Apr 30, 2017

R-Miner commented May 1, 2018

I get an error like: Status(StatusCode=InvalidArgument, Detail="input and filter must have the same depth: 1 vs 3
[[Node: conv2d_1/convolution = Conv2D[T=DT_FLOAT, data_format="NHWC", dilations=[1, 1, 1, 1], padding="VALID", strides=[1, 2, 2, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_input_1_0_0, conv2d_1/kernel/read)]]")'

Any thoughts on this? For me it is 1 vs 3.
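
A depth of 1 vs 3 usually means a single-channel (grayscale) image is being fed to a model whose first conv layer expects 3 channels. A minimal sketch of one way to fix that, assuming the input is a (H, W) or (H, W, 1) grayscale array; the array here is a placeholder:

import numpy as np

# Placeholder grayscale image; substitute your own (H, W) or (H, W, 1) array.
gray = np.zeros((224, 224, 1), dtype=np.float32)

# Replicate the single channel three times so the depth matches the
# model's 3-channel filters.
rgb = np.stack([np.squeeze(gray)] * 3, axis=-1)
print(rgb.shape)  # (224, 224, 3)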


CyLouisKoo commented Aug 1, 2018

I have also met this problem. How did you resolve it? Thank you for your reply. @TD-101
[screenshot attached]

@vishalvanpariya

@gonzalolc Please explain in more depth; I could not understand.

@khanfarhan10

I am coding Grad-CAM with keras-vis.
I tried

seed_input = tf.convert_to_tensor(img[:,:,:3])
seed_input = np.float32(img)[:,:,:3]

but neither of them worked for me.
I get the following error:

InvalidArgumentError Traceback (most recent call last)
in ()
20 penultimate_layer_idx = penultimate_layer_idx,#None,
21 backprop_modifier = None,
---> 22 grad_modifier = None)

8 frames
/usr/local/lib/python3.6/dist-packages/vis/visualization/saliency.py in visualize_cam(model, layer_idx, filter_indices, seed_input, penultimate_layer_idx, backprop_modifier, grad_modifier)
237 (ActivationMaximization(model.layers[layer_idx], filter_indices), -1)
238 ]
--> 239 return visualize_cam_with_losses(model.input, losses, seed_input, penultimate_layer, grad_modifier)

/usr/local/lib/python3.6/dist-packages/vis/visualization/saliency.py in visualize_cam_with_losses(input_tensor, losses, seed_input, penultimate_layer, grad_modifier)
158 penultimate_output = penultimate_layer.output
159 opt = Optimizer(input_tensor, losses, wrt_tensor=penultimate_output, norm_grads=False)
--> 160 _, grads, penultimate_output_value = opt.minimize(seed_input, max_iter=1, grad_modifier=grad_modifier, verbose=False)
161
162 # For numerical stability. Very small grad values along with small penultimate_output_value can cause

/usr/local/lib/python3.6/dist-packages/vis/optimizer.py in minimize(self, seed_input, max_iter, input_modifiers, grad_modifier, callbacks, verbose)
141
142 # 0 learning phase for 'test'
--> 143 computed_values = self.compute_fn([seed_input, 0])
144 losses = computed_values[:len(self.loss_names)]
145 named_losses = zip(self.loss_names, losses)

/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/backend.py in __call__(self, inputs)
3790 value = math_ops.cast(value, tensor.dtype)
3791 converted_inputs.append(value)
-> 3792 outputs = self._graph_fn(*converted_inputs)
3793
3794 # EagerTensor.numpy() will often make a copy to ensure memory safety.

/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in __call__(self, *args, **kwargs)
1603 TypeError: For invalid positional/keyword argument combinations.
1604 """
-> 1605 return self._call_impl(args, kwargs)
1606
1607 def _call_impl(self, args, kwargs, cancellation_manager=None):

/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in _call_impl(self, args, kwargs, cancellation_manager)
1643 raise TypeError("Keyword arguments {} unknown. Expected {}.".format(
1644 list(kwargs.keys()), list(self._arg_keywords)))
-> 1645 return self._call_flat(args, self.captured_inputs, cancellation_manager)
1646
1647 def _filtered_call(self, args, kwargs):

/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in _call_flat(self, args, captured_inputs, cancellation_manager)
1744 # No tape is watching; skip to running the function.
1745 return self._build_call_outputs(self._inference_function.call(
-> 1746 ctx, args, cancellation_manager=cancellation_manager))
1747 forward_backward = self._select_forward_and_backward_functions(
1748 args,

/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in call(self, ctx, args, cancellation_manager)
596 inputs=args,
597 attrs=attrs,
--> 598 ctx=ctx)
599 else:
600 outputs = execute.execute_with_cancellation(

/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
58 ctx.ensure_initialized()
59 tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
---> 60 inputs, attrs, num_outputs)
61 except core._NotOkStatusException as e:
62 if name is not None:

InvalidArgumentError: input depth must be evenly divisible by filter depth: 443 vs 3
[[node conv2d_1_1/convolution (defined at /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:3009) ]] [Op:__inference_keras_scratch_graph_10171]

Function call stack:
keras_scratch_graph
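
The 443 vs 3 here suggests that the channel (last) axis of seed_input holds an image dimension, likely the width, rather than 3 channels, so it is worth inspecting the array's shape before calling visualize_cam. A hedged diagnostic sketch; img stands for the array from the snippet above:

import numpy as np

# The error says the input's channel (last) axis is 443, not 3 or 4,
# so print the full shape to see where the channels actually ended up.
print(np.asarray(img).shape)

# visualize_cam expects a single image shaped (H, W, 3); slice off any
# alpha channel and make sure no transpose has moved the channel axis.
seed_input = np.float32(img)[:, :, :3]
print(seed_input.shape)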

@asr-aditya

The error is because of a mismatch in the dimensions of the input provided. The model requires an input depth of 3 but is given 4.

@AnjanaChankya

What does a depth of 3 mean?

@asr-aditya

What does a depth of 3 mean?

It means that if you are giving an image, it has 3 channels, i.e. the size of the image is (256, 256, 3).
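
In practice, the simplest way to guarantee a depth of 3, whether the source image is grayscale (1 channel) or RGBA (4 channels), is to convert it before turning it into an array. A minimal sketch using PIL; the file name is hypothetical:

import numpy as np
import PIL.Image

# convert("RGB") maps grayscale (1 channel) and RGBA (4 channels) alike
# to exactly 3 channels.
img = PIL.Image.open("example.png").convert("RGB")
arr = np.float32(img)
print(arr.shape)  # (H, W, 3)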

@LucasColas

Yes, you have to change the depth.

@Umraz-Hussain-MyWorld

[Screenshot from 2021-06-16 21-40-13 attached]

@Umraz-Hussain-MyWorld

Someone please help.

Zapbbx commented Feb 2, 2022

I'm not an expert, but it seems that the two source images I'm using don't have the same number of channels: I'm using PNG files, and one set of them has a transparent background (alpha 0), whereas the other set has colored backgrounds.
Saving both sets of files as JPG images works around the error, I think because this gets rid of the transparent (alpha) channel. In other words, the "shape" of your data needs to be the same. The same problem appears if you feed a grayscale image into a CNN which expects a color image: find the shape of the input, e.g. print(model.input.shape) in Keras gives something like (None, 224, 224, 3), and your input blob must have a corresponding shape, so a grayscale image has to be converted into a color image in which all three channels are the same.
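
Putting that together, a hedged sketch of preparing an arbitrary image for a Keras model whose input shape is (None, 224, 224, 3); MobileNetV2 and the file name are just placeholders:

import numpy as np
import PIL.Image
from tensorflow import keras

model = keras.applications.MobileNetV2()  # any model expecting (None, 224, 224, 3)
print(model.input.shape)

# Force exactly 3 channels and the expected spatial size, then add the
# batch axis so the array matches the model's input shape.
img = PIL.Image.open("photo.png").convert("RGB").resize((224, 224))
batch = np.float32(img)[None, ...]  # shape (1, 224, 224, 3)
preds = model.predict(batch)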
