
ValueError: operands could not be broadcast together with remapped shapes [original->remapped]: (3,2) and requested shape (2,2) #1435

Closed
davidb1 opened this issue Apr 14, 2019 · 12 comments


@davidb1

davidb1 commented Apr 14, 2019

Getting this error on some images. Any ideas?

It comes from:
molded_images, image_metas, windows = self.mold_inputs(images)
ValueError: operands could not be broadcast together with remapped shapes [original->remapped]: (3,2) and requested shape (2,2)
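The error can be reproduced with plain NumPy: a minimal sketch, assuming (as discussed below) that mold_inputs pads the image with a three-entry padding list that only fits a 3-D (H, W, C) array, so a 2-D grayscale image makes the pad widths fail to broadcast.

```python
import numpy as np

# A 2-D grayscale image padded with a 3-entry padding list
# (one pair per dimension of a 3-D RGB image) triggers the
# same broadcast failure as in the issue title.
gray = np.zeros((64, 64))           # stand-in 2-D grayscale image
padding = [(0, 0), (0, 0), (0, 0)]  # pad widths for a 3-D (H, W, C) image
try:
    np.pad(gray, padding, mode="constant")
except ValueError as e:
    print(e)  # broadcast error: (3,2) pad widths vs. 2-D image
```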

@davidb1 davidb1 closed this as completed Apr 14, 2019
@davidb1
Author

davidb1 commented Apr 14, 2019

Make sure the image isn't grayscale, and convert it to RGB if it is (as in the load_image function in utils)
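A minimal sketch of that conversion in plain NumPy, assuming skimage-style gray2rgb behavior (replicating the single channel three times):

```python
import numpy as np

def to_rgb(image):
    """Convert a 2-D grayscale image to 3-channel RGB by replicating
    the single channel (similar to skimage.color.gray2rgb)."""
    if image.ndim == 2:
        image = np.stack([image] * 3, axis=-1)
    return image

gray = np.zeros((4, 5), dtype=np.uint8)  # stand-in grayscale image
print(to_rgb(gray).shape)  # (4, 5, 3)
```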

@xiongshuai520

Thanks, it worked for me.

@nooriahmed

Make sure the image isn't grayscale, and convert it to RGB if it is (as in the load_image function in utils)

I am facing the same issue. Would you please give a brief description? I could not get your point. I am using grayscale images. I would be grateful. Regards

@davidb1
Author

davidb1 commented Apr 23, 2019

I got this error when I was using (what turned out to be) a grayscale image.
MaskRCNN doesn't work with grayscale by default, and my model wasn't trained on grayscale, so I needed to convert the image to RGB before passing it in for inference.

@nooriahmed

I got this error when I was using (what turned out to be) a grayscale image.
MaskRCNN doesn't work with grayscale by default, and my model wasn't trained on grayscale, so I needed to convert the image to RGB before passing it in for inference.

Thank you very much for the kind response

@buaacarzp

Hi, I ran into the same issue, but I'm having trouble with the test model. Please help me; my QQ is 510695983.

@superabhijha2000

operands could not be broadcast together with remapped shapes [original->remapped]: (3,2) and requested shape (2,2)

@superabhijha2000

My code works, but when I pass it a grayscale image it gives the same error: operands could not be broadcast together with remapped shapes [original->remapped]: (3,2) and requested shape (2,2)

@bharath5673

Converting grayscale to RGB worked for me... #custom_mrcnn

@Genius-farmer

Genius-farmer commented Apr 13, 2021

Dear All,

I'm facing this error:

"ValueError: len(output_shape) cannot be smaller than the image dimensions"

Is there any solution for this?

Please assist with this posting

https://github.com/matterport/Mask_RCNN/issues/2534

@pudari2007

Converting grayscale to RGB worked for me... #custom_mrcnn

Thank you, it worked for me, but the syntax should be like this, e.g.
backtorgb = cv2.cvtColor(gray, cv2.COLOR_GRAY2RGB)
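As a sketch of what that conversion does (assuming OpenCV's COLOR_GRAY2RGB, which replicates the single channel into three identical channels), the same result can be obtained with plain NumPy:

```python
import numpy as np

# Replicate one grayscale channel into three identical RGB
# channels, matching the effect of cv2.COLOR_GRAY2RGB.
gray = np.arange(12, dtype=np.uint8).reshape(3, 4)  # stand-in grayscale image
rgb = np.stack([gray, gray, gray], axis=-1)
print(rgb.shape)  # (3, 4, 3)
```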

@GT84

GT84 commented Mar 24, 2022

Hi,

I know this has been closed for some time, and there may be forks or repos that are more up to date.
But this was the first and also the best implementation I used, so I am still using this repository as a starting point.

I'm currently working on a project that explicitly analyses grayscale images, so I did not want to convert grayscale to RGB.
I tried to debug until I ended up in keras.engine methods. It turns out that some methods from keras.engine that are used here require 3 dims in the shape. By default, some (or most) grayscale images will have only 2 dims.

So the (3,2) that does not map to (2,2) is simply a nested padding list (mrcnn/utils.py, lines 458 and 479), hardcoded for a 3-dimensional image/array.
Rearranging that code leads to other/further issues, which finally surface inside keras.engine.

Therefore, my solution is to just reshape every incoming image (not the mask), without changing values or image size, to the shape it would have in RGB or any other colored format:

>>> image.shape
(512, 1024)
>>> image = image.reshape((*image.shape, 1))
>>> image.shape
(512, 1024, 1)

By default this should go in the load_image() method of your Dataset, and also before you pass
any image into detect() on your MaskRCNN instance.

This worked for me without any problems. Maybe this helps someone who is still working with this.

Additionally:
IMAGE_CHANNEL_COUNT = 1 must be set in the Config.
Some adaptations in visualize.py are needed if you want to use the included plotting methods.
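The reshape described above can be sketched in plain NumPy (the (512, 1024) shape is just a stand-in; np.newaxis is equivalent to the reshape call shown in the transcript):

```python
import numpy as np

# Add a trailing channel axis to a 2-D grayscale image without
# copying or changing any pixel values; equivalent to
# image.reshape((*image.shape, 1)).
image = np.zeros((512, 1024), dtype=np.uint8)  # stand-in grayscale image
image = image[..., np.newaxis]
print(image.shape)  # (512, 1024, 1)
```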
