Hair segmentation issue in python #4266
Hi @Rameshv, please let us know if the above resolves the issue. Thank you.
Thanks @kuaashish for the response, but I still got the same issue. I have loaded
Hi @kuaashish, here are the dependencies as per the notebook,
and the code.
It's the same code I used; hope this helps.
Hello @Rameshv Your suspicion is correct. If you check the hair segmentation solution graph here, it needs the number of channels to be 4. That is why the error shows the actual shape it got as 1x512x512x4; it infers this from the tflite file. However, the ImageSegmenter API explicitly states the channels should be 1 or 3, and the batch can only be 1. Hence the error states the expected input was 1 x any height x any width x 3. If you check the metadata for the DeepLab v3 model, it expects input of 257x257x3, hence it works.
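The constraint described above can be sketched as a small check; this is illustrative code, not MediaPipe source, and the function name is made up for the example:

```python
def segmenter_accepts(input_shape):
    """Return True if a tflite input shape fits the ImageSegmenter contract.

    ImageSegmenter expects a [1, H, W, C] tensor with C equal to 1 or 3,
    while the hair model declares [1, 512, 512, 4].
    """
    if len(input_shape) != 4:
        return False
    batch, _height, _width, channels = input_shape
    return batch == 1 and channels in (1, 3)

print(segmenter_accepts([1, 512, 512, 4]))  # hair model -> False
print(segmenter_accepts([1, 257, 257, 3]))  # DeepLab v3 -> True
```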
Thanks @ayushgdev for the clarification. So what do you think is the better way to approach this? Is it at all possible to do with the ImageSegmenter API? Any pointers to solve this would be really appreciated. I am new to Python, but I can start digging if I am shown a path. Thanks.
@Rameshv There is a possible solution for it as per our analysis. Please allow us some more time to investigate the issue. Apologies for the delay.
Hello @Rameshv This issue has been fixed in MediaPipe v0.9.3.0. Please upgrade the package and it should start working. |
Thanks @ayushgdev |
I just tried MediaPipe v0.9.3.0 and the error is gone; it can produce the hair segmentation. But the issue now is that it's not producing the correct segmentation. Here's the source image I tried, here's the hair segmentation result, and here's the same image producing a correct segmentation with selfie segmentation. As you can see, both the selfie and DeepLab v3 segmentations produce correct results, but the hair segmentation does not. Any ideas what could be the issue @ayushgdev?
Same problem as above, bad mask on every image I try.
Hello @Rameshv Hair segmentation will only produce a mask for the hair, while in contrast selfie segmentation will produce a mask for the whole human body. Hence the two are different applications and not directly comparable.
Hi @ayushgdev I think you misunderstood me. What I said was that the hair segmentation doesn't produce a correct hair mask, not that it should be the same as selfie segmentation. If you look at the hair mask produced, it is completely different from the source image. This is the expected hair segment. Hope this helps.
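For reference, once a correct hair mask is produced, an "expected hair segment" image like the one above can be rendered by tinting the masked pixels; a minimal NumPy sketch (the function and argument names are illustrative, not from MediaPipe):

```python
import numpy as np

def overlay_mask(image, mask, color=(255, 0, 255)):
    """Tint the pixels of an HxWx3 uint8 image where the HxW mask is set."""
    out = image.copy()
    out[mask > 0] = color  # paint masked pixels with the highlight color
    return out
```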
Hello @Rameshv
# image = mp.Image.create_from_file(image_file_name)  <----------- Comment this out
# read the image using OpenCV (cv2.imread returns BGR)
bgr_image = cv2.imread("person.jpeg")
rgba_image = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2RGBA)
# set the alpha channel to empty
rgba_image[:, :, 3] = 0
# create an MP Image object from the numpy array
image = _Image(image_format=_ImageFormat.SRGBA, data=rgba_image)
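The alpha-channel preparation in the snippet above can also be done with plain NumPy; a sketch under the same assumption (an HxWx3 RGB array in, an all-zero alpha channel out; the helper name is made up for illustration):

```python
import numpy as np

def add_empty_alpha(rgb):
    """Append an all-zero alpha channel to an HxWx3 image array."""
    h, w, _channels = rgb.shape
    rgba = np.zeros((h, w, 4), dtype=rgb.dtype)
    rgba[:, :, :3] = rgb  # copy the color channels; alpha stays 0
    return rgba
```

The resulting array can then be passed to the Image constructor exactly as in the snippet above.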
@ayushgdev |
_Image is essentially the Image class. Check the imports below:
The other code remains the same as the image segmentation colab example.
Hello, thank you for your reply.
No problem. ImageFormat is imported as below:
@ayushgdev Thank you! For me personally it now works!
This issue has been marked stale because it has had no recent activity in the past 7 days. It will be closed if no further activity occurs. Thank you.
This issue was closed due to lack of activity after being marked stale for the past 7 days.
@Rameshv Can you please provide me the full code for hair segmentation?
I am trying to use this hair segmentation model https://storage.googleapis.com/mediapipe-assets/hair_segmentation.tflite in Python, following the Python guide at https://developers.google.com/mediapipe/solutions/vision/image_segmenter/python
and using the notebook mentioned in that link: https://colab.research.google.com/github/googlesamples/mediapipe/blob/main/examples/image_segmentation/python/image_segmentation.ipynb
I am able to download the model with
!wget -O hair.tflite -q https://storage.googleapis.com/mediapipe-assets/hair_segmentation.tflite
but when trying to run the inference using the below code,
I am getting the error on the line
vision.ImageSegmenter.create_from_options(options)
I suspect this is due to a tensor size mismatch between the hair model and the vision ImageSegmenter, but I couldn't find any references to that and am stuck.
Here's the complete stack trace: