
Hair segmentation issue in python #4266

Closed
Rameshv opened this issue Apr 10, 2023 · 24 comments
Labels: legacy:hair segmentation (Hair Segmentation related issues) · platform:python (MediaPipe Python issues) · type:support (General questions) · type:tflite (TensorFlow Lite)

Comments

@Rameshv

Rameshv commented Apr 10, 2023

I am trying to use the hair segmentation model https://storage.googleapis.com/mediapipe-assets/hair_segmentation.tflite in Python, following the Python guide at https://developers.google.com/mediapipe/solutions/vision/image_segmenter/python

I am using the same notebook mentioned in that guide: https://colab.research.google.com/github/googlesamples/mediapipe/blob/main/examples/image_segmentation/python/image_segmentation.ipynb

I am able to download the model with

!wget -O hair.tflite -q https://storage.googleapis.com/mediapipe-assets/hair_segmentation.tflite

but when I try to run inference using the code below,

base_options = BaseOptions(model_asset_path='hair.tflite')
options = vision.ImageSegmenterOptions(
    base_options=base_options,
    running_mode=VisionRunningMode.IMAGE,
    output_type=OutputType.CATEGORY_MASK)
x = vision.ImageSegmenter.create_from_options(options)

I get the following error on the vision.ImageSegmenter.create_from_options(options) line:

The input tensor should have dimensions 1 x height x width x 3. Got 1 x 512 x 512 x 4.

I suspect this is due to a tensor size mismatch between the hair model and the vision ImageSegmenter, but I couldn't find any references to that and I'm stuck.

Here's the complete stack trace:

[screenshot: full stack trace, 2023-04-10]

@Rameshv Rameshv added the type:bug Bug in the Source Code of MediaPipe Solution label Apr 10, 2023
@kuaashish kuaashish assigned kuaashish and unassigned ayushgdev Apr 10, 2023
@kuaashish
Collaborator

Hi @Rameshv,
Thank you for raising a fresh issue. This is already known to us and we are working towards a fix. As a workaround, you can load the file manually and use model_asset_buffer: https://github.com/google/mediapipe/blob/master/mediapipe/tasks/python/core/base_options.py#L47
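For reference, a minimal sketch of that workaround (using the same mediapipe tasks API as in the Colab notebook; hair.tflite is the file downloaded above) could look like this:

from mediapipe import tasks
from mediapipe.tasks.python import vision

# Read the model file manually and pass the raw bytes instead of a path.
with open('hair.tflite', 'rb') as f:
    model_data = f.read()

base_options = tasks.BaseOptions(model_asset_buffer=model_data)
options = vision.ImageSegmenterOptions(
    base_options=base_options,
    running_mode=vision.RunningMode.IMAGE,
    output_type=vision.ImageSegmenterOptions.OutputType.CATEGORY_MASK)
segmenter = vision.ImageSegmenter.create_from_options(options)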

Please let us know if the above resolves the issue. Thank you

@kuaashish kuaashish added platform:python MediaPipe Python issues stat:awaiting response Waiting for user response and removed stat:awaiting response Waiting for user response labels Apr 10, 2023
@kuaashish
Collaborator

kuaashish commented Apr 11, 2023

Hi @Rameshv,
Could you please let us know the status of the above, i.e. whether the issue has been resolved or whether you need further help from our end? Thank you!

@kuaashish kuaashish added the stat:awaiting response Waiting for user response label Apr 11, 2023
@Rameshv
Author

Rameshv commented Apr 11, 2023

Thanks @kuaashish for the response.

I still get the same error, though. I loaded hair_segmentation.tflite and passed it via model_asset_buffer, but it's still the same issue.

@google-ml-butler google-ml-butler bot removed the stat:awaiting response Waiting for user response label Apr 11, 2023
@kuaashish
Collaborator

@Rameshv,
Could you please let us know the OS and Python version you are using so we can reproduce the issue from our end? We suggest filling out the issue template so we receive the most relevant info to understand the issue better and share a possible solution. Thank you

@kuaashish kuaashish added the stat:awaiting response Waiting for user response label Apr 11, 2023
@Rameshv
Author

Rameshv commented Apr 11, 2023

Hi @kuaashish
I am running this notebook https://colab.research.google.com/github/googlesamples/mediapipe/blob/main/examples/image_segmentation/python/image_segmentation.ipynb in Google Colab, so the system is Ubuntu 20.04 LTS and the Python version is 3.9.16.

And here are the dependencies as per the notebook

!pip install -q flatbuffers==2.0.0
!pip install -q mediapipe==0.9.1

And the code:

import numpy as np
import mediapipe as mp

from mediapipe.python._framework_bindings import image
from mediapipe.python._framework_bindings import image_frame
from mediapipe.tasks.python import vision
from mediapipe import tasks

BG_COLOR = (192, 192, 192) # gray
MASK_COLOR = (255, 255, 255) # white

OutputType = vision.ImageSegmenterOptions.OutputType
Activation = vision.ImageSegmenterOptions.Activation
VisionRunningMode = vision.RunningMode
BaseOptions = tasks.BaseOptions

IMAGE_FILENAMES = ['13147680-fd2e-4d67-b29a-89b9a6752703.jpeg']

in_file = open("hair.tflite", "rb")
data = in_file.read()

# Create the options that will be used for ImageSegmenter
base_options = BaseOptions(model_asset_buffer=data)
options = vision.ImageSegmenterOptions(
    base_options=base_options,
    running_mode=VisionRunningMode.IMAGE,
    output_type=OutputType.CATEGORY_MASK)

# Create the image segmenter
with vision.ImageSegmenter.create_from_options(options) as segmenter:

  # Loop through demo image(s)
  for image_file_name in IMAGE_FILENAMES:

    # Create the MediaPipe image file that will be segmented
    image = mp.Image.create_from_file(image_file_name)

    # Retrieve the masks for the segmented image
    category_masks = segmenter.segment(image)
    # print(category_masks)

    # Generate solid color images for showing the output segmentation mask.
    image_data = image.numpy_view()
    fg_image = np.zeros(image_data.shape, dtype=np.uint8)
    fg_image[:] = MASK_COLOR
    bg_image = np.zeros(image_data.shape, dtype=np.uint8)
    bg_image[:] = BG_COLOR

    for name in category_masks:
      condition = np.stack((name.numpy_view(),) * 3, axis=-1) > 0.2
      output_image = np.where(condition, fg_image, bg_image)

      print(f'Segmentation mask of {name}:')
      resize_and_show(output_image)

It's the same code I used with deeplabv3.tflite and selfie_segm_128_128_3.tflite; both work fine. Only the hair_segmentation.tflite model throws the error, so I don't think this is platform related.

Hope this helps.

@google-ml-butler google-ml-butler bot removed the stat:awaiting response Waiting for user response label Apr 11, 2023
@kuaashish kuaashish assigned ayushgdev and unassigned kuaashish Apr 12, 2023
@ayushgdev
Collaborator

Hello @Rameshv Your suspicion is correct. If you check the hair segmentation solution graph here, it requires the number of channels to be 4. That is why the error reports that the actual shape it got is 1 x 512 x 512 x 4; it infers this from the tflite file.

However, the ImageSegmenter API explicitly states that the number of channels should be 1 or 3, and the batch size can only be 1. Hence the error states that the expected input was 1 x height x width x 3. If you check the metadata for the DeepLab v3 model, it expects a 257 x 257 x 3 input, which is why it works.
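For reference, the expected input shape can be read directly from the .tflite file with the TensorFlow Lite interpreter; a quick sketch, assuming the tensorflow package is installed:

import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path='hair.tflite')
# Print the input tensor shape reported by the model; for hair_segmentation.tflite
# this should match the 1 x 512 x 512 x 4 shape mentioned in the error above.
print(interpreter.get_input_details()[0]['shape'])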

@ayushgdev ayushgdev added stat:awaiting response Waiting for user response type:tflite TensorFlow Lite legacy:hair segmentation Hair Segmentation related issues type:support General questions and removed type:bug Bug in the Source Code of MediaPipe Solution labels Apr 13, 2023
@Rameshv
Author

Rameshv commented Apr 14, 2023

Thanks @ayushgdev for the clarification.

So what do you think is the better way to approach this? Is it at all possible with the ImageSegmenter API?

Any pointers to solve this would be really appreciated. I am new to Python, but I can start digging if I'm shown a path.

Thanks

@google-ml-butler google-ml-butler bot removed the stat:awaiting response Waiting for user response label Apr 14, 2023
@ayushgdev
Collaborator

@Rameshv There is a possible solution for this as per our analysis. Please allow us some more time to investigate the issue. Apologies for the delay.

@ayushgdev
Collaborator

Hello @Rameshv This issue has been fixed in MediaPipe v0.9.3.0. Please upgrade the package and it should start working.
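For example, in the Colab notebook (assuming the release is published on PyPI under that version string):

!pip install -q mediapipe==0.9.3.0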

@ayushgdev ayushgdev added the stat:awaiting response Waiting for user response label Apr 26, 2023
@Rameshv
Author

Rameshv commented Apr 27, 2023

Thanks @ayushgdev
Will download and check in a bit

@google-ml-butler google-ml-butler bot removed the stat:awaiting response Waiting for user response label Apr 27, 2023
@Rameshv
Author

Rameshv commented Apr 28, 2023

I just tried MediaPipe v0.9.3.0; the error is gone and it produces a hair segmentation. The issue now, though, is that it's not producing the correct segmentation.

Here's the source image I tried:
[image: source photo]

And here's the hair segmentation result:
[image: hair segmentation result]

And the same image produces a correct segmentation with selfie segmentation:
[image: selfie segmentation result]

And with deeplabv3
[image: deeplabv3 segmentation result]

As you can see, both the selfie and deeplabv3 segmentations produce correct results, but the hair segmentation does not.

Any ideas what could be the issue, @ayushgdev?

@WillReynolds5

Same problem as above: a bad mask on every image I try.

@ayushgdev
Collaborator

Hello @Rameshv hair segmentation will only produce a mask for the hair, while in contrast the selfie segmentation produces a mask for the whole human body. Hence the two are different applications and not directly comparable.

@ayushgdev ayushgdev added the stat:awaiting response Waiting for user response label May 2, 2023
@Rameshv
Author

Rameshv commented May 2, 2023

Hi @ayushgdev I think you misunderstood me. What I said was that the hair segmentation doesn't produce a correct hair mask, not that it should be the same as the selfie segmentation. If you look at the hair mask produced, it is completely unrelated to the source image.

This is the expected hair segment:
[image: expected hair mask]

But this is what it produced:
[image: produced hair mask]

Hope this helps.

@google-ml-butler google-ml-butler bot removed the stat:awaiting response Waiting for user response label May 2, 2023
@ayushgdev
Collaborator

ayushgdev commented May 4, 2023

Hello @Rameshv
Thanks for bringing up this issue; the answer will help others as well. As conveyed earlier, hair segmentation requires 4 channels, while mp.Image.create_from_file(image_file_name) creates an image representation with 3 channels. Hence the last channel gets treated as the alpha channel, and the computation happens based on that.
To resolve this issue, we can add an empty alpha channel to the image.
So you can simply use OpenCV to read the image, add an alpha channel, and then create a MediaPipe Image object using the mp.Image() constructor.

#  image = mp.Image.create_from_file(image_file_name)      <----------- Comment this out

# read the image using OpenCV (cv2.imread returns BGR pixel data)
rgb_image = cv2.imread("person.jpeg")
# convert BGR -> RGBA, which adds the fourth (alpha) channel
rgba_image = cv2.cvtColor(rgb_image, cv2.COLOR_BGR2RGBA)
# set the alpha channel to empty (all zeros)
rgba_image[:,:,3] = 0
# create an MP Image object from the numpy array
image = _Image(image_format=_ImageFormat.SRGBA, data=rgba_image)

The output produced is below:
[image: hair segmentation mask output]

@ayushgdev ayushgdev added the stat:awaiting response Waiting for user response label May 4, 2023
@mariomosko

@ayushgdev
Is there any chance you could show the entire implementation? The _Image part is not very clear.
Thank you in advance

@ayushgdev
Collaborator

ayushgdev commented May 4, 2023

_Image is essentially the Image class. Check the imports below:

from mediapipe.python._framework_bindings import image as image_module
_Image = image_module.Image

Other code remains the same as this image segmentation Colab example.

@mariomosko

Hello, thank you for your reply.
Can you also help with _ImageFormat, as it is also not defined?
Thank you in advance!

@ayushgdev
Collaborator

No problem. _ImageFormat is imported as below:

from mediapipe.python._framework_bindings import image_frame
_ImageFormat = image_frame.ImageFormat
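Putting the snippets in this thread together, a consolidated sketch of the full workaround (the model and image file names are just examples) would be:

import cv2
from mediapipe import tasks
from mediapipe.tasks.python import vision
from mediapipe.python._framework_bindings import image as image_module
from mediapipe.python._framework_bindings import image_frame

_Image = image_module.Image
_ImageFormat = image_frame.ImageFormat

# Build the segmenter from the hair segmentation model, as earlier in this thread.
with open('hair.tflite', 'rb') as f:
    model_data = f.read()
options = vision.ImageSegmenterOptions(
    base_options=tasks.BaseOptions(model_asset_buffer=model_data),
    running_mode=vision.RunningMode.IMAGE,
    output_type=vision.ImageSegmenterOptions.OutputType.CATEGORY_MASK)

# Read the image with OpenCV, add an empty alpha channel,
# and wrap it as an SRGBA MediaPipe image.
bgr_image = cv2.imread('person.jpeg')
rgba_image = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2RGBA)
rgba_image[:, :, 3] = 0
image = _Image(image_format=_ImageFormat.SRGBA, data=rgba_image)

with vision.ImageSegmenter.create_from_options(options) as segmenter:
    category_masks = segmenter.segment(image)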

@mariomosko

@ayushgdev thank you

For me personally it now works!
Thank you so much.

@github-actions

This issue has been marked stale because it has had no recent activity for 7 days. It will be closed if no further activity occurs. Thank you.

@github-actions github-actions bot added the stale label May 12, 2023
@github-actions

This issue was closed due to lack of activity after being marked stale for the past 7 days.


@ayushgdev ayushgdev removed stat:awaiting response Waiting for user response stale labels May 19, 2023
@Gauravi1

Gauravi1 commented Apr 17, 2024

@Rameshv can you please provide me with the full code for hair segmentation?
