
Image parsing #15

Closed
santosh9sanjeev opened this issue Aug 19, 2020 · 21 comments

Comments

@santosh9sanjeev

I am trying to give inputs from the internet, like an image of a person and an image of the cloth.
I tried running dataset_neck_skin_connection.py.
How should I get the image-parse for the input image? Please guide me; I am a beginner.

@minar09
Owner

minar09 commented Aug 19, 2020

Hi @santosh9sanjeev , to run the model with custom internet images, make sure you have the following:

  1. image (image of the person)
  2. image-parse (generate it from the person image with the LIP_JPPNet or CIHP_PGN pretrained networks, then run dataset_neck_skin_connection.py for LIP parsing, and finally run body_binary_masking.py)
  3. cloth (in-shop cloth image)
  4. cloth-mask (binary mask of the cloth image; you can generate it with a simple Pillow/OpenCV function)
  5. pose (pose keypoints of the person, generated with the OpenPose COCO-18 model)

Hope that helps. Thank you.
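For item 4, the cloth mask can be sketched with Pillow/NumPy by thresholding the near-white background, assuming the in-shop cloth image has a plain white background as in the VITON dataset. `make_cloth_mask` and its `threshold` value are illustrative, not part of this repository:

```python
import numpy as np
from PIL import Image

def make_cloth_mask(cloth_path, mask_path, threshold=240):
    """Generate a binary cloth mask by thresholding the near-white background.

    Assumption: the in-shop cloth image has a plain white background, as in
    the VITON dataset; `threshold` may need tuning for your images.
    """
    img = np.array(Image.open(cloth_path).convert("RGB"))
    # Pixels where every channel is near-white are treated as background.
    background = np.all(img >= threshold, axis=-1)
    mask = np.where(background, 0, 255).astype(np.uint8)
    Image.fromarray(mask, mode="L").save(mask_path)
    return mask
```

For cloth images with shadows or off-white backgrounds, an OpenCV-based approach (e.g. `cv2.threshold` with Otsu's method) may give cleaner edges.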

@santosh9sanjeev
Author

Thank you @minar09 for your help.
I completed all the steps you mentioned, but I am facing problems with bit depth. The image-parse generated by CIHP_PGN has a bit depth of 24, whereas a bit depth of 8 is needed. Using Pillow I converted from 24 to 8, but then the generated mask had weird results; it did not correctly generate the mask.
Any idea how to solve this error? It would be very helpful.

@thaithanhtuan
Collaborator


It is not clear in what way the generated mask looks weird. Can you upload or share the code for cp_dataset, or the result of CIHP_PGN? Maybe the file format is different from the VITON dataset.

@santosh9sanjeev
Author

[attached images: person-final-1, person-final-2]
Both files are in PNG format, with dimensions 192×256 and bit depth 8.
When I ran the code for these images, I got the following masked images:
[attached images: person-final-2, person-final-1]

@minar09
Owner

minar09 commented Aug 30, 2020

Hi @santosh9sanjeev , please make sure to use the grayscale files (label values in [0, 20]) as input, not the RGB visualization files from the CIHP_PGN-generated segmentation. Also, I think you don't need to create binary shape masks before testing with CIHP_PGN (although there is no harm in that; the binary-mask script was made for VITON/LIP-style segmentation), since CIHP_PGN produces better segmentation with a torso-neck label. You can get the body shape for input in cp_dataset.py with the following code:

```python
import os.path as osp
import numpy as np
from PIL import Image

im_parse = Image.open(osp.join(self.data_path, 'image-parse', parse_name))  # read segmentation
parse_array = np.array(im_parse)                    # convert to numpy array
parse_shape = (parse_array > 0).astype(np.float32)  # binary body shape (non-background pixels)
```

@santosh9sanjeev
Author

Thank you so much @minar09 for your help.
I have another small doubt: the test.py code requires the image-parse-new and image-mask folders.
Should I change the code to use image-parse wherever it uses image-mask and image-parse-new, since I am not generating those, or is there a better alternative?
Thank you once again for your help.

@minar09
Owner

minar09 commented Aug 31, 2020


Yes, just change the code if you don't need them. Updating cp_dataset.py alone should be enough.

@minar09
Owner

minar09 commented Sep 13, 2020

Closing the issue as it's resolved. Feel free to reopen in case there are still problems. Results with custom images: #23

@minar09 minar09 closed this as completed Sep 13, 2020
@Pritam-N


I am trying with segmentation generated from Graphonomy. Any suggestions on how to convert that to grayscale with the 0-20 classes?

@minar09
Owner

minar09 commented Feb 20, 2021

Actually, any image segmentation network should originally generate grayscale output, so please check the actual network output.
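If only the RGB visualization is available, one workaround is to invert the visualization palette back to label indices. This is a minimal sketch under stated assumptions: `rgb_vis_to_labels` is a hypothetical helper, and you must supply the exact color-to-label palette your parser (e.g. Graphonomy) actually used, which you can usually find in its visualization utilities:

```python
import numpy as np
from PIL import Image

def rgb_vis_to_labels(vis_path, palette):
    """Map an RGB segmentation visualization back to a grayscale label map.

    `palette` is a dict {label_index: (R, G, B)} matching the colors the
    parser used for its visualization output. This mapping is an assumption;
    verify it against the palette your parser actually uses.
    """
    rgb = np.array(Image.open(vis_path).convert("RGB"))
    labels = np.zeros(rgb.shape[:2], dtype=np.uint8)
    for idx, color in palette.items():
        # Assign the label index wherever all three channels match exactly.
        labels[np.all(rgb == color, axis=-1)] = idx
    return Image.fromarray(labels, mode="L")  # 8-bit grayscale label map
```

Note that exact color matching fails if the visualization was saved with lossy compression (JPEG); use the PNG output if possible.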

@yashp0103

Hi @santosh9sanjeev @minar09

Thanks for the information!
I'm able to test on a custom image with some manual work on the JSON keypoints. The OpenPose-generated JSON file has fields like face_keypoints_2d and face_keypoints_3d, whereas in the cp_vton_plus model the JSON contains only a field like face_keypoints. So I just want to confirm: were the JSONs in the model manually modified, or generated directly from OpenPose?

If they were generated directly, can you help me understand what changes you made or what run command you used?

Thanks,
Yash

@minar09
Owner

minar09 commented Jun 25, 2021

@yashp0103 , the CP-VTON+ model directly uses OpenPose-generated keypoints; no modification is needed. The face_keypoints are not used in this model, so you can ignore them. If your joints have pose_keypoints_2d, you can use it directly by changing this line: https://github.com/minar09/cp-vton-plus/blob/master/cp_dataset.py#L152 to pose_data = pose_label['people'][0]['pose_keypoints_2d']
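The keypoint loading described above can be sketched as follows. `load_pose_keypoints` is an illustrative helper, not repository code; it handles both field names seen across OpenPose versions:

```python
import json
import numpy as np

def load_pose_keypoints(json_path):
    """Read OpenPose-style keypoints (x, y, confidence triplets) for the
    first detected person, reshaped to one row per joint."""
    with open(json_path) as f:
        pose_label = json.load(f)
    person = pose_label['people'][0]
    # Newer OpenPose writes 'pose_keypoints_2d'; older builds wrote 'pose_keypoints'.
    pose_data = person.get('pose_keypoints_2d', person.get('pose_keypoints'))
    return np.array(pose_data, dtype=np.float32).reshape((-1, 3))  # (18, 3) for COCO-18
```

This mirrors the reshape done in cp_dataset.py, so the rest of the data pipeline can stay unchanged.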

@yashp0103

Thank you so much @minar09
It's working!

@Amin-asadii

Thank you so much minar09 for your help.
I use LIP_JPPNet for image-parse, but the results are not good at all. Please see this input image:
[attached image: 000001_01]
The following results are obtained:
[attached images: 000001_0_vis, 000001_0]
whereas I was expecting this output:
[attached images: 000001_12 (1), 000001_12 (2)]
Thank you very much for your help.

@Amin-asadii

00989174286532 (WhatsApp)

@minar09
Owner

minar09 commented Sep 12, 2021

@Amin-asadii , you can try CIHP-PGN pre-trained model for parsing, which should give better results.

@Amin-asadii

(Repository language statistics: Python 91.0%, MATLAB 9.0%.)

@minar09 Do I have to have MATLAB installed? How should I use it?

@minar09
Owner

minar09 commented Sep 12, 2021

@Amin-asadii , there is no need to install MATLAB for CP-VTON+.

@Amin-asadii

@minar09 Hello dear friend, thank you for your help.
Please share the link for the OpenPose COCO-18 Python model.

@Amin-asadii

Amin-asadii commented Sep 18, 2021 via email
