How to get the partial UV texture map? #5

Open
AndrewChiyz opened this issue Mar 16, 2021 · 3 comments

Comments

@AndrewChiyz

Hi! Thanks for releasing the code.

I have a few questions about how to get the partial UV texture map.

As mentioned in Section 3.1 of the paper, the IUV map of an input image is predicted using the ResNet-101-based variant of DensePose, and "For easier mapping, the 24 part-specific UV maps are combined to form a single UV Texture map Ts in the format provided in the SURREAL dataset [53] through a pre-computed lookup table." I am trying to train the proposed model on my own dataset and have already obtained the IUV maps, but I do not know how to implement this mapping operation to get the partial UV texture map described in the paper. Could you please provide some demo code, or point to another GitHub repo, that shows how to get the partial UV texture map?
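For concreteness, my current guess is that once the 24 part textures are assembled into one atlas, the pre-computed lookup table is applied as a simple per-texel gather, something like the following (all names, shapes, and files here are hypothetical, just to illustrate my understanding):

```python
import numpy as np

# Hypothetical: a 4x6 atlas of 24 part textures (200x200 each), remapped into a
# single SURREAL-format texture via a precomputed per-texel correspondence table.
part_atlas = np.zeros((800, 1200, 3), dtype=np.uint8)  # filled from the IUV map
lut_rows = np.load("lut_rows.npy")  # (512, 512) int row indices into part_atlas
lut_cols = np.load("lut_cols.npy")  # (512, 512) int col indices into part_atlas
surreal_texture = part_atlas[lut_rows, lut_cols]       # gather, (512, 512, 3)
```

But I do not know how such a table is constructed, or whether this matches your pipeline.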

Thank you! :)

@AndrewChiyz
Author

Hi, I also have some questions about the pre-processing of the dataset.

  1. Why and how are clothing images (apparel images) used? I guess they are used to perform virtual try-on, but I am still confused about the _map_source_apparel_on_target process in feature_render.py. If the apparel images are plain RGB clothing images from the dataset, how can the source_texture be rendered with a clothing image that has no UV coordinates? Perhaps the apparel images are also pre-processed and mapped into UV texture maps?

  2. According to _map_source_apparel_on_target (feature_render.py, line 123), background_mask is extracted from the I component of the IUV map where I == 0, and apparel_mask is extracted from the I component for the indices 2, 15, 16, 17, 18, 19, 20, and 22 (by the way, what are the semantic meanings of those body parts?). identity_mask is obtained by applying torch.logical_not() to apparel_mask (i.e., identity_mask = 1 - apparel_mask). Then identity_masked = target_image * identity_mask * background_mask and apparel_masked = mapped_source_feature * apparel_mask * background_mask, and the function returns mapped_apparel_on_target = apparel_masked + identity_masked. However, background_mask is actually a foreground mask (because of the torch.logical_not()), and apparel_mask should mask out the foreground regions that do not belong to the listed body parts. So it seems the first term selects the foreground body regions without clothing, the second term selects the body parts covered by clothing, and the two are merged. Is that correct? Could you please clarify these operations, and explain how the input apparel (clothing) image can be directly merged into the source feature maps? My reading of the function is sketched below.
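Here is how I would reconstruct the function from the description above (a sketch only; the tensor names follow the repo, but the shapes and details are my assumptions):

```python
import torch

# DensePose part indices that the repo treats as clothing-covered regions.
APPAREL_PARTS = [2, 15, 16, 17, 18, 19, 20, 22]

def map_source_apparel_on_target(mapped_source_feature, target_image, target_iuv):
    # target_iuv: (B, 3, H, W) with the DensePose part index I in channel 0.
    part_index = target_iuv[:, 0:1]                       # (B, 1, H, W), broadcastable
    background_mask = torch.logical_not(part_index == 0)  # despite the name: foreground
    apparel_mask = torch.zeros_like(part_index, dtype=torch.bool)
    for p in APPAREL_PARTS:
        apparel_mask |= part_index == p                   # clothing-covered body parts
    identity_mask = torch.logical_not(apparel_mask)       # everything except clothing
    identity_masked = target_image * identity_mask * background_mask
    apparel_masked = mapped_source_feature * apparel_mask * background_mask
    return apparel_masked + identity_masked               # merge the two regions
```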

Thank you! :)

@rubelchowdhury20
Owner

Hi Andrew,

Please visit the links below for a solution to your texture-map problem. The first link is a set of notebooks demonstrating different applications of DensePose. The second link shows how to get the texture map; someone also shared a library there to achieve the same thing, and you can read that library's source to understand how the whole process works.

https://github.com/facebookresearch/DensePose/tree/master/notebooks
facebookresearch/DensePose#116
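For a quick orientation, the core step those notebooks perform is to scatter the image pixels into 24 part-specific textures using the IUV map, roughly like this (a minimal sketch, not the exact notebook code; the 4x6 atlas layout, 200x200 per-part resolution, and 0-255 UV encoding are assumptions you should verify against the notebooks):

```python
import numpy as np

def iuv_to_partial_texture(image, iuv, tex_res=200):
    """Scatter image pixels into a 4x6 atlas of 24 part-specific UV textures.

    image: (H, W, 3) uint8; iuv: (H, W, 3) with I in [0, 24], U and V in [0, 255].
    """
    atlas = np.zeros((4 * tex_res, 6 * tex_res, 3), dtype=image.dtype)
    part, u8, v8 = iuv[..., 0], iuv[..., 1], iuv[..., 2]
    for p in range(1, 25):                       # part 0 is background, skip it
        mask = part == p
        u = (u8[mask] / 255.0 * (tex_res - 1)).astype(int)
        v = (v8[mask] / 255.0 * (tex_res - 1)).astype(int)
        row, col = divmod(p - 1, 6)              # tile the 24 parts into a 4x6 grid
        atlas[row * tex_res + v, col * tex_res + u] = image[mask]
    return atlas
```

The texels that no image pixel maps to stay zero, which is what makes the texture map "partial".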

@AndrewChiyz
Author

Hi @rubelchowdhury20, thank you for your reply! I have obtained the partial UV texture map by following the notebooks above, thanks a lot! However, I notice that the sizes of the input image and the IUV map can influence the quality of the partial UV texture map. What sizes of input images and corresponding UV texture maps did you use in your experimental settings, and can the quality of the partial UV texture map impact the final performance?
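In case it is relevant: when the image and IUV sizes differ, I currently align them before the extraction with nearest-neighbor interpolation so the I channel stays a valid part index (a small sketch of that step, assuming OpenCV and HxWx3 arrays):

```python
import cv2

# Nearest-neighbor keeps I (and the quantized U/V) as valid discrete values,
# where bilinear interpolation would blend neighboring part labels.
iuv_resized = cv2.resize(iuv, (image.shape[1], image.shape[0]),
                         interpolation=cv2.INTER_NEAREST)
```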

Thank you! :)
