How to find (shadow) relevant features in the latent space? #1

Open
SleyAI opened this issue Dec 20, 2021 · 3 comments

Comments

SleyAI commented Dec 20, 2021

Hello, I'm currently trying to implement the first step of your proposed algorithm (input: portrait image and face mask, output: shadow-free image). I successfully created the face mask with BiSeNet and removed the background from the portrait image. In the next step I obtained the latent vectors from StyleGAN.
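(For reference, a minimal sketch of my masking step, assuming the BiSeNet parsing output has already been converted to a binary face mask; the file names are placeholders:)

```python
import numpy as np
from PIL import Image

# Load the portrait and the binary face mask derived from BiSeNet
# (mask pixels: 255 = face/foreground, 0 = background).
portrait = np.asarray(Image.open("portrait.png").convert("RGB"))
mask = np.asarray(Image.open("face_mask.png").convert("L")) > 127

# Zero out the background so only the masked face region remains.
masked = portrait * mask[..., None]

Image.fromarray(masked.astype(np.uint8)).save("portrait_masked.png")
```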

My question now is: how do you explore the latent space to find the relevant parts of the vector that control the shadows? You create K random latent vectors, but what is your sampling strategy? How many values do you manipulate in each sample? Any hint would be very helpful to me! Thanks in advance.

YingqingHe (Owner) commented:

Hi.
(1) Our method does not need to find the shadow-related part of the latent space. Instead, we aim to find the latent vector that represents the clean face with no shadow. To do this, we propose a 3-stage optimization process to obtain the latent vector of the shadow-free face (the details can be found in our paper).
(2) In stage 1, we create 500 random latent vectors purely for a better initialization of the latent vector, and we choose the best one, i.e. the one whose generated image has the lowest perceptual loss with respect to the input image (see the sketch below).
(3) Our method is actually not based on manipulation in the latent space. We explicitly model the shadow generation process via a color matrix and a shadow mask. More details can be found in our paper. Thanks!
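(For illustration only, a rough sketch of how such a stage-1 initialization could look, assuming a pre-trained StyleGAN generator `G` that maps a 1×512 latent code to an RGB image in [-1, 1] and the `lpips` package for the perceptual loss; `G`, `target`, and the latent dimensionality are placeholders here, not the code used in the paper:)

```python
import torch
import lpips

# Assumptions: G maps a latent code z (1 x 512) to an RGB image in [-1, 1],
# and `target` is the masked input portrait as a 1 x 3 x H x W tensor in [-1, 1].
loss_fn = lpips.LPIPS(net="vgg")

best_z, best_loss = None, float("inf")
for _ in range(500):                        # 500 random candidates, as described above
    z = torch.randn(1, 512)                 # sample a random latent vector
    with torch.no_grad():
        img = G(z)                          # generate an image from the candidate
        loss = loss_fn(img, target).item()  # perceptual (LPIPS) distance to the input
    if loss < best_loss:                    # keep the candidate with the lowest loss
        best_z, best_loss = z, loss

# best_z then serves as the initialization for the subsequent optimization stages.
```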


SleyAI commented Jan 19, 2022

Hello, thank you very much for your reply! Can you explain how you obtain the initial shadow-free image after stage 1? In the stage-1 optimization you calculate the LPIPS loss between the generated images and the original image (which contains shadows). How can you arrive at a shadow-free image by comparing against a shadowed image?


YingqingHe commented Jan 19, 2022 via email
