
How to use W+ space to generate samples from MobileStyleGAN #33

Closed
RahulBhalley opened this issue Feb 12, 2022 · 4 comments
Comments

@RahulBhalley

Hi!

I want to sample images from the W+ space using the PyTorch checkpoints, but there doesn't seem to be any argument to the generate.py script for that. Could you please guide me on this?

The images I sampled from the CoreML W+ space models (using both the Mapping and Synthesis networks) came out with a weird bluish tint. These models were exported using the --export-w-plus argument. I've attached a few of them here.

[Attached samples: coreml_w_plus_3, coreml_w_plus_2, coreml_w_plus_0]

When I use the W space CoreML models, the samples are colored correctly.

[Attached samples: 0, 2, 4]

Any help is highly appreciated!

Regards
Rahul Bhalley

@bes-dev
Owner

bes-dev commented Feb 12, 2022

@RahulBhalley how do you make samples from W+ space? W+ is good for optimization, but it is much worse for random sampling than W-space.

@RahulBhalley
Author

> @RahulBhalley how do you make samples from W+ space?

@bes-dev With --export-w-plus, I get a MappingNetwork with (23, 512) input/output tensor shapes. I pass a z sampled from a Gaussian distribution with this shape to the MappingNetwork, feed the output to the SynthesisNetwork with the zeroth dimension un-squeezed, and finally produce the image using the tensor_to_img() function from your code.
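Here's a minimal sketch of that procedure, assuming placeholder names (mapping_net, synthesis_net) for the networks loaded from the PyTorch checkpoints rather than your exact API:

```python
import torch

# Minimal sketch of the W+ sampling procedure described above.
# mapping_net / synthesis_net are placeholders for the networks loaded from
# the PyTorch checkpoints; tensor_to_img is the helper from generate.py.
z = torch.randn(23, 512)          # one Gaussian z per style layer
w_plus = mapping_net(z)           # (23, 512) W+ codes
w_plus = w_plus.unsqueeze(0)      # un-squeeze zeroth dim -> (1, 23, 512)
out = synthesis_net(w_plus)       # synthesized image tensor
img = tensor_to_img(out)          # convert to a displayable image
```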

> W+ is good for optimization, but it is much worse for random sampling than W-space.

Interesting, I didn't know about that.

My actual goal is to perform face manipulation using the W+ space. I have a few queries before I proceed with that code, just to save time:

Q1. If I use the encoder from the ReStyle paper, will MobileStyleGAN be able to reconstruct the image correctly? I ask because their encoder was trained with StyleGAN2's generator in the loss function. (I'm aware that I have to add the latent average tensor to the W+ latent code before feeding it to the generator.)

Q2. Regarding Q1, should I use the W space MappingNetwork and then tile the generated W latent code to match the SynthesisNetwork's W+ space input, or should I use the W+ space MappingNetwork directly? A rough sketch of the tiling option is below.
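To make Q2 concrete, the tiling option would look roughly like this (placeholder names again; the W-space mapping network, the 23-layer count, and latent_avg are assumptions from my setup, not your exact API):

```python
import torch

# Rough sketch of the "tile W to W+" option from Q2 (placeholder names; the
# 23-layer count and latent_avg are assumptions, not the repo's actual API).
z = torch.randn(1, 512)
w = mapping_net_w(z)                        # W-space MappingNetwork -> (1, 512)
w_plus = w.unsqueeze(1).repeat(1, 23, 1)    # tile to (1, 23, 512) W+ input
# w_plus = w_plus + latent_avg              # add the average latent (per Q1)
img = synthesis_net(w_plus)
```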

Thanks for the quick response! I love ❤️ your work. It's really amazing to be able to run a compressed StyleGAN2 on-device!

Regards
Rahul Bhalley

@bes-dev
Owner

bes-dev commented Feb 15, 2022

@RahulBhalley Unfortunately, our W+ space is not exactly equal to the W+ space of the original StyleGAN2, so I'm not sure that models like ReStyle would work well with MobileStyleGAN without fine-tuning. For the second question, I think it would be better to try both ways and compare.

@RahulBhalley
Author

Okay, I'll try fine-tuning ReStyle's encoder using MobileStyleGAN.
I'm closing this issue. If I need help, I'll open an issue related to fine-tuning ReStyle's encoder.

Thanks for the conversation! 🙂
