How to use W+ space to generate samples from MobileStyleGAN #33
Comments
@RahulBhalley How do you make samples from W+ space? W+ is good for optimization, but it is much worse for random sampling than the W space.
@bes-dev With the
Interesting, I didn't know about that. My actual goal is to perform face manipulation using the W+ space. I have a few queries before I proceed with that code, just to save time:

Q1. If I use the encoder from the ReStyle paper, will MobileStyleGAN be able to reconstruct the image correctly? I ask because their encoder was trained using StyleGAN2's generator in the loss function. (I'm aware that I have to add the latent average tensor to the W+ latent code before feeding it to the generator.)

Q2. Regarding Q1, should I use the W-space MappingNetwork and then tile the generated W latent code to match the SynthesisNetwork's W+ input, or should I use the W+ space MappingNetwork directly?

Thanks for the quick response! I love ❤️ your work. It's really amazing to be able to run a compressed StyleGAN2 on-device!

Regards
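The first option in Q2 (tile a single W code across all style layers) can be sketched in PyTorch as below. Note the layer count (18) and latent width (512) are the usual StyleGAN2 values at 1024px resolution; they are assumptions here, not confirmed values for MobileStyleGAN:

```python
import torch

def w_to_w_plus(w: torch.Tensor, num_layers: int = 18) -> torch.Tensor:
    """Tile a W-space code (batch, 512) into a W+ tensor (batch, num_layers, 512).

    Every style layer receives the same code, so samples produced this way
    stay on the W manifold even though the synthesis network sees W+ input.
    """
    return w.unsqueeze(1).repeat(1, num_layers, 1)

w = torch.randn(4, 512)     # e.g. a batch of codes from the mapping network
w_plus = w_to_w_plus(w)
print(w_plus.shape)         # torch.Size([4, 18, 512])
```

Because every layer gets an identical code, random sampling through this path should behave like W-space sampling, which sidesteps the W+ random-sampling quality issue mentioned above.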
@RahulBhalley Unfortunately, our W+ space is not exactly equal to the W+ space of the original StyleGAN2, so I'm not sure that models like ReStyle would work well with MobileStyleGAN without fine-tuning. For the second question, I think it would be better to try both ways.
Okay, I'll try fine-tuning ReStyle's encoder using MobileStyleGAN. Thanks for the conversation! 🙂
Hi!
I want to sample images from the W+ space using the PyTorch checkpoints, but there doesn't seem to be any argument to the `generate.py` script for that. Could you please guide me on this?

The images I sampled from CoreML's W+ space models (using both the Mapping and Synthesis networks) had a weird bluish tint. These models were exported using the `--export-w-plus` argument. I've attached a few of them here. When I use the W space in the CoreML models, the samples are colored correctly.
Any help is highly appreciated!
Regards
Rahul Bhalley
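For reference, end-to-end W+ sampling with the PyTorch checkpoints could look roughly like the sketch below. The names `mapping`, `synthesis`, and `mean_w` are placeholders for however the networks and the average latent are loaded from a checkpoint; they are assumptions, not the repository's actual API, and the shapes follow standard StyleGAN2 conventions:

```python
import torch

@torch.no_grad()
def sample_w_plus(mapping, synthesis, mean_w, n=4, z_dim=512,
                  num_layers=18, truncation=0.7):
    """Sample images via W+: map z -> w, truncate toward mean_w, tile, synthesize."""
    z = torch.randn(n, z_dim)
    w = mapping(z)                                    # (n, z_dim) W-space codes
    w = mean_w + truncation * (w - mean_w)            # truncation trick
    w_plus = w.unsqueeze(1).repeat(1, num_layers, 1)  # tile to (n, num_layers, z_dim)
    return synthesis(w_plus)

# Smoke test with identity stand-ins for the real networks:
out = sample_w_plus(lambda z: z, lambda wp: wp, torch.zeros(512))
print(out.shape)  # torch.Size([4, 18, 512])
```

With the real networks, `out` would be a batch of images rather than the tiled latents the identity stand-ins return here.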