How to perform StyleGAN inversion? #17
Comments
Might be a solution.
@MDR-EX1000 Thanks for the answer. That is indeed a solution for the StyleGAN model. @damonzhou Basically, the GAN inversion problem can be solved by fixing the GAN model and optimizing the latent code with back-propagation to minimize a pixel-wise reconstruction loss or a perceptual loss. You can easily achieve this by setting the latent code as the only trainable parameter and calling the backward function of the deep generator. Hope this answer helps.
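To make that recipe concrete, here is a minimal sketch of the optimization loop in PyTorch. The `generator`, `target_image`, and `latent_dim` names are placeholders for illustration, not part of the InterFaceGAN code base, and only a pixel-wise MSE loss is shown; a perceptual (e.g. VGG feature) loss can be added in the same place.

```python
import torch
import torch.nn.functional as F

def invert(generator, target_image, latent_dim=512, num_steps=500, lr=0.01):
    """Minimal GAN-inversion sketch: freeze the generator and optimize
    only the latent code to reconstruct `target_image`.

    Assumes `generator` maps a (1, latent_dim) latent tensor to an image
    tensor with the same shape as `target_image` (both names are
    hypothetical, not the InterFaceGAN API).
    """
    generator.eval()
    for p in generator.parameters():       # fix the GAN model
        p.requires_grad_(False)

    # The latent code is the only trainable parameter.
    latent = torch.randn(1, latent_dim, requires_grad=True)
    optimizer = torch.optim.Adam([latent], lr=lr)

    for _ in range(num_steps):
        optimizer.zero_grad()
        reconstruction = generator(latent)
        # Pixel-wise reconstruction loss; a perceptual loss could be added here.
        loss = F.mse_loss(reconstruction, target_image)
        loss.backward()                     # back-propagate into the latent code
        optimizer.step()

    return latent.detach()
```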
@MDR-EX1000 @ShenYujun Thanks for the information; I'll try it later and test it with InterFaceGAN.
It's not seamless, but I found the following pipeline to work as proof of concept:
That worked for me. You can play with the … I suppose that a bespoke mapping working with InterFaceGAN could produce better results (see the sketch below). The author of StyleGAN-encoder made a good first step, and their code could be an inspiration for how to tackle such a task. It's pretty incredible what you guys (the authors of ProGAN, StyleGAN, InterFaceGAN and StyleGAN-encoder) have achieved; I praise and admire you.
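For reference, once a real image has been inverted (for example with StyleGAN-encoder), editing it with InterFaceGAN amounts to moving the latent code along a boundary's normal direction. A small sketch, assuming `boundary.npy` is one of the released boundary files and `latent` is the inverted code with matching dimensionality (file name and variable names here are placeholders):

```python
import numpy as np
import torch

# Assumption: `boundary.npy` holds a (1, latent_dim) unit normal vector like
# the boundaries shipped with InterFaceGAN, and `latent` is an inverted code
# of the same dimensionality.
boundary = torch.from_numpy(np.load("boundary.npy")).float()

def edit(latent, boundary, alpha=3.0):
    # Shift the latent code along the semantic direction; larger |alpha|
    # gives a stronger edit, and the sign picks the direction.
    return latent + alpha * boundary

# edited = edit(latent, boundary)
# image = generator(edited)  # feed the shifted code back through the generator
```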
Hello, can I ask you a question? The picture I get is strange, like this:
Me too. Please inform me if you find any solution to this problem! |
Hi Yujun,
In the paper you state that a GAN inversion method must be used to map real images to latent codes, and that StyleGAN-based inversion works much better. Are there any documents describing how to do the inversion?
Any comments are appreciated! Best Regards.