
gan inversion code release #3

Open
HollyDQWang opened this issue Mar 1, 2022 · 3 comments

Comments

@HollyDQWang

Will the code for GAN inversion be released?

@JC1DA

JC1DA commented Sep 13, 2022

I am also interested in GAN inversion for generating shapes from custom images. @XingangPan, is there any ETA? Thanks.

@XingangPan
Owner

I have just uploaded the scripts for the GAN inversion part. You may refer to this example: https://github.com/XingangPan/ShadeGAN/blob/main/scripts/inversion.sh. You will need to pull the new commit and download the model weights again.
To invert your own image, you first need to align it to match the CelebA dataset. You can do this either manually or with the CelebA alignment tool at https://github.com/XingangPan/ShadeGAN/blob/main/inversion/align_face_celeba.m. (This requires five facial landmarks as input, which can be obtained with a landmark detection tool such as dlib.) For the second option, you also need to pass --crop to inversion.py.
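For anyone who prefers to stay in Python rather than use the MATLAB alignment script, the core of landmark-based alignment is estimating a 2-D similarity transform (scale, rotation, translation) from the five detected landmarks to a fixed template. Below is a minimal NumPy sketch of that estimation using the Umeyama method; note that the template coordinates in the usage example are purely illustrative, not the actual CelebA template used by align_face_celeba.m.

```python
import numpy as np

def similarity_transform(src, dst):
    """Estimate the least-squares 2-D similarity transform mapping src
    landmarks onto dst landmarks (Umeyama method).

    src, dst: (N, 2) arrays of corresponding points.
    Returns a 2x3 affine matrix M such that M @ [x, y, 1]^T ~= dst point,
    which can be passed directly to cv2.warpAffine to align an image.
    """
    src = np.asarray(src, dtype=np.float64)
    dst = np.asarray(dst, dtype=np.float64)
    src_mean = src.mean(axis=0)
    dst_mean = dst.mean(axis=0)
    src_c = src - src_mean
    dst_c = dst - dst_mean
    # Cross-covariance between the centred point sets
    cov = dst_c.T @ src_c / len(src)
    U, S, Vt = np.linalg.svd(cov)
    # Guard against a reflection in the recovered rotation
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()
    t = dst_mean - scale * R @ src_mean
    return np.hstack([scale * R, t[:, None]])  # 2x3 affine matrix
```

With five landmarks from dlib (or any detector), one would estimate `M = similarity_transform(detected_pts, template_pts)` and warp the image with `cv2.warpAffine(img, M, (width, height))`. Again, this is a sketch of the general technique, not the exact alignment performed by the repository's tool.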

@zhanghongyong123456


Given an input portrait and a background image, how can I extract the illumination information from the background image to use as the basis for relighting the portrait?
