viz_image mode and load noise vectors #4

Closed · denabazazian opened this issue Apr 22, 2021 · 7 comments

Comments

@denabazazian commented Apr 22, 2021

There is a problem in the generate.py file: probably either the default mode in line #91 should be written as viz_imgs, or line #127 should be written as viz_image. One of these two lines has to be modified for image visualization to work.
Also, in the generate.py file, g_ema is not defined in line #124 when truncation is less than one.
Furthermore, I am wondering whether it would be possible to evaluate the model on input images instead of generated ones.
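On the g_ema point: the usual pattern in rosinality-style StyleGAN2 scripts is to build and load g_ema before the truncation branch. A minimal sketch of the ordering I would expect (the hyper-parameters and checkpoint path are assumptions, not the repo's actual code):

```python
import torch
from model import Generator  # StyleGAN2 generator used by this repo

device = "cuda"
size, latent, n_mlp = 256, 512, 8       # assumed model hyper-parameters
truncation, truncation_mean = 0.5, 4096  # assumed CLI defaults

g_ema = Generator(size, latent, n_mlp).to(device)
g_ema.load_state_dict(torch.load("checkpoint.pt")["g_ema"])  # placeholder path

# g_ema must already exist here; otherwise mean_latent() is called
# on an undefined name whenever truncation < 1.
if truncation < 1:
    with torch.no_grad():
        mean_latent = g_ema.mean_latent(truncation_mean)
else:
    mean_latent = None
```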

Regarding the load_noise option, I am wondering how we can create the noise vectors, or whether there is a link available to download them as noise.pt.

Thanks for your great work.

@utkarshojha (Collaborator)

Thanks for pointing out that issue, I've fixed the generate.py script. I've also added the noise used in our paper, noise.pt.
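For anyone who wants to recreate such a noise file themselves, one option is a minimal sketch like the following, assuming the rosinality-style Generator with its make_noise() helper (the hyper-parameter values are assumptions):

```python
import torch
from model import Generator  # StyleGAN2 generator used by this repo

device = "cuda"
g_ema = Generator(256, 512, 8).to(device)  # size / latent / n_mlp assumed

# make_noise() returns one fixed noise tensor per synthesis layer;
# saving the list produces a noise.pt usable with the load_noise option.
noises = g_ema.make_noise()
torch.save(noises, "noise.pt")
```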

As for the evaluation part, I didn't understand what you meant by using real images instead of generated ones. Can you elaborate?

@denabazazian (Author)

Many thanks for your quick reply and for updating the repository.

Regarding the use of real images: I meant that the adapted generator can only translate images to the target domain that were previously generated by the StyleGAN model. For instance, would it be possible to generate a caricature of a real input image, rather than of a random image generated by StyleGAN?

@utkarshojha (Collaborator)

One possible way to do something like that would be to first embed the real image into the source GAN for FFHQ, and then use the resulting latent vector as input to your adapted GAN. Ideally, since we expect the correspondence to be preserved, the image produced by the adapted GAN should correspond to the real image used as input.
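Concretely, the pipeline would look something like this (a rough sketch against the rosinality StyleGAN2 API; the checkpoint and latent file names are placeholders, not actual repo files):

```python
import torch
from model import Generator  # StyleGAN2 generator used by this repo

device = "cuda"

# Adapted generator (e.g. FFHQ -> caricatures); the path is a placeholder.
g_target = Generator(256, 512, 8).to(device)
g_target.load_state_dict(torch.load("checkpoints/caricatures.pt")["g_ema"])
g_target.eval()

# Latent obtained by embedding the real photo into the *source* FFHQ GAN,
# e.g. with projector.py or an Image2StyleGAN-style optimizer
# (shape [1, n_latent, 512] in W+ space; the file name is a placeholder).
w_plus = torch.load("projected_latent.pt").to(device)

with torch.no_grad():
    img, _ = g_target([w_plus], input_is_latent=True)
```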

@denabazazian (Author) commented Apr 26, 2021

Thanks a lot for your reply. I have used projector.py to get the latent vector of an input image, but the resulting image does not exactly correspond to the real one. The latent vector from that code changes the viewpoint and some features of the input image. Could you please advise me on how to get the latent vector of an input image through the StyleGAN2 architecture? Thanks.

@utkarshojha (Collaborator)

You could use something like Image2StyleGAN, or a more recent variant. Keep in mind that it will be difficult to embed an arbitrary image; the image should ideally contain the main object in the center, etc. Basically, the test image should roughly match the properties of the real images used to train the GAN.

@denabazazian (Author)

That was very helpful advice, thanks a lot!

@endlesswho

Hi, if you use projector.py to generate the latent code and the noise, and then feed the noise tensors into the model during the generation stage, the result should be what you want.
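In other words, something along these lines (a sketch reusing g_target and w_plus from the snippet above; it assumes projector.py saved the optimized per-layer noise list to a file, whose name here is a placeholder):

```python
import torch

# Per-layer noise recovered during projection (placeholder file name).
noises = [n.to(device) for n in torch.load("projected_noise.pt")]

# Injecting the projected noise makes the adapted generator reproduce
# the fine, stochastic details of the embedded photo.
with torch.no_grad():
    img, _ = g_target([w_plus], input_is_latent=True, noise=noises)
```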
