Running inference on a CPU #50

Closed
cyprian opened this issue Dec 3, 2020 · 3 comments

cyprian commented Dec 3, 2020

Thank you for sharing this code.

In the README you say it might be possible to run this on a CPU. I am specifically interested in running inference on a CPU. Can you point out what needs to be changed in order to adapt inference to run on a CPU?

Also, a side question on parameter tuning for training:
Which parameters should I tune to improve the model's ability to capture fine details such as facial marks (freckles, moles, wrinkles)? My model, trained with the parameters below, seems to omit these details.
--lpips_lambda=0.8
--l2_lambda=1
--id_lambda=0
--w_norm_lambda=0.005
--lpips_lambda_crop=0.8

Thanks again!

@yuval-alaluf
Collaborator

Hi @cyprian,
Please see issue #18 for more details about running inference on CPU.
From the comments in that issue, it seems like sdhnshu's fork of rosinality's StyleGAN2 code has support for inference on CPU, although I have not tested it myself.
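
In general, the changes amount to loading the checkpoint with `map_location='cpu'` and replacing any `.cuda()` calls with `.to(device)`. Here is a minimal sketch of that pattern in PyTorch; the stand-in module and checkpoint path are placeholders for illustration, not the repo's actual code:

```python
import torch
import torch.nn as nn

device = torch.device('cpu')

# Stand-in module for illustration; in the repo you would construct the
# actual pSp network from the options stored in the checkpoint.
net = nn.Sequential(nn.Conv2d(3, 3, kernel_size=3, padding=1))

# 1. Load the checkpoint onto the CPU so CUDA-saved tensors are remapped:
# ckpt = torch.load('path/to/checkpoint.pt', map_location=device)
# net.load_state_dict(ckpt['state_dict'])

# 2. Replace .cuda() calls with .to(device) and switch to eval mode.
net.to(device)
net.eval()

# 3. Keep inputs on the CPU as well.
x = torch.randn(1, 3, 256, 256, device=device)
with torch.no_grad():
    out = net(x)
print(out.shape)
```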

Regarding your second question, what task are you trying to train on?


cyprian commented Dec 4, 2020

Thank you for the fast reply.

I am trying to use this network for photo denoising, specifically for removing lens flare from photos.
I have paired images (with light noise/without noise).

The flare is removed well, but the generated image omits many details.
In a sense, the unique identity of the input image is not preserved in enough detail in the output image.

@yuval-alaluf
Collaborator

I am not sure for how many iterations you trained your model, but to give you an idea, we trained our encoder for approximately 300,000 iterations with a batch size of 8. Based on some of the results shown in our README, we were able to preserve wrinkles. However, preserving very small details such as moles may still be difficult.
Regarding the parameters, if you are working on facial images, the ID loss is significant in getting good reconstruction results. Other than that, I would explore setting --w_norm_lambda=0, since the regularization loss pushes images closer to the average latent vector and could therefore result in a loss of small details.
Finally, the lpips_lambda_crop loss was used in our frontalization task in order to give a larger weight to the inner region of the image. I am not sure this loss makes sense in your task, so to reduce unknowns, I would start by omitting it (i.e., setting it to 0).
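Concretely, relative to the flags you posted, a starting point might look like the following (the id_lambda value here is only an illustrative placeholder, assuming facial images; tune it to your data):

--lpips_lambda=0.8
--l2_lambda=1
--id_lambda=0.1
--w_norm_lambda=0
--lpips_lambda_crop=0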
I hope you find these recommendations useful.

cyprian closed this as completed Dec 6, 2020