Running inference on a CPU #50
Hi @cyprian, regarding your second question: what task are you trying to train on?
Thank you for the fast reply. I am trying to use this network for photo denoising, specifically for removing light flare from photos. The flare is removed well, but the generated image omits many details.
I am not sure how many iterations you trained your model for, but to give you an idea, we trained our encoder for approximately 300,000 iterations with a batch size of 8. Based on some of the results we showed in our README, we were able to preserve wrinkles. However, preserving very small details such as moles may still be difficult.
Thank you for sharing this code.
In the README you say it might be possible to run this on a CPU. I am specifically interested in running inference on a CPU. Can you point out what needs to be changed in order to run inference on a CPU?
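For reference, a minimal sketch of the usual PyTorch pattern for CPU-only inference (the tiny `nn.Linear` stand-in, the checkpoint path, and the variable names are illustrative assumptions, not this repository's actual code):

```python
import os
import tempfile

import torch
import torch.nn as nn

device = torch.device('cpu')

# Tiny stand-in network; the real encoder checkpoint would be handled the same way.
net = nn.Linear(4, 2)
path = os.path.join(tempfile.gettempdir(), 'ckpt.pt')
torch.save(net.state_dict(), path)

# map_location remaps any CUDA-saved tensors onto the CPU at load time,
# so a GPU-trained checkpoint can be deserialized on a CPU-only machine.
state = torch.load(path, map_location=device)
net.load_state_dict(state)
net.to(device).eval()

with torch.no_grad():
    out = net(torch.randn(1, 4))
print(out.shape)  # torch.Size([1, 2])
```

Beyond loading with `map_location`, any hard-coded `.cuda()` calls in the inference script would also need to be replaced with `.to(device)`.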
Also a side question on parameters tuning for training.
What parameters should I tune in order to improve the model's ability to include more details such as facial marks (freckles, moles, wrinkles)? It seems my model, trained with the parameters below, is omitting these details.
--lpips_lambda=0.8
--l2_lambda=1
--id_lambda=0
--w_norm_lambda=0.005
--lpips_lambda_crop=0.8
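For context, the flags above would be passed to the repository's training script along these lines (the script name, dataset type, output directory, and batch size here are assumptions; check the repository's README for the exact entry point and required arguments):

```shell
python scripts/train.py \
  --dataset_type=ffhq_encode \
  --exp_dir=experiments/denoising \
  --batch_size=8 \
  --lpips_lambda=0.8 \
  --l2_lambda=1 \
  --id_lambda=0 \
  --w_norm_lambda=0.005 \
  --lpips_lambda_crop=0.8
```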
Thanks again!