Some problems encountered during training #283

Closed · strawberrieszd opened this issue Aug 16, 2022 · 3 comments
@strawberrieszd

Hello, thank you very much for your research. When I trained on a dataset of 16,000 images using 2 GPUs with 24 GB of memory each, the training parameters were as shown below. After a week of training, the test images looked as shown below; I am wondering whether any of the parameters are set incorrectly.
[attached: screenshots of the training parameters and test results]

@yuval-alaluf (Collaborator)

Most of the parameters seem fine. I would turn off the ID loss, since it's designed for human faces and will therefore probably not work well on your domain.
However, you definitely have some other problems in the training. The code as-is doesn't support multi-GPU training, and you mentioned that you ran on two GPUs. Does this mean that you made changes to the code?
Did you try running a sanity check by trying to overfit on, say, 10 images, to make sure that everything works as expected?
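For the sanity check, something along these lines is usually enough. This is a minimal sketch assuming a standard PyTorch `Dataset`/`DataLoader` training setup; `full_train_dataset`, the batch size, and the step count are illustrative and not taken from this repository:

```python
# A minimal sketch of the suggested overfitting sanity check, assuming a
# standard PyTorch Dataset/DataLoader training setup. `full_train_dataset`
# and the loader settings are illustrative, not taken from this repository.
from torch.utils.data import DataLoader, Subset

tiny_train_set = Subset(full_train_dataset, indices=list(range(10)))  # only 10 images
tiny_loader = DataLoader(tiny_train_set, batch_size=2, shuffle=True)

# Train on tiny_loader for a few thousand steps. The encoder should reach
# near-perfect reconstructions of these 10 images; if it does not, the data
# pipeline, the loss weights (e.g. a face ID loss applied to a non-face
# domain), or any multi-GPU modifications are the first places to look.
```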

@strawberrieszd (Author) commented Aug 21, 2022 via email

@yuval-alaluf (Collaborator)

I don't see any image attached.
In any case, I would try doing what I recommended and overfit on a very small set of images.
Another important thing to verify is that the average image of your generator looks reasonable.
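To inspect the generator's average image, something like the following works for a rosinality-style StyleGAN2 generator. The `decoder` object and its `mean_latent` method are assumptions about that implementation, not confirmed against this repository's code:

```python
# A minimal sketch of visualizing the generator's average image, assuming a
# rosinality-style StyleGAN2 Generator (`decoder`) is already loaded.
import torch
from torchvision.utils import save_image

with torch.no_grad():
    latent_avg = decoder.mean_latent(10_000)          # mean W latent from 10k z samples
    avg_image, _ = decoder([latent_avg],
                           input_is_latent=True,
                           randomize_noise=False)
    save_image((avg_image + 1) / 2, "avg_image.jpg")  # rescale [-1, 1] -> [0, 1]

# If this image looks like noise or a degenerate texture, the pretrained
# StyleGAN weights for the new domain are a more likely culprit than the
# encoder's training parameters.
```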
