Hello, thanks for the repo!

I'm trying to train GIRAFFE HD on a version of CompCar with the background removed, at 256x256 resolution. Based on what I read in this other issue, I trained the model for 50k iterations on 8 GPUs with batch size 32, which should be equivalent in terms of total images seen. However, my generated images are not looking good. These are some examples from my last checkpoint:
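As a quick sanity check on the "equivalent in terms of total images" claim, the total depends on whether the batch size in the config is global or per-GPU (I'm not sure which reading the training script uses, so both are shown):

```python
def images_seen(iterations: int, batch_size: int, gpus: int = 1,
                batch_is_per_gpu: bool = False) -> int:
    """Total images consumed during training.

    Whether `batch_size` is the global batch or the per-GPU batch depends
    on how the training script interprets the config (assumption: it could
    be either), so the effective per-step count differs by a factor of `gpus`.
    """
    per_step = batch_size * gpus if batch_is_per_gpu else batch_size
    return iterations * per_step

# 50k iterations, batch 32 read as a *global* batch:
print(images_seen(50_000, 32))                                  # 1600000
# same numbers if 32 is actually *per-GPU* across 8 GPUs:
print(images_seen(50_000, 32, gpus=8, batch_is_per_gpu=True))   # 12800000
```

An 8x difference in images seen would easily explain under-trained samples, so it may be worth confirming which convention the config uses before extending training.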
Do you think this is a problem with the number of iterations (i.e., should I continue training for longer), or could something in the training config be wrong as well?
Thanks in advance!
Sorry for the late reply. Based on your description and the images you've provided, it seems that the results are indeed not as good as expected.
One possible problem could stem from the way you're removing the background from your dataset. If you're using a binary mask for this, it may inadvertently give the discriminator a very clear signal: the generator struggles to reproduce the sharp edges the mask creates, which tips the balance of training in favor of the discriminator.
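One common mitigation for those sharp mask edges is to feather the binary mask into a soft alpha matte before compositing. Here is a minimal NumPy sketch (the function name and radius/pass defaults are my own, not from the repo), which softens a 0/1 mask with a separable box blur:

```python
import numpy as np

def feather_mask(mask: np.ndarray, radius: int = 2, passes: int = 2) -> np.ndarray:
    """Soften a binary (H, W) mask into a [0, 1] alpha matte by repeated
    separable box blurring. Hypothetical helper; radius/passes are assumptions
    to tune per dataset."""
    m = mask.astype(np.float32)
    k = 2 * radius + 1
    h, w = m.shape
    for _ in range(passes):
        # horizontal pass: average each pixel over a window of width k
        padded = np.pad(m, radius, mode="edge")
        m = np.stack([padded[radius:-radius, i:i + w] for i in range(k)]).mean(0)
        # vertical pass: same, over a window of height k
        padded = np.pad(m, radius, mode="edge")
        m = np.stack([padded[i:i + h, radius:-radius] for i in range(k)]).mean(0)
    return np.clip(m, 0.0, 1.0)
```

The soft matte is then used for compositing, e.g. `out = fg * m[..., None] + bg * (1.0 - m[..., None])`, so edge pixels blend into the background instead of forming a hard boundary the discriminator can latch onto.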
You can confirm whether this is the case by monitoring the training losses. If the discriminator's loss is consistently very low (near zero), that likely indicates this is the root of the problem.
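That monitoring step can be automated with a small heuristic over the logged discriminator losses. A sketch (the function name, window, and threshold are assumptions, not part of the repo, and the threshold depends on which GAN loss formulation the script logs):

```python
def discriminator_dominating(d_losses, window: int = 100, threshold: float = 0.1) -> bool:
    """Return True if the discriminator loss, averaged over the last `window`
    logged steps, has collapsed below `threshold` -- a sign the discriminator
    is winning too easily. Both defaults are assumptions to tune per setup."""
    recent = list(d_losses)[-window:]
    if not recent:
        return False
    return sum(recent) / len(recent) < threshold
```

If this fires early in training, it supports the binary-mask explanation above; if the loss stays in a healthy range, longer training is the more likely fix.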