Hi, thanks for the great work!
I am trying to reproduce the training results.
I used the default batch size and ran the lama-fourier model on 4 V100 GPUs for 40 epochs. The training takes about 12 hours, and the results on the training dataset look very good, but it goes wrong on the test and other validation images, producing texture artifacts like this.
I wonder what the reason could be: batch size, number of training epochs, or something else?
If I set the batch size to 10, the training time for lama-fourier will be too long.
How long should training on 4 V100 GPUs usually take (1 day or 1 week), and what batch size should be set so that the model generalizes well to other images?
Thanks so much!
I wonder what the reason could be: batch size, number of training epochs, or something else?
What dataset do you use? None of the above parameters alone should break training that severely.
The configs in this repository contain the actual parameter values we used to train the models for the paper, so please refer to them for the exact values.
The training takes about 12 hrs
12 hours per epoch or per full training? If you mean 12h per full training, this is too little.
We trained the models for the paper for 1M iterations on 3xV100 with a total batch size of 30. Training a single model on Places should take approximately 1 week. Big Lama Fourier trained for approximately two weeks. CelebA is much smaller and less diverse than Places, so convergence on CelebA is much faster, approximately 1-2 days.
If I set the batch size to 10, the training time on lama-fourier will be too long.
Please be aware that with DDP, data.batch_size sets the batch size per GPU (not the total batch size). If you mean that 10 is the total batch size, then it is too small: we found that quality degrades when the batch size is below 20. But a small batch size alone should not break training that severely.
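To make the per-GPU vs. total distinction concrete, here is a minimal illustrative sketch (not code from the LaMa repository; the helper name is hypothetical): under DDP each process holds one GPU and draws its own batch, so the effective batch per optimizer step is the configured per-GPU size multiplied by the number of GPUs.

```python
def effective_batch_size(per_gpu_batch: int, num_gpus: int) -> int:
    """Total samples consumed per optimizer step under DDP,
    where each of the num_gpus processes loads per_gpu_batch samples."""
    return per_gpu_batch * num_gpus

# With data.batch_size=10 on 3 GPUs, the effective batch is 30,
# matching the paper setup described above.
print(effective_batch_size(10, 3))  # → 30

# The same config value on a single GPU gives only 10 total,
# below the ~20 threshold where quality was observed to degrade.
print(effective_batch_size(10, 1))  # → 10
```

So "batch size 10" means very different things depending on whether it is per GPU or total; check which one your config controls before comparing against the paper's setup.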
Thanks for your reply! I found that I had not updated the codebase (the old code had some errors in preparing the training data), so the training data was insufficient. I am redoing everything.
I also found the config files for your pre-trained models, and those are very helpful.