
Training batch size and other parameters? #62

Closed
yzhouas opened this issue Dec 4, 2021 · 2 comments


@yzhouas

yzhouas commented Dec 4, 2021

Hi, thanks for the great work!
I am trying to reproduce the training results.
I used the default batch size and ran the lama-fourier model on 4 V100 GPUs for 40 epochs. The training takes about 12 hours, and the results on the training dataset look very good, but things go wrong on the test set and other validation images: there are texture artifacts like the one below.

[image: inpainting result showing texture artifacts]

I wonder what the reason could be: batch size, number of training epochs, or something else?
If I set the batch size to 10, the training time for lama-fourier becomes too long.

How long should training usually take on 4 V100 GPUs (1 day or 1 week), and what batch size should be set so that the model generalizes well to other images?

Thanks so much!

@windj007
Collaborator

windj007 commented Dec 6, 2021

Hi!

I wonder what the reason could be: batch size, number of training epochs, or something else?

What dataset do you use? None of the above parameters alone should break training that severely.

The configs in this repository contain the actual parameter values we used to train the models for the paper, so please refer to them for the exact values.
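
If it helps, here is a minimal sketch of how one might dump the relevant hyperparameters from such a config; the file path and key names are illustrative assumptions, not necessarily the repository's exact layout:

```python
# Minimal sketch: print training hyperparameters from a YAML config.
# NOTE: the path and key names below are assumptions for illustration;
# check the configs/ directory of the repository for the real layout.
import yaml

with open("configs/training/lama-fourier.yaml") as f:  # hypothetical path
    cfg = yaml.safe_load(f)

data_cfg = cfg.get("data", {})
print("per-GPU batch size:", data_cfg.get("batch_size"))
print("optimizer settings:", cfg.get("optimizers"))
```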

The training takes about 12 hours

12 hours per epoch or for the full training? If you mean 12 hours for the full training, that is too little.

We trained the models for the paper for 1M iterations on 3x V100 with a total batch size of 30. Training a single model on Places should take approximately 1 week; Big LaMa-Fourier trained for approximately two weeks. CelebA is much smaller and less diverse than Places, so convergence on CelebA is much faster, approximately 1-2 days.
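
For a rough sanity check of your own run, a back-of-envelope calculation using only the numbers above (1M iterations, total batch size 30, about one week of wall-clock time) looks like this:

```python
# Back-of-envelope throughput implied by the figures quoted above.
iterations = 1_000_000        # total training iterations
total_batch_size = 30         # across all GPUs
wall_clock_days = 7           # "approximately 1 week" for a Places model

samples_seen = iterations * total_batch_size               # 30,000,000 images
sec_per_iter = wall_clock_days * 24 * 3600 / iterations    # ~0.6 s per iteration

print(f"images seen during training: {samples_seen:,}")
print(f"average time per iteration:  {sec_per_iter:.2f} s")
```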

If I set the batch size to 10, the training time for lama-fourier becomes too long.

Please be aware that with DDP, data.batch_size sets the batch size per GPU (not the total batch size). If you mean that 10 is the total batch size, then it is too small: we found that quality degrades when the batch size is below 20. But a small batch size alone should not break training that severely.
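
To make the arithmetic concrete, here is a minimal sketch of how the effective batch size comes out under DDP (the numbers are just this thread's example, not values taken from the repo's configs):

```python
# Under DDP, each process/GPU draws its own data.batch_size samples per step,
# so the optimizer effectively sees data.batch_size * world_size samples.
import torch.distributed as dist

per_gpu_batch_size = 10  # the value of data.batch_size in the config
world_size = dist.get_world_size() if dist.is_initialized() else 4  # e.g. 4 V100s

effective_batch_size = per_gpu_batch_size * world_size
print(f"effective batch size: {effective_batch_size}")  # 40 with 4 GPUs at 10 per GPU
```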

@yzhouas
Author

yzhouas commented Dec 6, 2021

Thanks for your reply. I found that I had not updated the codebase (the old code had some errors in preparing the training data), so the training data was not sufficient. I am redoing everything.

I also found the config files for your pre-trained model, and those are very helpful.

Thanks again.
