About training #7

Closed
shengkelong opened this issue Aug 29, 2021 · 15 comments
Labels: solved ✅ bug is fixed or problem is solved

@shengkelong

Thank you for your work. I tried to train SwinIR and found that although training the small SwinIR is smooth, the loss often suddenly doubles when dim is changed to 180. Because of memory limitations, my batch_size=16 and lr=1e-4. Are there any special tricks to make the training stable?

@JingyunLiang (Owner) commented Aug 29, 2021

Do you mean that training SwinIR (middle size, dim=180) sometimes has a doubled loss? I don't think it is a problem. When some images in a training batch are hard to reconstruct, the loss of that batch will be large. If you look at the PSNR on the validation set, the model converges smoothly.

In our implementation, all settings are similar to CNN-based SR models and we did not use any special tricks. Detailed settings can be found in the supplementary material. The training code will be released in KAIR in a few days.

@shengkelong (Author)

I don't mean that the loss of a single batch suddenly doubled; rather, the average loss over 100 batches doubled, and the PSNR also drops by close to 1 dB and then recovers after a few epochs.

@JingyunLiang (Owner)

We trained SwinIR (middle size, dim=180) for 500K iterations with batch_size=32. The learning rate is initialized as 2e-4 and halved at [250K, 400K, 450K, 475K]. We use the Adam optimizer (betas=[0.9, 0.99]) without weight decay. The loss is the mean L1 pixel loss. The training loss and PSNR on the validation set (Set5) are attached as follows.

We did not notice any sudden large PSNR drop on the validation set.
[Figure: swinir_div2k_x2_loss_psnr — training loss (top) and Set5 validation PSNR curves over 500K iterations]
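
For reference, here is a minimal PyTorch sketch of that schedule. The model, patch sizes, and data below are placeholders (the real training uses SwinIR and DIV2K patches via KAIR); only the optimizer, learning-rate milestones, and loss follow the settings described above.

```python
import torch
import torch.nn as nn

# Toy stand-in for the SR network (the real model would be SwinIR with dim=180).
model = nn.Sequential(nn.Conv2d(3, 12, 3, padding=1), nn.PixelShuffle(2))

# Adam without weight decay, betas=[0.9, 0.99], initial lr 2e-4, as stated above.
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4, betas=(0.9, 0.99), weight_decay=0)

# Halve the learning rate at 250K, 400K, 450K and 475K iterations.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[250_000, 400_000, 450_000, 475_000], gamma=0.5)

criterion = nn.L1Loss()  # mean L1 pixel loss

for step in range(500_000):               # 500K iterations with batch_size=32
    lr_patch = torch.rand(32, 3, 48, 48)  # dummy low-resolution batch (placeholder data)
    hr_patch = torch.rand(32, 3, 96, 96)  # dummy high-resolution target for x2 SR
    loss = criterion(model(lr_patch), hr_patch)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()                      # the LR schedule is stepped per iteration
```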

@shengkelong (Author)

Thank you, I will check my code and explore whether the slight parameter differences have a big impact.

@hcleung3325 commented Aug 30, 2021

> We trained SwinIR (middle size, dim=180) for 500K iterations with batch_size=32. [...] We did not notice any sudden large PSNR drop on the validation set.

Hi, thank you for your work.
May I know whether SwinIR (for image SR) needs to be trained as a GAN with a discriminator?
If I just train for image SR, can I train it without the GAN?
Thanks.

@JingyunLiang (Owner)

It depends on what you want. If you only care about PSNR, training with a pixel loss is enough (the first stage). PSNR provides a good quantitative metric for comparing different methods, but models trained only with a pixel loss often do not have good visual quality.

If you want better visual quality, you should fine-tune the model from the first stage using a combination of pixel loss, perceptual loss and GAN loss, but this will decrease the PSNR.
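
As a rough illustration of that second stage, here is a sketch of such a combined objective in PyTorch. The loss weights, the VGG layer cut-off, and the plain (non-relativistic) adversarial loss are assumptions for illustration, not the exact SwinIR-GAN recipe.

```python
import torch
import torch.nn as nn
import torchvision

# Frozen VGG19 feature extractor for the perceptual loss (layer choice is an assumption).
# ImageNet input normalization is omitted for brevity.
vgg = torchvision.models.vgg19(weights="IMAGENET1K_V1").features[:35].eval()
for p in vgg.parameters():
    p.requires_grad_(False)

l1 = nn.L1Loss()
adv = nn.BCEWithLogitsLoss()

# Hypothetical loss weights; the values actually used for SwinIR-GAN are not given here.
w_pix, w_perc, w_gan = 0.01, 1.0, 0.005

def generator_loss(sr, hr, disc):
    """Second-stage objective: pixel + perceptual + adversarial loss."""
    pix = l1(sr, hr)                                   # pixel loss, as in the first stage
    perc = l1(vgg(sr), vgg(hr))                        # perceptual loss on VGG features
    pred_fake = disc(sr)                               # discriminator logits for the SR output
    gan = adv(pred_fake, torch.ones_like(pred_fake))   # generator tries to look "real"
    return w_pix * pix + w_perc * perc + w_gan * gan
```

In practice the discriminator is trained in alternation with its own real/fake loss, and the generator is initialized from the PSNR-oriented first-stage weights rather than from scratch.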

By the way, I might know why you suffer from a sudden large drop of PSNR. GAN training is very unstable. Generally you should fine-tune the model from the first stage, instead of training it from scratch. The EMA strategy can also help stabilize the convergence. Note that PSNR is also not a good metric when you are training towards good visual quality.
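
A minimal sketch of the EMA strategy mentioned above, assuming an exponential moving average over the model weights updated after each optimizer step; the decay value 0.999 is an assumption, not a value from the paper.

```python
import copy
import torch

def create_ema(model):
    """Keep a frozen copy of the model to hold the exponentially averaged weights."""
    ema_model = copy.deepcopy(model)
    for p in ema_model.parameters():
        p.requires_grad_(False)
    return ema_model

@torch.no_grad()
def update_ema(ema_model, model, decay=0.999):  # decay is an assumed value
    """ema_w <- decay * ema_w + (1 - decay) * w; call after every optimizer.step()."""
    for ema_p, p in zip(ema_model.parameters(), model.parameters()):
        ema_p.mul_(decay).add_(p, alpha=1 - decay)
```

Validation can then be run on the EMA weights, which evolve more smoothly than the raw training weights.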

@yzcv commented Sep 6, 2021

> The loss is the mean L1 pixel loss. The training loss and PSNR on the validation set (Set5) are attached as follows.

Hi, @JingyunLiang

Thanks for the loss plot. May I ask, based on your experience, whether it is normal for a transformer framework that the training loss oscillates severely? I am currently training a transformer and the loss just seems to fluctuate repeatedly, with no trend of convergence. Do you think this is a normal phenomenon for most transformers?

Thanks very much.

@JingyunLiang (Owner)

I don't think so. There is no such problem, as you can see in Fig. 3(f) of the paper. By the way, our training code will be released in 1-2 days. Please use that for training. Thank you.

@yzcv commented Sep 6, 2021

Yes, I see. Fig. 3(f) is the PSNR plot. As shown in your earlier reply, the PSNR is stable, but the L1 loss oscillates. I am confused about the fluctuation of the L1 loss. Thanks a lot. @JingyunLiang

@JingyunLiang (Owner)

PSNR and the L1 loss on the validation set are highly related because PSNR contains an MSE(pred, gt) term. If the validation PSNR is stable, the validation loss should also be stable. The training loss is shown in the top figure of my previous answer; it may fluctuate a bit because each batch contains different images (some of them are hard to super-resolve).

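Concretely, PSNR = 10 · log10(MAX² / MSE(pred, gt)), so a stable validation PSNR implies a stable validation error. A minimal sketch:

```python
import torch

def psnr(pred, gt, max_val=1.0):
    """PSNR in dB, computed from the MSE between prediction and ground truth."""
    mse = torch.mean((pred - gt) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)
```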

@yzcv commented Sep 6, 2021

Thanks so much for your explanation! In this case, have you tried increasing the batch size to reduce this training loss fluctuation? Does a larger batch size alleviate this instability?

@JingyunLiang (Owner)

No, I always use batch_size=32 (so we only need 500K iterations). You can try it later with our training code.

@yzcv commented Sep 6, 2021

I see. Thank you very much. I think 32 is large enough for image-to-image tasks, given the huge cost of transformers.

JingyunLiang added the solved ✅ (bug is fixed or problem is solved) label on Sep 16, 2021
@JingyunLiang (Owner)

Feel free to reopen it if you have more questions.
