about the loss function #40
Comments
When I trained with L1 loss, it was also big.
All three losses are big; what is the problem?
Same for me. My loss is about 1e+4. How about you?
My loss is about 1e+4 too.
That's because the loss is reduced by sum: if you divide it by (GT_size * GT_size * batch_size), you'll find the per-element loss is in the usual range, e.g. 1e-2. You can replace the reduction with 'mean'.
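A minimal sketch of the point being made here, assuming the common Charbonnier formulation used in EDVR-style code (the class layout and eps value are illustrative, not the repo's exact implementation):

```python
import torch
import torch.nn as nn

class CharbonnierLoss(nn.Module):
    """Charbonnier loss: sqrt((x - y)^2 + eps^2), a smooth L1 variant."""

    def __init__(self, eps=1e-6, reduction='sum'):
        super().__init__()
        self.eps = eps
        self.reduction = reduction

    def forward(self, pred, target):
        loss = torch.sqrt((pred - target) ** 2 + self.eps ** 2)
        # 'sum' scales with batch_size * C * GT_size * GT_size;
        # 'mean' gives the per-element value (typically ~1e-2 here).
        return loss.sum() if self.reduction == 'sum' else loss.mean()

# A per-element loss of ~1e-2 summed over a (16, 3, 128, 128) batch is
# roughly 1e-2 * 16 * 3 * 128 * 128 ≈ 7.9e+3, i.e. "about e+4".
```

Note that switching from 'sum' to 'mean' only rescales the gradient (equivalent to using a proportionally smaller learning rate), so either setting can train; only the reported magnitude differs.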
I have another problem: the loss doesn't go down. Does anybody else see this?
@zzzzwj has pointed it out; CharbonnierLoss is in the loss module.
@LI945 During training you may observe that the loss decreases very slowly, but if you evaluate the checkpoints, the performance (PSNR) actually increases as training goes on.
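For anyone evaluating checkpoints as suggested, here is a hypothetical helper for computing PSNR (the function name and the [0, max_val] range assumption are mine, not from the repo):

```python
import math
import torch

def psnr(pred: torch.Tensor, target: torch.Tensor, max_val: float = 1.0) -> float:
    """PSNR = 10 * log10(max_val**2 / MSE), for images scaled to [0, max_val]."""
    mse = torch.mean((pred - target) ** 2).item()
    if mse == 0:
        return float('inf')  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)
```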
I've met the same problem as @LI945 mentioned: when I train on my own datasets, the loss decreases very slowly. When I train a SISR model (for example, EDSR), the PSNR rises quickly, reaching almost its best value of around 37.0 dB within 20~30 epochs. However, when I train EDVR with the unmodified training code, the PSNR rises quickly to ~33.0 dB in the first 10 epochs and then seems to plateau: over the next 20 epochs it increases by less than 1.0 dB. Have you seen the same behavior when training on the REDS or Vimeo90K datasets? And could I have your training log? Hoping for your reply, @xinntao.
@zzzzwj I will upload a training log example tomorrow. Actually, 1) we use a different training scheme with restarts, which improves performance, and 2) we usually measure progress in iterations rather than epochs.
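The restart scheme referred to here could be approximated with PyTorch's built-in cosine annealing with warm restarts; this is only a sketch, since the repo may ship its own restart scheduler, and the learning rate, T_0, and eta_min below are placeholder values, not the paper's settings:

```python
import torch
from torch.optim.lr_scheduler import CosineAnnealingWarmRestarts

model = torch.nn.Conv2d(3, 3, 3, padding=1)  # stand-in for the real network
optimizer = torch.optim.Adam(model.parameters(), lr=4e-4)
# The LR decays along a cosine curve and jumps back to the initial value
# every T_0 steps instead of decreasing monotonically.
scheduler = CosineAnnealingWarmRestarts(optimizer, T_0=50000, eta_min=1e-7)

for iteration in range(600000):  # progress tracked in iterations, not epochs
    optimizer.zero_grad()
    loss = model(torch.randn(1, 3, 16, 16)).abs().mean()  # placeholder loss
    loss.backward()
    optimizer.step()
    scheduler.step()
```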
Well, thanks a lot. |
Hi, when I train with the CharbonnierLoss the loss is very big, but when I train with L1 loss it is normal. What causes this phenomenon? Could you give me some advice?