
Why is the loss so big in the training? #51

Closed
kzq666666 opened this issue May 14, 2021 · 6 comments
@kzq666666

Deblurring task

The configuration I changed is as follows:
BATCH_SIZE: 16, TRAIN_PS: 64, VAL_PS: 64
Why is the training loss still so large at epoch 140?

[screenshot: training loss]

@adityac8
Collaborator

Hi

You can try using gradient clipping, applied between the backward pass and the optimizer step:

loss.backward()
# clip the global gradient norm to 0.01 so one bad batch cannot blow up the update
torch.nn.utils.clip_grad_norm_(model_restoration.parameters(), 0.01)
optimizer.step()

Thanks

@FrankLinxzx

FrankLinxzx commented May 20, 2021

[screenshot: training loss]

I ran into this too. Where exactly should I add this code? Thanks!

@adityac8
Collaborator

Hi @FrankLinxzx

You can add these lines here, at the point in the training loop where loss.backward() is called.
Thanks
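To make the placement concrete, here is a minimal, runnable sketch of a training step with the clipping lines in place. The name `model_restoration` and the max-norm value 0.01 come from the thread; the tiny linear model, L1 loss, and random data are stand-ins for the actual restoration model and dataloader.

```python
import torch
import torch.nn as nn

# Stand-in for the actual restoration network and data.
model_restoration = nn.Linear(8, 8)
optimizer = torch.optim.Adam(model_restoration.parameters(), lr=2e-4)
criterion = nn.L1Loss()

inp = torch.randn(4, 8)
target = torch.randn(4, 8)

# One training step with gradient clipping.
optimizer.zero_grad()
loss = criterion(model_restoration(inp), target)
loss.backward()
# Rescale gradients so their global L2 norm is at most 0.01,
# preventing a single bad batch from producing a huge update.
torch.nn.utils.clip_grad_norm_(model_restoration.parameters(), 0.01)
optimizer.step()
```

Note that `clip_grad_norm_` must run after `loss.backward()` (gradients exist) and before `optimizer.step()` (gradients are consumed); clipping after the step has no effect.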

@FrankLinxzx

Thanks so much!

@bathcenter

Hello. Regarding TRAIN_PS: 64 and VAL_PS: 64: I understand TRAIN_PS is the input image size for training, but what does the PS in VAL_PS stand for?

@adityac8
Collaborator

Hi @bathcenter

Could you add gradient clipping and see if that resolves the issue?

Thanks
