Adaptive Pixel Intensity Loss generated NaN values while training #9
Comments
The entire stacktrace of the error
Hi,
But how did that run completely fine when using BCE loss alone?
I don't know exactly what dataset you used, so I'm not sure what the problem is.
Actually using
How about this approach? Does it work?
@Karel911 can you help me with removing the edge generation parts? I am facing a similar issue.
I'm also curious about which parts cause this issue. Thanks.
Thanks, let me check this out.
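Since the thread asks which part of the loss produces the NaNs: a common culprit in pixel-intensity-weighted BCE losses is `log(0)` when a predicted probability saturates at exactly 0 or 1, or a division by a vanishing weight sum. Below is a minimal, hedged sketch of a clamped, weighted BCE; the intensity weight here (local average vs. mask) is a hypothetical boundary-style weighting for illustration, not necessarily the repository's exact formulation:

```python
import torch
import torch.nn.functional as F

def stabilized_weighted_bce(pred, mask, eps=1e-6):
    # pred: sigmoid probabilities in [0, 1]; mask: binary ground truth, shape (N, 1, H, W).
    # Clamping keeps log() away from log(0) = -inf, a frequent NaN source.
    pred = pred.clamp(eps, 1.0 - eps)
    # Hypothetical intensity weight: emphasizes pixels that differ from their
    # local average (a boundary-style weighting, assumed for this sketch).
    weight = 1.0 + 5.0 * torch.abs(F.avg_pool2d(mask, 31, 1, 15) - mask)
    bce = -(mask * torch.log(pred) + (1.0 - mask) * torch.log(1.0 - pred))
    # clamp_min on the normalizer guards against division by zero.
    return (weight * bce).sum(dim=(2, 3)) / weight.sum(dim=(2, 3)).clamp_min(eps)
```

If the unclamped version of such a loss is the source of the NaNs, this kind of stabilization would explain why plain BCE (e.g. `F.binary_cross_entropy_with_logits`, which is numerically stable internally) trains fine while the adaptive variant diverges.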
Was training on a custom human dataset.
Batch size = 8
Number of training images = 3800
Number of steps trained before showing the error = 75
After the 75th step, it generated an error:
The model trained successfully when using BCE loss.
We even checked for NaN values using
torch.autograd.set_detect_anomaly(True)
but it returned False, indicating that no NaN values were found.
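One note on the check above: `torch.autograd.set_detect_anomaly(True)` does not return a result itself; it makes the next `backward()` raise at the operation that produced the NaN. To catch the offending step explicitly, it can be paired with a direct finiteness check on the loss. A minimal sketch, with `check_finite` being a hypothetical helper:

```python
import torch

# Anomaly mode makes backward() raise at the op that produced a NaN/Inf;
# it does not itself report whether NaNs exist.
torch.autograd.set_detect_anomaly(True)

def check_finite(loss, step):
    # Hypothetical helper: fail fast with the step number when the loss blows up,
    # instead of silently propagating NaNs into the optimizer.
    if not torch.isfinite(loss).all():
        raise RuntimeError(f"non-finite loss at step {step}")
    return loss
```

Calling `check_finite(loss, step)` before `loss.backward()` each iteration would pinpoint the exact step (here, step 75) at which the adaptive loss first becomes non-finite.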