Getting cls_logits NaN or Inf during training #1683

Closed
AceMcAwesome77 opened this issue Mar 31, 2024 · 1 comment

@AceMcAwesome77

I am training this RetinaNet 3D detection model with mostly the same parameters as the example in this repo, except with batch_size = 1 in the config because many image volumes are smaller than the training patch size. During training, I am getting this error at random, several epochs in:

```
Traceback of TorchScript, original code (most recent call last):
  File "/home/mycomputer/.local/lib/python3.10/site-packages/monai/apps/detection/networks/retinanet_network.py", line 130, in forward
    if torch.isnan(cls_logits).any() or torch.isinf(cls_logits).any():
        if torch.is_grad_enabled():
            raise ValueError("cls_logits is NaN or Inf.")
            ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
        else:
            warnings.warn("cls_logits is NaN or Inf.")
builtins.ValueError: cls_logits is NaN or Inf.
```

On the last few training attempts, this failed on epoch 6 on the first two attempts, then on epoch 12 on the third attempt, so it can make it through all of the training data without failing on any particular case. Does anyone know what could be causing this? If it's exploding gradients, is there something built into MONAI to clip them and prevent the training from crashing? Thanks!


KumoLiu commented Apr 1, 2024

Hi @AceMcAwesome77,

The error message you're encountering, "cls_logits is NaN or Inf.", indicates that at some point during training the cls_logits tensor contains NaN (Not a Number) or Inf (Infinity) values.
This can happen for various reasons: a learning rate that's too high, numerical instabilities, uninitialized variables, or a problem with the specific data you're feeding into the model. It's a sign that the model is diverging and the gradients are getting out of control, which can also stem from exploding or vanishing gradients.
You can indeed try to mitigate this with gradient clipping, which caps the gradients at a chosen threshold before each optimizer step; a minimal sketch is shown below. Keep in mind, though, that gradient clipping doesn't necessarily resolve the root cause of the problem.
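For example, a rough sketch of gradient clipping in a plain PyTorch training loop (the names `model`, `optimizer`, `train_loader`, `compute_loss`, and the `"image"`/`"label"` keys are placeholders for whatever your training script already uses, and `max_norm=1.0` is just an illustrative value):

```python
import torch

# Placeholder loop: assumes model, optimizer, train_loader, and compute_loss
# are already defined in your training script.
for batch_data in train_loader:
    optimizer.zero_grad()
    outputs = model(batch_data["image"])
    loss = compute_loss(outputs, batch_data["label"])
    loss.backward()

    # Clip the global gradient norm before the optimizer step so that a
    # single bad batch cannot blow up the weights. max_norm is tunable.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)

    optimizer.step()
```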
I would recommend looking at your training process more holistically: inspect the learning rate, look for possible issues in the data (a quick input check is sketched below), try normalizing the inputs, or use a different weight-initialization technique.
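To rule out problematic cases in the data, you could add a small sanity check on the inputs before the forward pass. This is only a sketch; the helper name and the `"image"` key are hypothetical and should be adapted to your batch dictionary:

```python
import torch

def check_finite(name: str, tensor: torch.Tensor) -> bool:
    """Hypothetical helper: return True if the tensor contains only finite values."""
    ok = bool(torch.isfinite(tensor).all())
    if not ok:
        # Print a short diagnostic so the offending case can be tracked down.
        print(f"{name} contains NaN/Inf (min={tensor.min().item()}, max={tensor.max().item()})")
    return ok

# Inside the training loop, before the forward pass:
# if not check_finite("inputs", batch_data["image"]):
#     continue  # or log the case's metadata and inspect it offline
```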

Hope it helps, thanks.

@Project-MONAI Project-MONAI locked and limited conversation to collaborators Apr 1, 2024
@KumoLiu KumoLiu converted this issue into discussion #1684 Apr 1, 2024

