Hi.
I have a problem where the loss becomes Inf and then NaN at a random step.
I suspect that iou and giou becoming very low (or nan, -nan) is the cause of the problem.
We looked for similar situations in the Issues, but could not find the exact cause. These are the hypotheses we checked:
1. The loss increases because the objects to detect are too small relative to the image
=> Because the loss value is accumulated from the delta_yolo_box figures (a sketch of the kind of guard/clamp I mean is below, after this list).
The delta value was at float max; NaN was not generated, but the loss continued to inflate.
ref ) #930, #2783
2. Batch Normalization is not applied, so the network is heavily influenced by certain loss values
=> Even when I set the batch to 1, NaN did not occur
3. Unstable loss calculation when using multiple GPUs early in training
ref ) The Difference of AlexeyAB/Darknet and Pjreddie/Darknet #969, avg loss = -nan when tensor cores are used #2783
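To make hypothesis 1 and the iou/giou suspicion above concrete, here is a minimal sketch of what I mean by a guard/clamp. This is not darknet's actual code: the box fields, EPS, and the clamp limit are my own assumptions.

```c
#include <math.h>

/* Illustrative box: center x/y plus width/height (normalized coords). */
typedef struct { float x, y, w, h; } box_t;

static float overlap_1d(float c1, float l1, float c2, float l2) {
    float left  = fmaxf(c1 - l1 / 2, c2 - l2 / 2);
    float right = fminf(c1 + l1 / 2, c2 + l2 / 2);
    return fmaxf(0.0f, right - left);
}

/* GIoU with explicit guards: a zero union or zero enclosing area would
 * otherwise give inf / -nan, which then propagates into the box deltas
 * and the printed loss. EPS is an assumed small constant. */
static float giou_guarded(box_t a, box_t b) {
    const float EPS = 1e-9f;
    float inter = overlap_1d(a.x, a.w, b.x, b.w) * overlap_1d(a.y, a.h, b.y, b.h);
    float uni   = a.w * a.h + b.w * b.h - inter;
    float cw = fmaxf(a.x + a.w / 2, b.x + b.w / 2) - fminf(a.x - a.w / 2, b.x - b.w / 2);
    float ch = fmaxf(a.y + a.h / 2, b.y + b.h / 2) - fminf(a.y - a.h / 2, b.y - b.h / 2);
    float c  = cw * ch;                        /* smallest enclosing box  */
    float iou = inter / fmaxf(uni, EPS);       /* guard: union can be 0   */
    return iou - (c - fmaxf(uni, EPS)) / fmaxf(c, EPS); /* guard: c can be 0 */
}

/* Clamp a single box delta so one degenerate (tiny) target cannot push
 * the accumulated loss toward float max. The limit is an assumption. */
static float clamp_delta(float d) {
    const float LIMIT = 1e3f;
    if (!isfinite(d)) return 0.0f;             /* drop inf / nan outright */
    return fmaxf(-LIMIT, fminf(LIMIT, d));
}
```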
Below is the log we used to track the problem.
[delta_yolo_box delta] records the parameters used to calculate the box delta loss.
Inf occurs at step 889, and NaN keeps occurring after that.
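For reference, this is the kind of check we added around the loss to catch the exact step where it first leaves the finite range; the names loss and cur_iteration are placeholders, not darknet's real symbols.

```c
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/* Stop as soon as the loss becomes inf or nan, so the offending step
 * (889 in our run) and its batch can be inspected before NaN spreads. */
static void check_loss(float loss, int cur_iteration) {
    if (isinf(loss) || isnan(loss)) {
        fprintf(stderr, "loss = %f at iteration %d, stopping for inspection\n",
                loss, cur_iteration);
        exit(1);
    }
}
```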
We sincerely hope to find out the exact cause and solve it.
Thanks...