@AlexeyAB, where did the values for iou_normalizer, cls_normalizer and obj_normalizer come from? I found another implementation with very large scaling.
@Grabber
Where did you find it?
@AlexeyAB, here: https://github.com/dog-qiuqiu/Yolo-FastestV2/blob/b27b667a8c6e79e8003d9265cfecaa9a40e4bc2e/utils/loss.py#L203
This is similar to what we use, while they use: https://github.com/dog-qiuqiu/Yolo-FastestV2/blob/b27b667a8c6e79e8003d9265cfecaa9a40e4bc2e/utils/loss.py#L203
So the ratio is similar, just the LR is much higher, but maybe they use some other additional scales.
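To make that concrete, here is a quick arithmetic check using the numbers quoted in this thread (the mapping of each value to a specific normalizer is my own assumption, not taken from either repository): the element-wise ratio between the two sets of scales is a single constant.

```python
# Assumed mapping (iou, cls, obj) -- my guess from the numbers quoted
# above, not taken from either repository.
darknet_scales = (0.05, 0.5, 0.4)
fastestv2_scales = (3.2, 32, 25.6)

for d, f in zip(darknet_scales, fastestv2_scales):
    print(round(f / d, 6))   # 64.0, 64.0, 64.0

# All three ratios are exactly 64, so the two sets differ only by a
# constant factor -- the relative weighting of the loss terms is the same.
```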
@AlexeyAB, yes, the ratios are equal, except for the obj_normalizer, which is 20% smaller. But I'm still intrigued by why the scales are so huge in the Yolo-FastestV2 implementation... that's why I came here to check whether your implementation had some intermediate scaling or not.

Why is the sum not equal to 1 (or 100) in any of the cases?

0.05 + 0.5 + 0.4 = 0.95
3.2 + 32 + 25.6 = 60.8
3.2 + 64 + 32 = 99.2

I was thinking of these coefficients as weights that force the network to learn more about a specific task (iou, cls or obj), but it seems each one is a scaling factor applied to its loss independently?
It shouldn't be equal to 1 or 100. You can use any values. Instead of the current values, you can use values rescaled so that iou_normalizer + cls_normalizer + obj_normalizer = 1.
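A minimal sketch of that point, assuming the (0.05, 0.5, 0.4) values quoted earlier map to iou/cls/obj in that order: dividing each normalizer by their sum makes them add up to 1 while leaving the relative weighting of the loss terms unchanged.

```python
# Rescale the normalizers so they sum to 1; the ratios between them
# (and hence the relative task weighting) are preserved.
scales = {"iou_normalizer": 0.05, "cls_normalizer": 0.5, "obj_normalizer": 0.4}

total = sum(scales.values())                        # 0.95
normalized = {k: v / total for k, v in scales.items()}

print(normalized)
print(sum(normalized.values()))                     # 1.0 (up to float rounding)

# The ratio between any two terms is unchanged by the rescaling:
assert abs(scales["iou_normalizer"] / scales["cls_normalizer"]
           - normalized["iou_normalizer"] / normalized["cls_normalizer"]) < 1e-12
```

The only thing the overall magnitude changes is the size of the gradients, which interacts with the learning rate rather than with the balance between tasks.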
@AlexeyAB, here is a discussion of how scaling the loss, the learning rate, or both affect each other: https://stats.stackexchange.com/a/395443
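The core point of that answer can be demonstrated in a few lines; this is a minimal sketch (my own, not from either repository) showing that for plain SGD, multiplying the loss by a constant k is equivalent to multiplying the learning rate by k:

```python
import torch

# For plain SGD the gradient is linear in the loss scale, so scaling
# the loss by k and scaling the learning rate by k give the same update.
w1 = torch.tensor([1.0, -2.0], requires_grad=True)
w2 = w1.clone().detach().requires_grad_(True)
x = torch.tensor([0.5, 1.5])
k, lr = 64.0, 0.01

# Case 1: loss scaled by k, learning rate lr.
loss1 = k * (w1 * x).sum() ** 2
loss1.backward()
with torch.no_grad():
    w1 -= lr * w1.grad

# Case 2: unscaled loss, learning rate k * lr.
loss2 = (w2 * x).sum() ** 2
loss2.backward()
with torch.no_grad():
    w2 -= (k * lr) * w2.grad

print(torch.allclose(w1, w2))  # True -- identical parameter updates
```

Note that this exact equivalence holds only for vanilla SGD; with momentum or adaptive optimizers such as Adam, the loss scale and the learning rate are no longer interchangeable, which may be one reason the huge Yolo-FastestV2 scales behave sanely in their training setup.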