Added yolov4_new.cfg
AlexeyAB committed Oct 30, 2021
1 parent 9d40b61 commit 359001d
Showing 2 changed files with 2,352 additions and 0 deletions.

7 comments on commit 359001d

@Grabber

@AlexeyAB, where did the values for iou_normalizer, cls_normalizer and obj_normalizer come from? I found another implementation with very large scaling:

lbox *= 3.2   # box (IoU) regression loss
lobj *= 64    # objectness loss
lcls *= 32    # classification loss
loss = lbox + lobj + lcls
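
For reference, a minimal sketch of the weighted-sum pattern under discussion, using darknet-style normalizer names for the same three terms; the function and the example values below are illustrative assumptions, not code from either repository:

def weighted_total_loss(l_box, l_obj, l_cls, iou_normalizer, obj_normalizer, cls_normalizer):
    # Each normalizer scales its own loss term independently; the scaled terms are then summed.
    return iou_normalizer * l_box + obj_normalizer * l_obj + cls_normalizer * l_cls

# Example with the yolov4_new.cfg normalizers and made-up per-term losses:
loss = weighted_total_loss(l_box=0.8, l_obj=0.3, l_cls=0.5,
                           iou_normalizer=0.05, obj_normalizer=0.4, cls_normalizer=0.5)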

@AlexeyAB

@Grabber

I found another implementation with very large scaling:

Where did you find it?

@Grabber

@AlexeyAB

iou_normalizer=0.05
cls_normalizer=0.5
obj_normalizer=0.4
learning_rate=0.0013

This is roughly equivalent to (all normalizers multiplied by 64, the learning rate divided by 64):

iou_normalizer=3.2
cls_normalizer=32
obj_normalizer=25.6
learning_rate=0.00002

While they use: https://github.com/dog-qiuqiu/Yolo-FastestV2/blob/b27b667a8c6e79e8003d9265cfecaa9a40e4bc2e/utils/loss.py#L203

iou_normalizer=3.2
cls_normalizer=64
obj_normalizer=32
learning_rate=0.001

So the ratios are similar; only the LR is much higher, but maybe they also apply some additional scaling.
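
A rough sketch of why multiplying all normalizers by a constant k while dividing the learning rate by k gives the same plain-SGD update (illustrative only; it ignores momentum, burn-in and any other scaling the real trainers apply):

def sgd_step(w, lr, normalizers, grads):
    # Gradient of a weighted-sum loss: sum of per-term gradients, each scaled by its normalizer.
    total_grad = sum(n * g for n, g in zip(normalizers, grads))
    return w - lr * total_grad

grads = (0.7, -1.3, 0.4)  # made-up per-term gradients (iou, cls, obj)

a = sgd_step(1.0, 0.0013, (0.05, 0.5, 0.4), grads)                      # yolov4_new.cfg values
b = sgd_step(1.0, 0.0013 / 64, (0.05 * 64, 0.5 * 64, 0.4 * 64), grads)  # everything scaled by 64
print(a, b)  # identical up to floating-point error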

@Grabber commented on 359001d Oct 31, 2021


@AlexeyAB, yes, the ratios are equal, except the obj_normalizer, which is 20% smaller. But I'm still intrigued as to why the scales are so huge in the Yolo-FastestV2 implementation... that's why I came here to check whether your implementation has some intermediate scaling or not.

Why is the sum not equal to 1 (or 100) in any of these cases?

0.05 + 0.5 + 0.4 = 0.95
3.2 + 32 + 25.6 = 60.8
3.2 + 64 + 32 = 99.2

I was thinking of these coefficients as weights that force the network to learn more about a specific task (iou, cls or obj), but it seems each one is just a scaling factor applied to its own loss term independently?

@AlexeyAB

Why is the sum not equal to 1 (or 100) in any of these cases?

It doesn't have to be equal to 1 or 100. You can use any values.

Instead of

iou_normalizer=0.05
cls_normalizer=0.5
obj_normalizer=0.4
learning_rate=0.0013

you can use an equivalent configuration where iou_normalizer + cls_normalizer + obj_normalizer = 1 (each normalizer divided by 0.95 and the learning rate multiplied by 0.95):

iou_normalizer=0.05263
cls_normalizer=0.5263
obj_normalizer=0.421
learning_rate=0.001235
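
A quick arithmetic check of the rescaling above (assuming the same simple weighted-sum-loss, plain-SGD picture as in the earlier sketch): dividing each normalizer by their sum 0.95 and multiplying the learning rate by 0.95 keeps every learning_rate * normalizer product, and hence the effective step per loss term, roughly unchanged.

s = 0.05 + 0.5 + 0.4                       # 0.95
print(0.05 / s, 0.5 / s, 0.4 / s)          # ~0.05263, ~0.5263, ~0.421
print(0.0013 * s)                          # ~0.001235
# Effective per-term step size, e.g. for the IoU term, stays the same:
print(0.0013 * 0.05, 0.001235 * 0.05263)   # both ~6.5e-05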

@Grabber commented on 359001d Nov 1, 2021


@AlexeyAB, here is a discussion of how scaling the loss, the learning rate, or both affects training: https://stats.stackexchange.com/a/395443
