This repository has been archived by the owner on Dec 1, 2021. It is now read-only.
Recently I read some articles that use a triangular (cyclical) learning rate.
When I tested it in my local environment, the triangular learning rate tended to achieve better accuracy than our current two-step decay, even when I added a one-epoch warm-up to the latter manually.
On the other hand, we found a problem with our current learning rate schedules: we cannot train the wider_face object detection network well. More precisely, training runs, but the mAP is poor (roughly 40%, which is worse than the 55% mAP reached with hand-tuned settings).
I'd like to run some experiments with the triangular learning rate and add the best configuration as our default. Unfortunately, I don't have enough time, so I'm just leaving this comment 😢
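For anyone picking this up, here is a minimal sketch of a triangular schedule in plain Python (in the style of Smith's cyclical learning rates). The hyperparameters `base_lr`, `max_lr`, and `step_size` are illustrative placeholders, not values tuned for this repo:

```python
def triangular_lr(step, base_lr=1e-4, max_lr=1e-2, step_size=2000):
    """Triangular cyclical learning rate.

    Ramps linearly from base_lr to max_lr over step_size steps,
    then back down to base_lr, repeating every 2 * step_size steps.
    All hyperparameters here are illustrative, not tuned.
    """
    cycle = step // (2 * step_size)
    # x goes 1 -> 0 -> 1 within each cycle; lr peaks when x == 0
    x = abs(step / step_size - 2 * cycle - 1)
    return base_lr + (max_lr - base_lr) * max(0.0, 1.0 - x)
```

A warm-up falls out of this shape for free: the first half-cycle is already a linear ramp from `base_lr` up to `max_lr`, which may explain why it compared well even against two-step decay with a manual one-epoch warm-up.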