Description
RTMDet uses ltrb (left, top, right, bottom distances from an anchor point) as its regression output, and these values should all be non-negative; otherwise, when the ltrb distances are transformed to x1, y1, x2, y2 coordinates, a situation can arise where x2 < x1.
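For context, here is a minimal sketch of the standard distance-to-bbox decoding (modeled on mmdet's `distance2bbox`; the helper below is illustrative, not mmyolo's actual code) showing how a negative distance produces an invalid box:

```python
import torch

# Illustrative decoding of ltrb distances relative to anchor points
# into x1y1x2y2 boxes (modeled on mmdet's distance2bbox).
def distance2bbox(points, ltrb):
    x1 = points[:, 0] - ltrb[:, 0]  # left
    y1 = points[:, 1] - ltrb[:, 1]  # top
    x2 = points[:, 0] + ltrb[:, 2]  # right
    y2 = points[:, 1] + ltrb[:, 3]  # bottom
    return torch.stack([x1, y1, x2, y2], dim=-1)

points = torch.tensor([[10.0, 10.0]])
ltrb = torch.tensor([[2.0, 2.0, -5.0, 3.0]])  # a negative "right" distance
print(distance2bbox(points, ltrb))  # tensor([[8., 8., 5., 13.]]) -> x2 (5) < x1 (8)
```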
mmdet uses `exp` to ensure this; see https://github.com/open-mmlab/mmdetection/blob/cfd5d3a985b0249de009b67d04f37263e11cdf3d/mmdet/models/dense_heads/rtmdet_head.py#L143:

```python
reg_dist = scale(self.rtm_reg(reg_feat).exp()).float() * stride[0]
```

This seems to be missing in mmyolo. `rtm_reg` is initialized with a normal distribution, and no layer such as relu or exp follows it, so how can the outputs be guaranteed to be non-negative, especially in the first few iterations of training?
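As a quick sanity check (my own snippet, not from either repo), `exp` maps any real-valued raw prediction to a strictly positive distance:

```python
import torch

raw = torch.randn(1000) * 0.01   # scale comparable to a freshly initialized head
print((raw < 0).float().mean())  # ~0.5: about half the raw values are negative
print((raw.exp() > 0).all())     # tensor(True): exp output is always positive
```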
mmyolo/mmyolo/models/dense_heads/rtmdet_head.py, lines 148 to 151 in 8c4d9dc:

```python
bias_cls = bias_init_with_prob(0.01)
for rtm_cls, rtm_reg in zip(self.rtm_cls, self.rtm_reg):
    normal_init(rtm_cls, std=0.01, bias=bias_cls)
    normal_init(rtm_reg, std=0.01)
```
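To illustrate the concern (an assumption-laden sketch, not mmyolo code): a conv layer initialized like `rtm_reg` above, with `std=0.01` weights and zero bias (mmcv's `normal_init` default), produces outputs centered on zero, so roughly half of the raw distances come out negative:

```python
import torch
import torch.nn as nn

# Mimic rtm_reg's initialization: normal weights (std=0.01), zero bias.
conv = nn.Conv2d(64, 4, 3, padding=1)
nn.init.normal_(conv.weight, std=0.01)
nn.init.zeros_(conv.bias)

out = conv(torch.randn(1, 64, 8, 8))
print((out < 0).float().mean())  # ~0.5: about half the predicted distances are negative
```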
mmyolo/mmyolo/models/dense_heads/rtmdet_head.py, lines 172 to 185 in 8c4d9dc:

```python
for idx, x in enumerate(feats):
    cls_feat = x
    reg_feat = x

    for cls_layer in self.cls_convs[idx]:
        cls_feat = cls_layer(cls_feat)
    cls_score = self.rtm_cls[idx](cls_feat)

    for reg_layer in self.reg_convs[idx]:
        reg_feat = reg_layer(reg_feat)
    reg_dist = self.rtm_reg[idx](reg_feat)

    cls_scores.append(cls_score)
    bbox_preds.append(reg_dist)
```
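For comparison, here is a hypothetical sketch (my own, not a proposed patch or mmyolo's code) of a regression branch that mirrors mmdet's approach by applying `exp` to the raw output, which guarantees positive distances even with the normal initialization shown above:

```python
import torch
import torch.nn as nn

class TinyRegHead(nn.Module):
    """Toy stand-in for the regression branch, initialized like rtm_reg."""

    def __init__(self, channels=64):
        super().__init__()
        self.rtm_reg = nn.Conv2d(channels, 4, 3, padding=1)
        nn.init.normal_(self.rtm_reg.weight, std=0.01)
        nn.init.zeros_(self.rtm_reg.bias)

    def forward(self, reg_feat, stride=8):
        # exp maps any real value to a positive distance, as in mmdet's
        # rtmdet_head.py; without it, the raw output can be negative.
        return self.rtm_reg(reg_feat).exp() * stride

head = TinyRegHead()
out = head(torch.randn(1, 64, 8, 8))
assert (out > 0).all()  # predicted ltrb distances are strictly positive
```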