
Function error #4

Closed
jefferyZhan opened this issue Mar 31, 2021 · 4 comments

Comments

@jefferyZhan

Hi, when I used the EQL loss function, there seems to be an error in the threshold function.
Take LVIS v0.5 as an example:
The shape of pred_class_logits is 1231, including a background class, but the shape of the return value of get_image_count_frequency is 1230, excluding the background class, so an IndexError occurs. Is this a version issue or an actual bug?
It can be fixed by expanding freq_info and setting a background frequency.
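
For reference, a minimal sketch of the mismatch being described (tensor sizes follow LVIS v0.5; the lambda_ threshold and the frequency values are illustrative placeholders, not the repo's actual code):

    import torch

    num_classes = 1230                   # LVIS v0.5 foreground classes
    freq_info = torch.rand(num_classes)  # per-class image frequency, length 1230
    lambda_ = 1.76e-3                    # illustrative tail-class threshold

    # Logits carrying an extra background column break the boolean mask:
    weight = torch.zeros(num_classes + 1)  # shape [1231], like pred_class_logits here
    # weight[freq_info < lambda_] = 1      # IndexError: mask [1230] vs tensor [1231]

    # With logits of shape [num_classes], the mask lines up:
    weight = torch.zeros(num_classes)
    weight[freq_info < lambda_] = 1        # flag rare (tail) classes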

@tztztztztz
Owner

Hi, the shape of pred_class_logits in EQL should be 1230 instead of 1231, since EQL uses a sigmoid loss function and does not include an objectness branch.

Can you give more information about the error you came across? For example, which codebase and config did you use?

@jefferyZhan
Author

I used the faster_rcnn_r50_fpn_1x_coco.py config, the LVIS v0.5 dataset, and mmdetection, and I modified the RoI bbox head to use EQL with the sigmoid function.
Without modifying freq_info, the reported error is:

    weight[self.freq_info < self.lambda_] = 1
    IndexError: The shape of the mask [1230] at index 0 does not match the shape of the indexed tensor [1231] at index 0

After modifying freq_info, all the losses became NaN.
A little bit weird.

@tztztztztz
Owner

To use EQL in your own repo, please keep these points in mind (a combined sketch follows the list).

  1. Change the shape of the fc-cls layer to (num_classes) instead of (num_classes + 1):

    self.fc_cls = nn.Linear(self.cls_last_dim, self.num_classes)

  2. Init parameters with a prior bias to avoid NaN losses:

    if self.use_sigmoid:
        bias_cls = bias_init_with_prob(0.001)
        normal_init(self.fc_cls, std=0.01, bias=bias_cls)

  3. Adapt the activation accordingly for testing:

    scores = F.sigmoid(cls_score)
    dummy_prob = scores.new_zeros((scores.size(0), 1))
    scores = torch.cat([scores, dummy_prob], dim=1)
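
Putting the three changes together, a rough sketch of what such a sigmoid-based classification head might look like (a sketch only, assuming the mmcv helpers bias_init_with_prob and normal_init from the snippets above; the class and method names are hypothetical, and the bbox-regression plumbing of a real mmdetection head is omitted):

    import torch
    import torch.nn as nn
    from mmcv.cnn import bias_init_with_prob, normal_init


    class SigmoidClsHead(nn.Module):
        """Sketch of a classification head adapted for EQL's sigmoid loss."""

        def __init__(self, cls_last_dim, num_classes):
            super().__init__()
            # 1. no background column: num_classes outputs, not num_classes + 1
            self.fc_cls = nn.Linear(cls_last_dim, num_classes)

        def init_weights(self):
            # 2. prior bias keeps initial sigmoid probabilities near 0.001,
            #    preventing the loss from blowing up to NaN early in training
            bias_cls = bias_init_with_prob(0.001)
            normal_init(self.fc_cls, std=0.01, bias=bias_cls)

        def get_det_scores(self, cls_score):
            # 3. sigmoid activation at test time, plus a dummy background
            #    column so downstream post-processing keeps its expected layout
            scores = cls_score.sigmoid()
            dummy_prob = scores.new_zeros((scores.size(0), 1))
            return torch.cat([scores, dummy_prob], dim=1)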

However, I recommend using this repo first to get familiar with the equalization losses.

@jefferyZhan
Author

Thanks a lot~
