
Does the size consistency loss only affect the size residual? #12

Closed
lilanxiao opened this issue Apr 23, 2021 · 2 comments

@lilanxiao

Hi, thank you very much for your nice work.

I have a question about the size consistency loss. The function compute_size_consistency_loss uses the following code to get the size of bounding boxes:

size_class = torch.argmax(end_points['size_scores'], -1)
...
size_base = torch.index_select(mean_size_arr, 0, size_class.view(-1))
...
size = size_base + size_residual

And the consistency loss is calculated with MSE. Since torch.argmax() is non-differentiable, this loss seems to affect only the prediction of the size residual and has no direct influence on the prediction of the size class. From my point of view, the size consistency loss should include an additional KL-divergence term to minimize the difference between the size scores produced by the teacher and the student (like the class consistency loss). But your code doesn't do this and still achieves great performance.
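To make the non-differentiability point concrete, here is a minimal sketch (with made-up shapes and values, assuming 2 size classes with 3-D mean size templates) showing that the gradient of an MSE loss on the assembled size reaches the residual but not the size scores:

```python
import torch
import torch.nn.functional as F

# Hypothetical inputs: 1 proposal, 2 size classes, 3-D size templates.
size_scores = torch.tensor([[0.2, 0.8]], requires_grad=True)
size_residual = torch.zeros(1, 3, requires_grad=True)
mean_size_arr = torch.tensor([[1.0, 1.0, 1.0],
                              [2.0, 2.0, 2.0]])

size_class = torch.argmax(size_scores, -1)                    # hard, non-differentiable selection
size_base = torch.index_select(mean_size_arr, 0, size_class.view(-1))
size = size_base + size_residual

loss = F.mse_loss(size, torch.ones_like(size))
loss.backward()

print(size_residual.grad is not None)  # True: the residual receives a gradient
print(size_scores.grad)                # None: argmax blocks the gradient path
```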

Is this intended behavior? Is there any intuition behind it?

@Na-Z (Owner)

Na-Z commented Apr 25, 2021

Hi, thanks for your interest in our work.

In our implementation, each class has only one size template. In other words, 'size_class_label' and 'sem_cls_label' (i.e., the ground truths of the size class and the semantic class) are the same for any given object, so the predictions of size class and semantic class should be similar. Hence, 'size_residual' has more influence on the size consistency loss computation.

If each class had multiple size templates, I think it would be helpful to add an additional term that minimizes the difference between the size scores of the two networks. If you are interested in trying that out, please let me know the results. :)

@lilanxiao (Author)

Yeah, that makes sense. Thank you for your explanation!

I'm going to close this issue. If I get interesting results, I'll be glad to share them here.
