
why flip twice? #14

Closed
lxy5513 opened this issue May 6, 2021 · 1 comment

Comments

@lxy5513

lxy5513 commented May 6, 2021

Thanks for your great work!
Reading the code, I found a part I can't understand, shown below:
[screenshot: the lines that flip append_loc and append_conf]
Why does it need to flip append_loc and append_conf?


By the way, we compare loc and the flipped loc in

consistency_loc_loss_x = torch.mean(torch.pow(loc_sampled[:, 0] + loc_sampled_flip[:, 0], exponent=2))
...

But how do we guarantee that each item in loc_sampled matches the corresponding item in loc_sampled_flip correctly?

Hoping for your kind response, many thanks.

@soo89
Owner

soo89 commented May 10, 2021

Thank you for your interest.

f^(p,r,c,d)_cls(I) should match f^(p,r,C-c+1,d)_cls(I'), where I' is the horizontally flipped input.
Line 160, 'append_conf = flip(append_conf, 2)', is there to realize that matching.
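
For illustration, here is a minimal sketch of that matching (the tensor shapes and anchor layout are assumptions, not the repository's exact code): if conf_flip stands in for the class scores produced on the flipped image, flipping it back along the column axis (dim 2) puts column c of the original image and column C-c+1 of the flipped image at the same index.

import torch

B, R, C, D = 2, 3, 5, 4                        # assumed layout: batch, rows, columns, anchors
conf = torch.randn(B, R, C, D, 21)             # class scores for the original image I
conf_flip = torch.flip(conf, dims=[2])         # stand-in for the scores of the flipped image
append_conf = torch.flip(conf_flip, dims=[2])  # flip back along the column axis (as on L160)
assert torch.allclose(conf, append_conf)       # matched positions now share the same index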

In our paper, we adopt not only a classification consistency loss but also a regression consistency loss.

Therefore, we also flip the loc outputs.

f^(p,r,c,d)_loc(I) should match f^(p,r,C-c+1,d)_loc(I').
Line 159, 'append_loc = flip(append_loc, 2)', is there to realize that matching.
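
Similarly for localization, a minimal sketch (the per-anchor (x, y, w, h) channel order here is an assumption): under a horizontal flip the x-offset changes sign, so after flipping the outputs back, the x-offsets of matched positions are negatives of each other, which is why the consistency term penalizes their sum.

import torch

B, R, C, D = 2, 3, 5, 4                        # assumed layout: batch, rows, columns, anchors
loc = torch.randn(B, R, C, D, 4)               # (x, y, w, h) offsets for the original image I
loc_flip = torch.flip(loc, dims=[2]).clone()   # stand-in for the outputs of the flipped image
loc_flip[..., 0] = -loc_flip[..., 0]           # x-offsets change sign under a horizontal flip
append_loc = torch.flip(loc_flip, dims=[2])    # flip back along the column axis (as on L159)

# matched x-offsets cancel, so their sum is ~0
consistency_loc_loss_x = torch.mean(torch.pow(loc[..., 0] + append_loc[..., 0], 2))
print(consistency_loc_loss_x)                  # 0 in this synthetic setup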

@soo89 soo89 closed this as completed May 14, 2021