Thanks for your great work! While reading the code, I found some code that I can't understand, as below: why does it need to flip append_loc and append_conf?
By the way, we compare loc and the flipped loc in
consistency_loc_loss_x = torch.mean(torch.pow(loc_sampled[:, 0] + loc_sampled_flip[:, 0], exponent=2)) ...
but how do we guarantee that each item in loc_sampled is matched with the correct item in loc_sampled_flip?
Hoping for your kind response, many thanks.
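A minimal sketch of the consistency term quoted above may help make the question concrete (NumPy stands in for PyTorch here; the toy values are illustrative, not from the repo). The key point is that mirroring an image negates the horizontal box offset, so for a consistent detector the matched pair should satisfy loc[:, 0] + loc_flip[:, 0] ~ 0, which is why the x-term uses a sum rather than a difference:

```python
import numpy as np

# Toy (dx, dy) regression outputs: loc_sampled on the original image,
# loc_sampled_flip the matched outputs on the horizontally flipped image.
# A perfectly consistent detector predicts a mirrored dx (opposite sign)
# and an identical dy.
loc_sampled = np.array([[0.3, 0.1], [-0.2, 0.4]])
loc_sampled_flip = np.array([[-0.3, 0.1], [0.2, 0.4]])

# Squared-sum penalty for x (signs should cancel), squared-difference for y.
consistency_loc_loss_x = np.mean((loc_sampled[:, 0] + loc_sampled_flip[:, 0]) ** 2)
consistency_loc_loss_y = np.mean((loc_sampled[:, 1] - loc_sampled_flip[:, 1]) ** 2)

print(consistency_loc_loss_x)  # 0.0 for these perfectly consistent toy values
print(consistency_loc_loss_y)  # 0.0
```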
Thank you for your interest.
f^(p,r,c,d)_cls(I) should match f^(p,r,C-c+1,d)_cls(I); 'L160 : append_conf = flip(append_conf,2)' performs that matching.
In our paper, we adopted not only a classification loss but also a regression loss.
Therefore, we flip the loc outputs as well:
f^(p,r,c,d)_loc(I) should match f^(p,r,C-c+1,d)_loc(I); 'L159 : append_loc = flip(append_loc,2)' performs that matching.