
Does the QP module back-propagate gradient? #13

Closed
smallbox120 opened this issue Aug 14, 2020 · 1 comment

smallbox120 commented Aug 14, 2020

In file Network.py, line 110:

        for i in range(num_query):
            for j in range(num_proto):
                _, flow = emd_inference_opencv(1 - similarity_map[i, j, :, :], weight_1[i, j, :], weight_2[j, i, :])
                similarity_map[i, j, :, :] = (similarity_map[i, j, :, :]) * torch.from_numpy(flow).cuda()

        temperature = (self.args.temperature / num_node)
        logitis = similarity_map.sum(-1).sum(-1) * temperature

The calculation of the flow, i.e., the best match between features, is performed during the forward pass that computes the logits and the loss. But does back-propagation go through the 'emd_inference_opencv' module? If I understand correctly, this module runs on the CPU with NumPy, so no gradient is tracked through it.
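This can be verified directly in PyTorch. The sketch below uses a hypothetical stand-in for `emd_inference_opencv` (the placeholder flow values are made up for illustration); the relevant behavior is that any solver converting a tensor to NumPy detaches its output from the autograd graph, exactly as the OpenCV-based solver does:

```python
import torch

# Hypothetical stand-in for emd_inference_opencv: any solver that converts
# a tensor to NumPy detaches the result from the autograd graph.
def emd_inference_numpy(cost_matrix):
    # .detach().cpu().numpy() leaves the graph; OpenCV's EMD behaves the same.
    cost_np = cost_matrix.detach().cpu().numpy()
    flow = 1.0 / cost_np.size + 0.0 * cost_np  # placeholder uniform "flow"
    return flow

similarity = torch.rand(3, 3, requires_grad=True)
flow = torch.from_numpy(emd_inference_numpy(1 - similarity))

# `flow` is a fresh leaf tensor: gradients reach `similarity` through the
# elementwise product, but NOT through the solver that produced `flow`.
weighted = similarity * flow
weighted.sum().backward()
print(flow.requires_grad)           # False: solver output carries no graph
print(similarity.grad is not None)  # True: gradient reaches similarity via the product
```

So the flow is treated as a constant during back-propagation: the loss still has a gradient with respect to `similarity_map`, but not through the matching step itself.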

icoz69 (Owner) commented Aug 14, 2020

Correct. When you use the OpenCV solver, the gradients through the solver are omitted. We have tips on this in the README file.

icoz69 closed this as completed Aug 17, 2020