Hi, thanks for sharing this nice work!
I have a question about the Lq loss. Why does minimizing the Lq loss allow the query frame objective to encapsulate the constraints mentioned in Section 3.5 of the paper?
Hi, in the query loss, by learning the kernel operator R_theta (through the SGD-based minimization of the final network training loss, not during the inner optimization), we essentially learn the objective that we want to impose on the correspondence volume between the filter map w and the query feature map. In particular, this loss makes it possible to impose smoothness priors on the correlation output, for instance by learning a differential operator. This is similar to traditional unsupervised optical flow, which usually relies on a smoothness loss according to which the gradient of the flow field should be minimized. Here, if R_theta learns differential operators, finding the w that minimizes the query objective amounts to finding the w for which the gradients of the resulting correspondence volume are minimized. Hope that helps!
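To make that mechanism concrete, here is a minimal PyTorch sketch, not the repository's actual implementation: the class name `QueryObjective`, the tensor shapes, and the plain-correlation model of the correspondence volume are all simplifying assumptions. It only illustrates the idea that a learned convolution R_theta applied to the correlation output, squared and minimized over w, acts as a learned regularizer (a smoothness prior when R_theta resembles a differential operator).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QueryObjective(nn.Module):
    """Toy query-frame objective L_q.

    R_theta is a small learnable convolution applied to the
    correspondence volume. If the outer training loop drives its
    kernel toward a differential operator (e.g. a Laplacian),
    then minimizing L_q with respect to the filter map w penalizes
    the spatial gradients of the correlation output, i.e. a learned
    smoothness prior, much like the smoothness loss used in
    unsupervised optical flow.
    """

    def __init__(self, num_filters):
        super().__init__()
        # R_theta is trained with the rest of the network (outer SGD)
        # and held fixed while w is optimized in the inner loop.
        self.r_theta = nn.Conv2d(num_filters, num_filters,
                                 kernel_size=3, padding=1, bias=False)

    def forward(self, w, query_feat):
        # Correspondence volume between the filter map w and the
        # query features, modeled here as a plain correlation.
        corr = F.conv2d(query_feat, w, padding=w.shape[-1] // 2)
        # L_q = ||R_theta(corr)||^2: small when corr is "regular"
        # under whatever operator R_theta has learned.
        return (self.r_theta(corr) ** 2).mean()

# Inner loop: a few gradient steps on w with R_theta held fixed.
B, C, H, W, K = 1, 16, 32, 32, 8
feat = torch.randn(B, C, H, W)
w = torch.randn(K, C, 3, 3, requires_grad=True)
objective = QueryObjective(num_filters=K)
for _ in range(5):
    loss = objective(w, feat)
    grad, = torch.autograd.grad(loss, w)
    w = (w - 0.1 * grad).detach().requires_grad_(True)
```

Note that in this toy version w = 0 would trivially minimize L_q; in the actual method the query objective is combined with the reference-frame objective, which rules out that degenerate solution. The sketch only isolates how minimizing over w suppresses whatever structure R_theta responds to in the correspondence volume.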