Brilliant work and thanks for the open source code.
I'm now reading the paper and code about the DBG model and I have a question about the PFG layer.
According to the paper and the code, the parameters w_l and w_r for the PFG layer's output at location (t_s, t_e, n, c), which correspond to w, h, t, c in the code if I have understood correctly, can be computed directly from formulas (1)-(4) in the "Proposal feature generation layer" section. But both the paper and the code present this layer as a trainable layer. So will w_l and w_r be updated during the backward pass? They look to me like fixed variables rather than trainable parameters, and I'm confused about this. Could you please explain? Thank you very much!
Thanks for your attention!
There are no trainable parameters in the PFG layer. You can treat the PFG layer as a bilinear sampling operation: the parameters w_l and w_r are computed by bilinear interpolation.
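To illustrate the point, here is a minimal sketch (not the repo's actual implementation) of 1D linear sampling along the temporal axis: the weights w_l and w_r are fixed functions of the sampling position, so there is nothing for backpropagation to update in them.

```python
import numpy as np

def sample_temporal(features, t):
    """Interpolate a temporal feature map at a fractional position t.

    features: array of shape (T, C).
    Hedged sketch of the idea behind the PFG layer's w_l / w_r weights;
    the function name and shapes are illustrative, not from the DBG code.
    """
    t_left = int(np.floor(t))
    t_right = min(t_left + 1, features.shape[0] - 1)
    # The weights depend only on t, not on any learned parameter.
    w_r = t - t_left
    w_l = 1.0 - w_r
    return w_l * features[t_left] + w_r * features[t_right]

# Toy example: 4 time steps, 1 channel
feat = np.array([[0.0], [2.0], [4.0], [6.0]])
print(sample_temporal(feat, 1.5))  # midway between 2.0 and 4.0 -> [3.0]
```

Gradients still flow *through* the sampled features to earlier layers; it is only the interpolation weights themselves that are not learned.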
Thanks for your reply. I can get it! Thank you so much!