Hello, this is impressive work and it's a pleasure to read your code. I have a question about a detail in the attention module. In the paper, the BEV feature is added to the attended RV feature once, but in the code below the BEV feature is added twice: once inside ConvAttentionLayer_Decouple and once in self.residual_add. From my point of view, it should only be added once, in the second place. Is this a deliberate design choice, or was it experimentally found to work better? Looking forward to your reply!
temp_view = view + r  # inside ConvAttentionLayer_Decouple: BEV feature added to the attended RV feature (first addition)
bev_output_feat_map = self.residual_add(bev_feat_map, bev_atten_output, bev_kernel_size).contiguous()  # BEV feature added again via the residual connection (second addition)
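For reference, here is a minimal sketch of the structure being asked about. The class and method names (`ConvAttentionLayer_Decouple`, `residual_add`) follow the snippet above, but the surrounding module code, the placeholder attention computation, and the simplified `residual_add` signature are assumptions for illustration, not the repository's actual implementation. As written, the BEV feature effectively contributes twice to the output, which is the core of the question:

```python
import torch
import torch.nn as nn

class ConvAttentionLayer_Decouple(nn.Module):
    """Sketch: cross-view attention where the query view is added back
    to the attention result inside the layer (first addition)."""
    def __init__(self, channels):
        super().__init__()
        # Stand-in for the real attention computation over RV features
        self.to_attn = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, view, rv_feat):
        r = self.to_attn(rv_feat)   # attended RV feature (placeholder)
        temp_view = view + r        # first addition: BEV + attended RV
        return temp_view

class FusionBlock(nn.Module):
    """Sketch of the call site where the BEV map is added a second time."""
    def __init__(self, channels):
        super().__init__()
        self.attn = ConvAttentionLayer_Decouple(channels)

    def residual_add(self, bev_feat_map, bev_atten_output):
        # Second addition: plain residual connection. The real residual_add
        # also takes a kernel-size argument; it is omitted in this sketch.
        return bev_feat_map + bev_atten_output

    def forward(self, bev_feat_map, rv_feat_map):
        # bev_atten_output already contains one BEV addition from the layer
        bev_atten_output = self.attn(bev_feat_map, rv_feat_map)
        # Net effect: 2 * bev_feat_map + attended RV feature
        return self.residual_add(bev_feat_map, bev_atten_output).contiguous()
```

Under this reading, the output is roughly `2 * bev_feat_map + attended_rv`, whereas the paper's formulation suggests a single BEV term; hence the question of whether the extra addition is intentional.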