Confusion about the Attention Map Generation of distribute branch #5
Comments
Hi @KamInNg @qiulesun You can also refer to the released code:
Hi @EthanZhangYi, thanks for your advice, but I am still confused about how the over-complete map H, whose number of channels is (2H-1)*(2W-1), stands for the relative position information of i and j. Can you help me? Best regards.
Hi @EthanZhangYi, I almost understand this mechanism, but the fact that the number of Δji values directly equals (2H-1)*(2W-1) still bothers me, as it is not made very clear in the ECCV paper. Can you provide a clearer explanation? I would be very grateful for your help. Best regards!
In order to cover the whole image, we take all possible relative positions in the feature map into account. Given each pair of points in the map, we can calculate their relative position. Then, over all pairs of points, the relative height ranges from 1-H to H-1 (2H-1 possible values), and the relative width ranges from 1-W to W-1 (2W-1 possible values). The total number of possible relative positions is therefore (2H-1)*(2W-1).
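To make the counting argument concrete, here is a minimal Python sketch (my own illustration, not from the released code) that enumerates every relative offset between pairs of points in an H×W map and checks that the count equals (2H-1)*(2W-1):

```python
# Sketch: count the distinct relative positions (dh, dw) between
# all ordered pairs of points in an H x W feature map.
H, W = 4, 5  # arbitrary example sizes

offsets = set()
for i in range(H):
    for j in range(W):
        for k in range(H):
            for l in range(W):
                # relative position of point (k, l) w.r.t. point (i, j)
                offsets.add((k - i, l - j))

# dh spans 1-H .. H-1 (2H-1 values), dw spans 1-W .. W-1 (2W-1 values)
assert len(offsets) == (2 * H - 1) * (2 * W - 1)
print(len(offsets))  # 63 for H=4, W=5
```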
@EthanZhangYi Got it! Thank you very much!
Hi @hszhao,
It seems that the collect branch and the distribute branch are the same (as shown in Fig. 3).
Could you show the equation of the distribute branch (like Eq. 9 for the collect branch in the paper)?