Relevance embedding #42
It is OK to swap the parameter positions of lrsr_lv3_unfold and refsr_lv3_unfold. You just need to permute lrsr_lv3_unfold instead, in this line: Line 27 in 2836600.
Then you can apply your code above and get equal results.
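A minimal sketch of that equivalence (the toy shapes and random tensors here are my own assumptions; only the variable names come from the repo):

```python
import torch
import torch.nn.functional as F

# Toy shapes: N batches, C*k*k patch-vector length, H*W LR and Hr*Wr Ref positions.
N, Ckk, HW, HrWr = 2, 9, 16, 20
lrsr_lv3_unfold = torch.randn(N, Ckk, HW)     # unfolded LR-SR patches
refsr_lv3_unfold = torch.randn(N, Ckk, HrWr)  # unfolded Ref-SR patches

# Ordering as in the repo: permute the reference side, R is [N, Hr*Wr, H*W].
R_a = torch.bmm(F.normalize(refsr_lv3_unfold, dim=1).permute(0, 2, 1),
                F.normalize(lrsr_lv3_unfold, dim=1))

# Swapped ordering: permute the LR side instead, R is [N, H*W, Hr*Wr].
R_b = torch.bmm(F.normalize(lrsr_lv3_unfold, dim=1).permute(0, 2, 1),
                F.normalize(refsr_lv3_unfold, dim=1))

# The two relevance matrices are transposes of each other.
assert torch.allclose(R_a, R_b.transpose(1, 2), atol=1e-6)
```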
Sorry, due to my limited understanding I do not clearly follow your point.
I mean the key point is to get "R_lv3_star_arg" and "R_lv3_star" in Line 33 in 2836600.
Therefore, how you permute the tensor is not important, but you need to adjust the "dim" argument to get the correct "R_lv3_star_arg" and "R_lv3_star" in this line: Line 33 in 2836600.
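To illustrate the dim adjustment concretely, a self-contained sketch with a toy relevance matrix (shapes are illustrative assumptions):

```python
import torch

# Toy relevance matrix: R[n, i, j] = score between reference patch i
# and LR position j, shape [N, Hr*Wr, H*W] as in the original layout.
R = torch.randn(2, 20, 16)

# Original layout (Line 33): max over the reference axis, dim=1.
R_star, R_star_arg = torch.max(R, dim=1)                      # both [N, H*W]

# Swapped layout [N, H*W, Hr*Wr]: the reference axis moved to dim=2,
# so "dim" must be adjusted to recover the same star tensors.
R_star_t, R_star_arg_t = torch.max(R.transpose(1, 2), dim=2)

assert torch.equal(R_star, R_star_t)
assert torch.equal(R_star_arg, R_star_arg_t)
```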
I see, thanks for the detailed explanation!
TTSR/model/SearchTransfer.py, Lines 32 to 33 in 2836600:
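For context, those two lines compute the relevance matrix and its per-position maximum. A hedged reconstruction from the names and shapes discussed in this thread (the exact source may differ):

```python
# Normalized patch features: refsr_lv3_unfold is [N, Hr*Wr, C*k*k] after
# permuting, lrsr_lv3_unfold is [N, C*k*k, H*W].
R_lv3 = torch.bmm(refsr_lv3_unfold, lrsr_lv3_unfold)    # [N, Hr*Wr, H*W]
# For each LR position: best-matching reference patch score and index.
R_lv3_star, R_lv3_star_arg = torch.max(R_lv3, dim=1)    # both [N, H*W]
```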
As I understand Equation 4 in the main paper, your relevance matrix computes a normalized inner product,

$$r_{i,j} = \left\langle \frac{q_i}{\lVert q_i \rVert},\ \frac{k_j}{\lVert k_j \rVert} \right\rangle,$$

where the query $q_i$ comes from the up-sampled low-resolution image and the key $k_j$ from the sequentially down/up-sampled reference image. My understanding follows the usual transformer formulation (see the sketch after this question).
Q1. Can you explain why your code is the opposite? Usually, a transformer computes the scores as $\mathrm{scores} = Q K^{\top}$, e.g.
https://github.com/jadore801120/attention-is-all-you-need-pytorch/blob/132907dd272e2cc92e3c10e6c4e783a87ff8893d/transformer/Modules.py#L17
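A minimal sketch of the "usual" score ordering the question refers to, with L2 normalization so the product equals the normalized inner product of Equation 4 (the shapes and names here are illustrative assumptions, not the repo's code):

```python
import torch
import torch.nn.functional as F

N, HW, HrWr, d = 2, 16, 20, 9
Q = torch.randn(N, HW, d)    # queries from the up-sampled LR image
K = torch.randn(N, HrWr, d)  # keys from the down/up-sampled Ref image

# Usual transformer ordering: scores = Q K^T, laid out as [N, H*W, Hr*Wr].
scores = torch.bmm(F.normalize(Q, dim=2),
                   F.normalize(K, dim=2).transpose(1, 2))

# TTSR computes the product in the opposite order, yielding the transpose
# [N, Hr*Wr, H*W]; the two differ only in layout, and the subsequent max
# over the reference axis gives the same result either way.
scores_ttsr_style = torch.bmm(F.normalize(K, dim=2),
                              F.normalize(Q, dim=2).transpose(1, 2))
assert torch.allclose(scores, scores_ttsr_style.transpose(1, 2), atol=1e-6)
```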