about test #35
See #9 for a discussion.
But this still doesn't solve my problem. I still don't understand what the mask added when input_resolution != x_size is, or why this mask is needed. SwinIR/models/network_swinir.py Line 262 in b28be97
Thank you very much.
This line is for training. We initialize the mask here: SwinIR/models/network_swinir.py Line 259 in b28be97
This line is for testing. We calculate the mask for a given testing image. Note that the testing image is generally not 64x64. SwinIR/models/network_swinir.py Line 261 in b28be97
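To make the mask calculation concrete, here is a rough pure-Python sketch of its first step: labelling every pixel with a region id. The function name and plain-list representation are mine; the real `calculate_mask` builds the same map with tensor slices, then windows it and turns id differences into -100/0 attention biases. Pixels in the same window with different ids came from different sides of the shift boundary, so attention between them is masked out.

```python
def region_index_map(h, w, window_size, shift_size):
    """Label each pixel of an h x w image with a region id.

    After the cyclic shift, two pixels inside the same window may
    attend to each other only if they carry the same id; pairs with
    different ids get a large negative bias in the real code.
    """
    # Three slices per axis, mirroring the (0, -window_size,
    # -shift_size) split in the official implementation.
    spans = [slice(0, -window_size),
             slice(-window_size, -shift_size),
             slice(-shift_size, None)]
    mask = [[0] * w for _ in range(h)]
    cnt = 0
    for hs in spans:
        for ws in spans:
            for i in range(h)[hs]:
                for j in range(w)[ws]:
                    mask[i][j] = cnt
            cnt += 1
    return mask
```

Because the map depends on `h` and `w`, the mask stored for the training resolution cannot be reused for an arbitrary test image, which is why it is recomputed per input size at test time.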
When testing, what is the meaning of the mask? Thank you.
Similar to training, we pad the image after shifting it. You can try not using padding during testing and share the results with me. Thank you.
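The padding above just extends a test image so both sides become multiples of the window size. A minimal sketch of the size computation (the helper name is mine; it mirrors the mod-based padding SwinIR applies before windowing):

```python
def pad_to_window_multiple(h, w, window_size):
    """Bottom/right padding amounts that round h and w up to the
    nearest multiples of window_size (0 if already divisible)."""
    pad_h = (window_size - h % window_size) % window_size
    pad_w = (window_size - w % window_size) % window_size
    return pad_h, pad_w
```

In the actual model the image is then padded by these amounts (e.g. with reflection padding) so that it partitions cleanly into window_size x window_size windows.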
I had the same issue as well. Could you please elaborate on why we require attn_mask?
Yes, attn_mask is only used in the transformer blocks that operate on shifted windows. Imagine that for a 64x64 input, after shifting 4 pixels towards the top-left corner (by a cyclic shift), the last 4 rows and columns hold pixels wrapped around from the opposite border; the mask stops them from attending to pixels that were never their neighbours.
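The shift in question is a cyclic (wrap-around) shift, equivalent in PyTorch to `torch.roll(x, shifts=(-s, -s), dims=(1, 2))`. A pure-Python sketch on a plain 2-D list, just to show the wrap-around behaviour:

```python
def cyclic_shift(grid, shift):
    """Shift a 2-D grid `shift` pixels toward the top-left corner,
    wrapping the first `shift` rows/columns around to the
    bottom/right edge."""
    rows = grid[shift:] + grid[:shift]            # rows move up, top wraps to bottom
    return [row[shift:] + row[:shift] for row in rows]  # columns move left, wrap right
```

After the shift, the bottom rows and rightmost columns contain content from the opposite side of the image, which is exactly the region the attention mask has to isolate.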
Thank you for the response. This is an interesting point from an implementation perspective. I am wondering why the transformer should operate only on [56:60,:] and [:,56:60], but not on [0:4,:] and [:,0:4]? Either:
In the current implementation, the top 4 rows and last 4 columns are operated on only 50% of the time.
I didn't test these two cases, but I guess the first case may lead to slightly worse performance (this part of the data is discarded), while the second one may lead to slightly better performance (making full use of this part of the data). The current implementation is just for simplicity and efficiency.
Feel free to reopen it if you have more questions.
Can you please explain why we need a mask for testing when the input resolution is not 48x48?
SwinIR needs a mask whether the input resolution is 48x48 or not. The difference is that for 48x48 images we use a mask that was pre-calculated and stored in advance, while for other sizes it is computed on the fly. For the second concern, what is your error in testing? If you don't need a mask in your own attention, there is no need for masking in testing either. Sorry that I cannot give more help, because I don't know what your model is.
I am using this model: efficient attention (https://github.com/cmsflash/efficient-attention).
It seems that the position encoding is not very important for SR, from my experience. You can try removing it and comparing the results. Note that there are two problems you need to address for efficient attention (https://github.com/cmsflash/efficient-attention): 1) Applying softmax to q and k separately may reduce the representation power of the attention matrix significantly, because the rank of the matrix is smaller. 2) It may be tricky to apply masks to it (see cmsflash/efficient-attention#4).
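For reference, a toy comparison of the two factorizations being discussed: standard attention computes softmax(QK^T)V, while efficient attention (Shen et al.) applies softmax to Q per row and to K per column, then multiplies as softmax_rows(Q) (softmax_cols(K)^T V). Tiny pure-Python lists stand in for tensors, all names are mine, and scaling factors are omitted for brevity:

```python
import math

def softmax(vec):
    m = max(vec)
    e = [math.exp(x - m) for x in vec]
    s = sum(e)
    return [x / s for x in e]

def standard_attention(q, k, v):
    """out = softmax(Q K^T) V, softmax over key positions per query."""
    out = []
    for qi in q:
        w = softmax([sum(a * b for a, b in zip(qi, kj)) for kj in k])
        out.append([sum(wi * vj[d] for wi, vj in zip(w, v))
                    for d in range(len(v[0]))])
    return out

def efficient_attention(q, k, v):
    """out = softmax_rows(Q) (softmax_cols(K)^T V).

    The implicit attention matrix softmax_rows(Q) softmax_cols(K)^T has
    rank at most d_k, which is the representation-power concern above.
    """
    n, dk, dv = len(q), len(q[0]), len(v[0])
    q_n = [softmax(row) for row in q]                         # per query, over features
    k_cols = [softmax([k[i][d] for i in range(n)])            # per feature, over positions
              for d in range(dk)]
    # context: dk x dv matrix, softmax_cols(K)^T @ V
    ctx = [[sum(k_cols[d][i] * v[i][e] for i in range(n)) for e in range(dv)]
           for d in range(dk)]
    return [[sum(q_n[i][d] * ctx[d][e] for d in range(dk)) for e in range(dv)]
            for i in range(n)]
```

Both outputs are convex combinations of the value rows, but in efficient attention the per-query softmax over keys is gone, which is also why inserting a per-pair mask (as SwinIR's shifted windows require) is not straightforward.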
In the SwinIR model there is an img_size parameter, for example 128, so in the Swin layers input_resolution=(128, 128). Suppose that at test time my input image is not (128, 128). When computing attention there is a check: if self.input_resolution == x_size, else attn_windows = self.attn(x_windows, mask=self.calculate_mask(x_size).to(x.device)). I would like to ask: when the image size does not equal self.input_resolution=(128,128), what is the mask that gets passed in?