
How should self.dct_h and self.dct_w be set? #10

Closed
XFR1998 opened this issue Mar 15, 2021 · 6 comments

Comments

@XFR1998

XFR1998 commented Mar 15, 2021

The class MultiSpectralAttentionLayer contains the following part:
if h != self.dct_h or w != self.dct_w:
x_pooled = torch.nn.functional.adaptive_avg_pool2d(x, (self.dct_h, self.dct_w))
# If you have concerns about one-line-change, don't worry. :)
# In the ImageNet models, this line will never be triggered.
# This is for compatibility in instance segmentation and object detection.

If my task is object detection, how should I set self.dct_h and self.dct_w?

@cfzd
Owner

cfzd commented Mar 16, 2021

@XFR1998
You don't need to set these; just use the defaults. Taking ResNet as an example, the default dct_h values for its four stages are 56, 28, 14, and 7, and dct_w is the same. See this part of the code:

c2wh = dict([(64,56), (128,28), (256,14) ,(512,7)])

self.att = MultiSpectralAttentionLayer(planes * 4, c2wh[planes], c2wh[planes], reduction=reduction, freq_sel_method = 'top16')
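A minimal sketch of how this lookup could be wrapped for reuse. The helper name `dct_size_for` and the fallback value of 7 (for channel counts not in the table) are illustrative assumptions, not part of the repository:

```python
# Stage channel count -> feature-map side length, as in the ResNet defaults above.
c2wh = dict([(64, 56), (128, 28), (256, 14), (512, 7)])

def dct_size_for(planes, default=7):
    """Return (dct_h, dct_w) for a stage with `planes` base channels.

    For channel counts not in the table, fall back to `default`;
    the adaptive_avg_pool2d branch in MultiSpectralAttentionLayer
    then pools any mismatched feature map down to that size.
    """
    side = c2wh.get(planes, default)
    return side, side
```

With this helper, `dct_size_for(64)` gives `(56, 56)` for the first ResNet stage, while an unlisted channel count such as 96 falls back to `(7, 7)`.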

@XFR1998
Author

XFR1998 commented Mar 16, 2021

What if I use a different network instead of ResNet? How should I set these values for my case?

@XFR1998
Author

XFR1998 commented Mar 16, 2021

It looks like you set them according to the number of channels, right? How should I modify that? Thanks.

@cfzd
Owner

cfzd commented Mar 16, 2021

In that case I think you can simply set them all to 7.

@XFR1998
Author

XFR1998 commented Mar 16, 2021

In that case I think you can simply set them all to 7.

OK, thanks for the guidance!

@cfzd cfzd closed this as completed Mar 24, 2021
@Max-Well-Wang

In that case I think you can simply set them all to 7.

OK, thanks for the guidance!

I think you instead need to redefine c2wh = dict([(64,56), (128,28), (256,14), (512,7)]), changing it according to your network's channel counts and their corresponding feature-map sizes.
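Following that suggestion, here is a hedged sketch of rebuilding c2wh for another backbone. The helper name `build_c2wh`, the input size, and the assumption that each stage halves the spatial resolution are all illustrative, not taken from any specific network:

```python
def build_c2wh(stage_channels, input_size=224, stem_stride=4):
    """Map each stage's channel count to its feature-map side length,
    assuming the stem downsamples by `stem_stride` and each subsequent
    stage halves the spatial resolution."""
    c2wh = {}
    side = input_size // stem_stride  # e.g. 224 -> 56 after the stem
    for channels in stage_channels:
        c2wh[channels] = side
        side //= 2
    return c2wh

# For ResNet-style stages this reproduces the dict quoted above:
# build_c2wh([64, 128, 256, 512]) -> {64: 56, 128: 28, 256: 14, 512: 7}
```

For a backbone with different stage channels, pass its own channel list and input resolution; if a stage's actual feature-map size differs, the adaptive_avg_pool2d fallback in MultiSpectralAttentionLayer handles the mismatch.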
