In MPRNet, the Channel Attention Block (CAB) has the following structure:

```python
class CAB(nn.Module):
    def __init__(self, n_feat, kernel_size, reduction, bias, act):
        super(CAB, self).__init__()
        modules_body = []
        # ResBlock
        modules_body.append(conv(n_feat, n_feat, kernel_size, bias=bias))
        modules_body.append(act)
        modules_body.append(conv(n_feat, n_feat, kernel_size, bias=bias))
        ###
        self.CA = CALayer(n_feat, reduction, bias=bias)
        self.body = nn.Sequential(*modules_body)

    def forward(self, x):
        res = self.body(x)
        res = self.CA(res)
        res += x
        return res
```

In the paper, Table 1 states that the ResBlock is replaced by the "Res FFT-Conv Block". I wonder how you integrated the "Res FFT-Conv Block" into the CAB in your experiments. Did you replace only the ResBlock structure (the part between my comments in the code above) with the "Res FFT-Conv Block", or did you replace the whole CAB with it?
Thanks in advance.
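For reference, here is a minimal sketch of what the first interpretation could look like: a Res FFT-Conv style block with a spatial conv branch plus a frequency-domain branch. This is an assumption based on the paper's general description, not the authors' code; the kernel sizes in the frequency branch, the FFT normalization, and the class name `ResFFTConvBlock` are all my own choices for illustration.

```python
import torch
import torch.nn as nn


class ResFFTConvBlock(nn.Module):
    """Sketch of a Res FFT-Conv style block (assumed structure):
    identity + spatial conv branch + frequency-domain conv branch."""

    def __init__(self, n_feat, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        # Spatial branch: conv-ReLU-conv, mirroring the original ResBlock body.
        self.spatial = nn.Sequential(
            nn.Conv2d(n_feat, n_feat, kernel_size, padding=pad),
            nn.ReLU(inplace=True),
            nn.Conv2d(n_feat, n_feat, kernel_size, padding=pad),
        )
        # Frequency branch: 1x1 convs over concatenated real/imag channels
        # (1x1 kernels and 'ortho' normalization are assumptions here).
        self.freq = nn.Sequential(
            nn.Conv2d(2 * n_feat, 2 * n_feat, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(2 * n_feat, 2 * n_feat, kernel_size=1),
        )

    def forward(self, x):
        _, _, h, w = x.shape
        # Transform to the frequency domain and stack real/imag parts as channels.
        f = torch.fft.rfft2(x, norm="ortho")
        f = torch.cat([f.real, f.imag], dim=1)
        f = self.freq(f)
        real, imag = f.chunk(2, dim=1)
        # Back to the spatial domain at the original resolution.
        f = torch.fft.irfft2(torch.complex(real, imag), s=(h, w), norm="ortho")
        return x + self.spatial(x) + f
```

Under the first reading of the question, the `modules_body` part of CAB (between the `# ResBlock` and `###` comments) would be swapped for a block like this while `CALayer` is kept; under the second reading, the whole CAB would be replaced by it.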