octave conv cannot support input of arbitrary size #7
Comments
Do you mean the tensor will not be restored to its original size by upsampling if its side length is odd? If so, manually padding one pixel may be one possible solution. Hopefully, it will not have much negative influence.
@d-li14 Yes, I wrote a bit of code that solves it:

```python
import numpy as np
import torch.nn as nn
import torch.nn.functional as F

class padToSameSize(nn.Module):
    def __init__(self):
        super(padToSameSize, self).__init__()

    def forward(self, lTensor, rTensor):
        # Spatial (H, W) sizes of the two tensors.
        hwOfLTensor = np.array(lTensor.size()[2:], dtype=int)
        hwOfRtensor = np.array(rTensor.size()[2:], dtype=int)
        # The elementwise max is the target size; the difference is the padding.
        maxHW = np.max([hwOfLTensor, hwOfRtensor], axis=0)
        padHWOfLTensor = maxHW - hwOfLTensor
        padHWOfRTensor = maxHW - hwOfRtensor
        # F.pad takes (left, right, top, bottom); pad only the left and top edges.
        lTensor = F.pad(lTensor, pad=[int(padHWOfLTensor[1]), 0, int(padHWOfLTensor[0]), 0])
        rTensor = F.pad(rTensor, pad=[int(padHWOfRTensor[1]), 0, int(padHWOfRTensor[0]), 0])
        return lTensor, rTensor
```
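The same idea can be checked without PyTorch; this is a minimal NumPy stand-in for the padding step (`pad_to_same_hw` is a hypothetical helper, using `np.pad` in place of `F.pad` on NCHW-shaped arrays):

```python
import numpy as np

def pad_to_same_hw(a, b):
    """Zero-pad the top/left of two NCHW arrays so they share H and W."""
    ha, wa = a.shape[2:]
    hb, wb = b.shape[2:]
    h, w = max(ha, hb), max(wa, wb)
    a = np.pad(a, [(0, 0), (0, 0), (h - ha, 0), (w - wa, 0)])
    b = np.pad(b, [(0, 0), (0, 0), (h - hb, 0), (w - wb, 0)])
    return a, b

# A 149x149 high-frequency map vs. a low-frequency map that came back
# from 2x upsampling as 150x150: pad one pixel so shapes agree.
hi = np.zeros((1, 3, 149, 149))
lo_up = np.zeros((1, 3, 150, 150))
hi, lo_up = pad_to_same_hw(hi, lo_up)
assert hi.shape == lo_up.shape == (1, 3, 150, 150)
```

Padding only the top/left mirrors the workaround above; a symmetric split of the padding would also work.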
@Stinky-Tofu Good job, that is an effective workaround!
If the network downsamples n times, each input side length must be 2^a (a >= n); other sizes, such as 600x1000, are not supported.
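A quick illustration of why odd intermediate sizes break the round trip: stride-2 downsampling halves a side with floor division, so repeated 2x upsampling cannot recover the original length unless every intermediate size is even (`round_trip` is a hypothetical sketch, assuming stride-2 stages and 2x upsampling):

```python
def round_trip(side, times):
    """Side length after `times` stride-2 downsamples and 2x upsamples."""
    for _ in range(times):
        side = side // 2          # e.g. stride-2 conv or pooling
    return side * (2 ** times)    # repeated 2x upsampling

assert round_trip(1000, 4) == 992   # 1000 -> 62 -> 992: one pixel lost per odd stage
assert round_trip(1024, 4) == 1024  # powers of two survive any depth
```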