
octave conv cannot support input at any size #7

Closed
Stinky-Tofu opened this issue Aug 12, 2019 · 3 comments

@Stinky-Tofu commented Aug 12, 2019

If the network is downsampled n times, the input side lengths must be of the form 2^a (a >= n); other sizes, such as 600x1000, are not supported.
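
For illustration, a minimal sketch (not from the thread; it only uses standard PyTorch ops) of how an odd side length breaks the downsample/upsample round trip:

import torch
import torch.nn.functional as F

x = torch.randn(1, 1, 75, 125)           # e.g. 600x1000 after three stride-2 stages
low = F.avg_pool2d(x, kernel_size=2)      # 75x125 -> 37x62 (floor division drops a pixel)
up = F.interpolate(low, scale_factor=2)   # 37x62 -> 74x124, no longer matches 75x125
print(x.shape, up.shape)                  # sizes differ, so the two branches cannot be summed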

@d-li14 (Owner) commented Aug 12, 2019

Do you mean the tensor will not be restored to its original size by upsampling when its side length is odd? If so, manually padding one pixel may be a possible solution. Hopefully, it will not have much negative influence.

@Stinky-Tofu (Author) commented Aug 12, 2019

@d-li14 Yes, I wrote a little code that solves it:

import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F


class padToSameSize(nn.Module):
    """Zero-pads two tensors (on the top/left) so their spatial sizes match."""
    def __init__(self):
        super(padToSameSize, self).__init__()

    def forward(self, lTensor, rTensor):
        # Spatial (H, W) sizes of both inputs.
        hwOfLTensor = np.array(lTensor.size()[2:], dtype=int)
        hwOfRTensor = np.array(rTensor.size()[2:], dtype=int)
        # Target size is the element-wise maximum of the two.
        maxHW = np.max([hwOfLTensor, hwOfRTensor], axis=0)
        padHWOfLTensor = maxHW - hwOfLTensor
        padHWOfRTensor = maxHW - hwOfRTensor
        # F.pad takes (left, right, top, bottom); pad only the left and top edges.
        lTensor = F.pad(lTensor, pad=[int(padHWOfLTensor[1]), 0, int(padHWOfLTensor[0]), 0])
        rTensor = F.pad(rTensor, pad=[int(padHWOfRTensor[1]), 0, int(padHWOfRTensor[0]), 0])
        return lTensor, rTensor
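
For context, a hypothetical usage inside an octave-conv forward pass (the tensor names and shapes below are assumptions for illustration, not from the repo):

pad = padToSameSize()
x_high = torch.randn(1, 8, 75, 125)              # high-frequency branch at full resolution
x_low = torch.randn(1, 8, 37, 62)                # low-frequency branch at half resolution
x_low_up = F.interpolate(x_low, scale_factor=2)  # upsample -> 74x124, one pixel short
x_high, x_low_up = pad(x_high, x_low_up)         # both padded to 75x125
out = x_high + x_low_up                          # element-wise sum now works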

@d-li14 (Owner) commented Aug 12, 2019

@Stinky-Tofu Good job, that is an effective workaround!

d-li14 closed this as completed Aug 12, 2019