Can you provide .so compiling setting? #17
Comments
@laisimiao The shift op is compiled on the fly and saved to your system cache folder, so you can run it directly without setup.py.
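For context, on-the-fly compilation in PyTorch is typically done with torch.utils.cpp_extension.load, which builds the extension on first use and caches the result (by default under ~/.cache/torch_extensions). A minimal sketch, assuming source files named shift_cuda.cpp and shift_cuda_kernel.cu (hypothetical names, not taken from this repo):

    import torch
    from torch.utils.cpp_extension import load

    # JIT-compile the extension on the first run; later runs reuse the
    # cached build under ~/.cache/torch_extensions, so no setup.py is needed.
    shift_cuda = load(
        name='shift_cuda',
        sources=['shift_cuda.cpp', 'shift_cuda_kernel.cu'],  # assumed file names
        verbose=True,
    )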
@niujinshuchong Thanks. Recently I ran into a weird issue with this snippet:

    import torch
    import torch.nn as nn
    # Shift is the CUDA shift op from this repo; the import path is assumed.
    from shift_cuda import Shift

    x = torch.arange(5)
    y = torch.arange(5)
    mesh = torch.meshgrid([x, y])
    xy = torch.stack([mesh[0], mesh[1]], dim=0)
    a = torch.cat([xy, xy, xy], dim=0)[:5, :, :].unsqueeze(0).cuda()
    print(a.shape)  # B, C, H, W

    class AxialShift(nn.Module):
        def __init__(self, dim, shift_size):
            super().__init__()
            self.shift = Shift(shift_size, dim)

        def forward(self, x):
            return self.shift(x)

    shift = AxialShift(2, 5).cuda()
    y = shift(a)
    print(y.shape)

It gives errors like this:
But if I provide shift with
@laisimiao That's weird. Could you maybe try a = a.contiguous()?
Oh, I found the reason: it gives errors when the input data type is
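The exact dtype is cut off above, but note that torch.arange(5) with no dtype argument returns an int64 tensor, and custom CUDA kernels that only register floating-point implementations will reject it. A hedged sketch of the usual workaround, assuming the shift kernel expects float32 (not confirmed in this thread):

    # torch.arange defaults to torch.int64; cast before calling the CUDA op.
    # Assumes the shift kernel is only implemented for float32.
    a = a.float()
    y = shift(a)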
Great.
Like many projects that ship torch CUDA extensions, a setup.py is usually provided to compile and generate a .so file that can then be loaded and used. Could you provide such a method for using the as_shift op?