
Can you provide .so compiling setting? #17

Closed
laisimiao opened this issue Jun 8, 2022 · 5 comments

Comments

@laisimiao

Many projects provide torch CUDA extensions together with a setup.py that compiles and generates a .so file, so the ops can be loaded and used. Could you provide such a method for using the as_shift op?

@niujinshuchong
Collaborator

@laisimiao The shift op is compiled on the fly and saved to your system cache folder, so you can run it directly without a setup.py.
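For context, here is a rough sketch of how such on-the-fly compilation caches typically work (hypothetical helper name, not code from this repo): the CUDA source is hashed, and the compiled binary is reused from a cache directory whenever the same hash is seen again, so compilation happens only once per kernel source.

```python
import hashlib
import os
import tempfile

def cached_kernel_path(source, cache_dir=None):
    """Return a cache path derived from the CUDA kernel source.

    If a compiled binary already exists at this path, compilation is
    skipped; otherwise the compiler is invoked once and the result saved.
    """
    cache_dir = cache_dir or tempfile.gettempdir()
    digest = hashlib.sha1(source.encode()).hexdigest()
    return os.path.join(cache_dir, digest + ".cubin")

# Identical source always maps to the same cached file.
p1 = cached_kernel_path("__global__ void shift(float* x) {}")
p2 = cached_kernel_path("__global__ void shift(float* x) {}")
assert p1 == p2
```

This also explains the hash-named `.cubin.cu` temp file that shows up in the error message later in this thread.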

@laisimiao
Author

@niujinshuchong Thanks. Recently I found a weird issue:

import torch
import torch.nn as nn
# Shift is this repo's CUDA shift op, compiled on the fly

x = torch.arange(5)
y = torch.arange(5)
mesh = torch.meshgrid([x, y])
xy = torch.stack([mesh[0], mesh[1]], dim=0)
a = torch.cat([xy, xy, xy], dim=0)[:5, :, :].unsqueeze(0).cuda()
print(a.shape)  # B,C,H,W

class AxialShift(nn.Module):
    def __init__(self, dim, shift_size):
        super().__init__()
        self.shift = Shift(shift_size, dim)

    def forward(self, x):
        return self.shift(x)

shift = AxialShift(2, 5).cuda()
y = shift(a)
print(y.shape)

It gives errors like this:

cupy.cuda.compiler.CompileException: /tmp/tmpigcfzd1m/617aebd77e7545b257dbbd01681f6aa29014226e.cubin.cu(6): error: identifier "None" is undefined

/tmp/tmpigcfzd1m/617aebd77e7545b257dbbd01681f6aa29014226e.cubin.cu(6): error: identifier "None" is undefined

/tmp/tmpigcfzd1m/617aebd77e7545b257dbbd01681f6aa29014226e.cubin.cu(15): error: identifier "None" is undefined

3 errors detected in the compilation of "/tmp/tmpigcfzd1m/617aebd77e7545b257dbbd01681f6aa29014226e.cubin.cu".

But if I feed shift with x = torch.randn(1,5,5,5).cuda() instead of a in the example above, there is no error. Why does this happen?
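One plausible mechanism for this error (an assumption about how the kernel source is templated, not something confirmed in this thread): the CUDA source is generated by substituting the tensor's dtype through a dtype-to-C-type table, and an integer dtype with no table entry yields Python's None, which gets pasted verbatim into the source that nvcc then rejects. A minimal sketch with hypothetical names:

```python
# Hypothetical dtype table with only floating-point entries.
DTYPE_TO_CTYPE = {"float32": "float", "float16": "half"}

def render_kernel(dtype_name):
    # dict.get returns None for a missing key such as "int64";
    # f-string formatting then embeds the literal text "None".
    ctype = DTYPE_TO_CTYPE.get(dtype_name)
    return f"__global__ void shift({ctype}* data) {{ }}"

print(render_kernel("int64"))    # source contains: None* data
print(render_kernel("float32"))  # source contains: float* data
```

This would produce exactly the `identifier "None" is undefined` compile errors shown above for integer inputs, while floating-point inputs compile fine.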

@niujinshuchong
Collaborator

@laisimiao That's weird. Could you maybe try a = a.contiguous()?

@laisimiao
Author

Oh, I found the reason: it errors when the input dtype is int but works fine with a float dtype.
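The straightforward workaround is to cast integer inputs to a floating dtype before calling the op (e.g. a = a.float() in the example above). A pure-Python sketch of such a dtype guard, with hypothetical names:

```python
# Dtypes the shift kernel is assumed to support.
SUPPORTED_DTYPES = {"float16", "float32", "float64"}

def coerce_dtype(dtype_name, default="float32"):
    """Map an input dtype name to one the shift kernel supports.

    Integer dtypes such as int64 fall back to float32, mirroring the
    a = a.float() cast that fixes the error in this thread.
    """
    return dtype_name if dtype_name in SUPPORTED_DTYPES else default

print(coerce_dtype("int64"))    # float32
print(coerce_dtype("float16"))  # float16
```

In torch terms, the same check is `x.is_floating_point()`, and the cast is `x = x.float()` when it returns False.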

@niujinshuchong
Collaborator

Great.
