H3 / LongConvKernel - l_max=None isn't working #2

Closed
grazder opened this issue Mar 9, 2023 · 3 comments


grazder commented Mar 9, 2023

In the H3 model, it is noted here that:

l_max: the maximum kernel length, also denoted by L. Set l_max=None to always use a global kernel

But this doesn't work, because of

return torch.randn(self.channels, self.H, self.L) * 0.002

which leads to a torch.randn(int, int, None) call and the error:

TypeError: randn(): argument 'size' must be tuple of ints, but found element of type NoneType at pos 3

So, are global kernels currently supported? Or, to approximate a global kernel, should I just use a large l_max value?

DanFu09 (Contributor) commented Mar 9, 2023 via email

grazder (Author) commented Mar 9, 2023

OK, thanks!
It would be clearer if there were an assert or something similar.
That would be helpful for users like me who use just the model implementation without the rest of the framework.
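The suggested guard could look something like the sketch below. This is a minimal illustration, not the repo's actual code: the helper name and the exact message are hypothetical, assuming a LongConvKernel-style module that builds its parameter shape from channels, H, and l_max.

```python
def kernel_shape(channels, H, l_max):
    """Compute the shape for the randomly initialized long-conv kernel.

    Hypothetical helper mirroring the suggestion above: fail fast with a
    clear message when l_max is None, instead of letting
    torch.randn(channels, H, None) raise the opaque
    "argument 'size' must be tuple of ints" TypeError.
    """
    assert l_max is not None, (
        "l_max=None (global kernel) is not supported by this standalone "
        "implementation; pass an explicit l_max (e.g. the sequence length)."
    )
    return (channels, H, l_max)  # would be passed to torch.randn(*shape)
```

With a check like this in place, constructing the kernel with l_max=None raises an AssertionError explaining the limitation up front, rather than failing deep inside torch.randn.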

DanFu09 (Contributor) commented Mar 9, 2023 via email

DanFu09 closed this as completed Sep 26, 2023