Dynamic output tensor shape #13
If I understand correctly, you need to relate the shape of the output to the shapes of the inputs.
You can use negative shape markers other than -1. For example, if one input tensor …
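To make the idea of negative markers concrete: here is a minimal, self-contained sketch of how matching such markers against concrete input shapes could work. This is not Stannum's actual implementation; the function names and the convention (a marker below -1 that must resolve to the same size everywhere it appears) are illustrative assumptions.

```python
# Hypothetical sketch of dimension matching with negative markers.
# NOT stannum's implementation; names and conventions are made up.

def resolve_dims(registered, concrete):
    """Match a registered shape (which may contain negative markers)
    against a concrete shape; return a dict mapping each marker < -1
    to the size it resolved to."""
    assert len(registered) == len(concrete)
    resolved = {}
    for marker, size in zip(registered, concrete):
        if marker >= 0:
            assert marker == size, "fixed dimension mismatch"
        elif marker == -1:
            continue  # "any dimension": no constraint recorded
        else:
            # Same marker appearing twice must resolve to the same size.
            if marker in resolved:
                assert resolved[marker] == size, "matched dims disagree"
            else:
                resolved[marker] = size
    return resolved

def output_shape(registered_out, resolved):
    """Turn a registered output shape into a concrete one using the
    marker sizes resolved from the inputs."""
    return tuple(resolved[d] if d < 0 else d for d in registered_out)

# An input registered as (-2, 3), seen with concrete shape (8, 3),
# lets an output registered as (-2, 5) resolve to (8, 5).
```

Under this convention the output shape never needs -1: every output dimension is either fixed or tied to an input dimension through a shared marker.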
Hey, thanks for the answer. This leads me to two questions: …
I think you don't have to return it. Why do you need to return the …
Yeah, that's a problem. I've been thinking about a nice API since you filed this issue. If we allow shape calculation only for output tensors and intermediate fields, I guess a minor change to the code is sufficient. But if we also allow shape calculation for input tensors, designing a nice API gets a lot more complicated.
I wrote a prototype in the branch … I guess these arguments are sufficient and general enough to do any dimension calculation for output tensors. You can have additional attributes in a …
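The prototype's exact signature is not preserved in this thread; as a hedged illustration only, a shape-calculation hook for output tensors might look like the following, where `register_output` and the callback signature are hypothetical names, not Stannum's API.

```python
# Hypothetical illustration of a shape-calculation callback for output
# tensors; the real prototype's API may differ.

from typing import Callable, Dict, Tuple

Shape = Tuple[int, ...]
ShapeCalc = Callable[[Dict[str, Shape]], Shape]

def register_output(name: str, calc_shape: ShapeCalc,
                    registry: Dict[str, ShapeCalc]) -> None:
    """Associate an output tensor name with a function that derives its
    concrete shape from the concrete shapes of the input tensors."""
    registry[name] = calc_shape

# Example: a "same"-padded convolution whose output keeps the spatial
# dims of input "x" but changes the channel count to 16.
registry: Dict[str, ShapeCalc] = {}
register_output(
    "y",
    lambda ins: (ins["x"][0], 16, ins["x"][2], ins["x"][3]),
    registry,
)
print(registry["y"]({"x": (4, 3, 32, 32)}))  # (4, 16, 32, 32)
```

A callback receiving all input shapes at once is general enough to express any output-shape rule, including ones (like convolution arithmetic) that no fixed marker scheme can encode.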
It's OK not to return it. The pipeline is as follows: I precompute some operation on the tensor … Unfortunately, I cannot test the new branch at the moment; I'll let you know if I manage to!
This is implemented in v0.9.0.
Hi!

I'm writing a convolution-like operator using Stannum. It can be used throughout a neural network, meaning each layer may have a different input/output shape. When I try to register the output tensor, I get this error:

```
AssertionError: Dim = -1 is not allowed when registering output tensors but only registering input tensors
```

Does it mean I have to template and recompile the kernel for each layer of the neural network?
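For context on what "template and recompile per layer" would amount to: it is essentially a factory that builds (and caches) one specialized kernel per concrete shape, so each distinct layer shape pays the compilation cost once. A shape-agnostic sketch, with a plain Python stand-in for the compilation step (this is not Stannum or Taichi code):

```python
# Sketch of per-shape kernel templating with a cache: each distinct
# output shape triggers at most one "compilation". Illustrative only.

from functools import lru_cache

@lru_cache(maxsize=None)
def build_kernel(out_shape):
    """Stand-in for compiling a kernel specialized to one output shape."""
    def kernel(x):
        # A real kernel would run the convolution here; this stub just
        # reports the shape it was specialized for.
        return out_shape
    return kernel

k1 = build_kernel((8, 16))
k2 = build_kernel((8, 16))   # cache hit: same object, no recompilation
assert k1 is k2
```

The downside the question is getting at: with many distinct layer shapes this still means many compiled variants, which is exactly what dynamic output shapes would avoid.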
For reference, here is the whole kernel/tube construction: …