[TOPI] Operator overloading issue when dealing with zero-rank tensor #3240
Comments
@kevinthesun please follow up to propose a fix for this.
ping @kevinthesun
Is it possible to force the binary op to resolve to the correct implementation inside tvm.compute?
Because it is not context-dependent, it is harder to force that. We could, however, make a TensorSlice multiplied by a scalar always return an Expr (which should fix your case).
@kevinthesun what is the status on this?
@tqchen Did you mean forcing mul-scalar to always return an Expr?
It depends on the type. If it is a TensorSlice with rank 0, we could always return an Expr.
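A minimal sketch of that dispatch rule, assuming the 0.x `tvm.tensor.TensorSlice` API; this is a hypothetical helper for illustration, not the actual TVM patch:

```python
import tvm

def _as_expr(x):
    # Hypothetical helper: a 0-rank TensorSlice is lowered to an Expr by
    # calling its tensor with its (empty) indices, just like scale() below.
    if isinstance(x, tvm.tensor.TensorSlice) and len(x.tensor.shape) == 0:
        return x.tensor(*x.indices)
    return x

def mul(lhs, rhs):
    # With 0-rank operands lowered to Exprs first, the product is an Expr,
    # so comm_reducer (tvm.sum) accepts it instead of rejecting a Tensor.
    return _as_expr(lhs) * _as_expr(rhs)
```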
After thinking a bit more about it, I think the problem is that we do need to represent the 0-rank tensor as a TensorSlice, so in the above case we should instead write (note the call `scale()`):

```python
import tvm

n = 10
A = tvm.placeholder((n,), name='A')
scale = tvm.placeholder((), name='scale')
k = tvm.reduce_axis((0, n), name="k")
fcompute = lambda: tvm.sum(A[k] * scale(), axis=k)
C = tvm.compute((), fcompute, name="C")
```
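The key difference is the call: `scale()` on the 0-rank placeholder yields a scalar Expr (a read of its single element), so `A[k] * scale()` stays plain Expr arithmetic that `tvm.sum` accepts, rather than dispatching to the topi broadcast multiply, which returns a Tensor.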
#3612 makes the test cases more conservative, so that the topi behavior can remain the same.
PR #1029 overloads the binary ops for Tensor to use the topi broadcast ops when topi is imported. This causes tvm.compute to fail when a zero-rank tensor appears in the fcompute body, since a topi broadcast op returns a Tensor while comm_reducer requires an Expr:
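The failing snippet is not included in this excerpt; a minimal repro, assumed from the suggested fix above (the only difference being the use of `scale` directly instead of `scale()`), against the 0.x tvm/topi API:

```python
import tvm
import topi  # importing topi installs the broadcast-op overloads on Tensor

n = 10
A = tvm.placeholder((n,), name='A')
scale = tvm.placeholder((), name='scale')  # zero-rank tensor
k = tvm.reduce_axis((0, n), name="k")
# A[k] * scale dispatches to the topi broadcast multiply and returns a
# Tensor, so tvm.sum (comm_reducer) rejects the reduction body.
C = tvm.compute((), lambda: tvm.sum(A[k] * scale, axis=k), name="C")
```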
Error msg:
@tqchen @jroesch @yzhliu