
[TOPI] Operator overloading issue when dealing with zero-rank tensor #3240

Closed · kevinthesun opened this issue May 24, 2019 · 9 comments

@kevinthesun (Contributor)

PR #1029 overloads the binary operators on tensors to use topi broadcast ops once topi is imported. This causes tvm.compute to fail when a zero-rank tensor appears in the fcompute body, since a topi broadcast op returns a Tensor while comm_reducer requires an Expr:

import tvm
from tvm import relay  # importing relay also pulls in topi, which installs the operator overloads

n = 10
A = tvm.placeholder((n, ), name='A')
scale = tvm.placeholder((), name='scale')  # zero-rank tensor
k = tvm.reduce_axis((0, n), name="k")
fcompute = lambda: tvm.sum(A[k] * scale, axis=k)  # A[k] * scale yields a Tensor, not an Expr
C = tvm.compute((), fcompute, name="C")

Error message:

Traceback (most recent call last):
  File "test.py", line 9, in <module>
    C = tvm.compute((), fcompute, name="C")
  File "/usr/local/Cellar/python3/3.6.1/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tvm-0.6.dev0-py3.6-macosx-10.11-x86_64.egg/tvm/api.py", line 309, in compute
    body = fcompute(*[v.var for v in dim_var])
  File "test.py", line 8, in <lambda>
    fcompute = lambda : tvm.sum(A[k] * scale, axis=k)
  File "/usr/local/Cellar/python3/3.6.1/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tvm-0.6.dev0-py3.6-macosx-10.11-x86_64.egg/tvm/api.py", line 819, in reducer
    return _make_reduce(expr, axis, where)
  File "/usr/local/Cellar/python3/3.6.1/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tvm-0.6.dev0-py3.6-macosx-10.11-x86_64.egg/tvm/api.py", line 795, in _make_reduce
    assert isinstance(expr, _expr.Expr)
AssertionError
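
The mismatch can be seen directly by continuing the script above (a sketch against the same 0.6-era API; the printed type is inferred from the assertion failure, not re-verified):

prod = A[k] * scale   # dispatches to topi's overloaded broadcast multiply
print(type(prod))     # a Tensor, not the Expr that tvm.sum's reducer requires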

@tqchen @jroesch @yzhliu

@tqchen (Member) commented May 30, 2019

@kevinthesun please follow up with a proposed fix for this.

@tqchen (Member) commented Jul 6, 2019

ping @kevinthesun

@kevinthesun (Contributor, Author)

Is it possible to force the binary op to resolve to the correct implementation inside tvm.compute?

@tqchen (Member) commented Jul 9, 2019

Because the overloading is not context-dependent, it is hard to force that. We could, however, force TensorSlice-times-scalar to always produce an Expr (which should fix your case).

@MarisaKirisame (Contributor)

@kevinthesun what is the status on this?

@kevinthesun (Contributor, Author)

@tqchen Did you mean forcing mul-scalar to always return an Expr?

@tqchen (Member) commented Jul 17, 2019

It depends on the type. If it is a TensorSlice into a 0-rank tensor, we could always return an Expr.
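
A minimal sketch of that coercion against the 0.6-era Python API; the helper name as_scalar_expr is hypothetical, not an existing TVM function:

import tvm

def as_scalar_expr(value):
    # Calling a 0-rank tensor with no indices yields a scalar Expr.
    if isinstance(value, tvm.tensor.Tensor) and len(value.shape) == 0:
        return value()
    # Likewise for a slice that refers to a 0-rank tensor.
    if isinstance(value, tvm.tensor.TensorSlice) and len(value.tensor.shape) == 0:
        return value.tensor()
    return value

The overloaded binary op could run both operands through such a helper before choosing between the Expr path and the topi broadcast path.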

@tqchen (Member) commented Jul 24, 2019

After thinking a bit more about it, I believe the problem is that we do need to represent the 0-rank tensor as a TensorSlice, so in the above case we should instead write (note the call on scale):

import tvm
from tvm import relay

n = 10
A = tvm.placeholder((n, ), name='A')
scale = tvm.placeholder((), name='scale')
k = tvm.reduce_axis((0, n), name="k")
fcompute = lambda: tvm.sum(A[k] * scale(), axis=k)  # scale() reads the 0-rank tensor as a scalar Expr
C = tvm.compute((), fcompute, name="C")
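
With scale() in place the reducer body is an Expr, so the compute is constructed successfully. A quick check, assuming the standard schedule/lower API of that release:

s = tvm.create_schedule(C.op)
print(tvm.lower(s, [A, scale, C], simple_mode=True))  # lowers without the assertion error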

@tqchen (Member) commented Jul 24, 2019

#3612 makes the test cases more conservative, so that the topi behavior can remain the same.

tqchen closed this as completed Jul 24, 2019