[TOPI]Add op argwhere #3994
Conversation
Force-pushed from 4c6398b to a8f9d03
Left a few minor comments. Otherwise, LGTM
LGTM
LGTM
CI issue fixed, sorry for the problem.
Thanks @tqchen! @icemelon9, could you merge the PR if there are no further comments?
Thanks @wweic!
@wweic Hi, I am using the TVM TensorFlow frontend to convert a frozen TensorFlow pb file into a TVM Relay IR graph. The model contains an argwhere op that you support here, but when I run the conversion script, the conversion fails with the following traceback:

```
Traceback (most recent call last):
  File "from_hfnet_to_tvm.py", line 118, in
  File "/home/admin/tvm/python/tvm/relay/frontend/tensorflow.py", line 2393, in from_tensorflow
  File "/home/admin/tvm/python/tvm/relay/frontend/tensorflow.py", line 2056, in from_tensorflow
  File "/home/admin/tvm/python/tvm/relay/frontend/tensorflow.py", line 2056, in
  File "/home/admin/tvm/python/tvm/relay/frontend/common.py", line 466, in infer_shape
  File "/home/admin/tvm/topi/python/topi/util.py", line 164, in get_const_tuple
  File "/home/admin/tvm/topi/python/topi/util.py", line 164, in
  File "/home/admin/tvm/topi/python/topi/util.py", line 101, in get_const_int
  File "/home/admin/tvm/python/tvm/_ffi/_ctypes/function.py", line 210, in call
tvm._ffi.base.TVMError: Traceback (most recent call last):
```

After inspecting the code, I find the reason is that _infer_shape needs to return a const tuple, while the first dim of argwhere is dynamic, so it fails. Any idea how to fix this?
@EddieBurning Can you please open a thread on https://discuss.tvm.ai/ and add the script to reproduce the error? Thanks.
@zhiics OK
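The failure discussed above can be illustrated with plain numpy (hypothetical arrays, chosen only for illustration): the first dimension of `argwhere`'s output depends on the *values* in the input, not just its shape, so it can never be folded into a compile-time constant tuple.

```python
import numpy as np

# Two inputs with the SAME static shape (2, 2) ...
x = np.array([[0, 1], [2, 0]])
y = np.array([[1, 1], [1, 1]])

# ... but DIFFERENT argwhere output shapes, because the row count
# equals the number of non-zero elements, which is data-dependent.
print(np.argwhere(x).shape)  # (2, 2): two non-zero elements
print(np.argwhere(y).shape)  # (4, 2): four non-zero elements
```

This is why `_infer_shape`, which expects a tuple of constant integers, cannot describe `argwhere`'s output, and why the PR routes the shape through a shape function instead.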
* Add op argwhere
* Move shape func to _algorithm.py
* Add lint rule
* Raise exception if rank is not supported
* Move argwhere to transform
* Add argwhere example
* Fix lint
* Add 1-d support
* Cleanup
* Add more dtype support
* CR comment
* Improve error message
* Docs
* Raise exception
* master:
  * Fix split's last factor issue (apache#4044)
  * [COMMUNITY] ajtulloch -> committer (apache#4043)
  * [TOPI]Add op argwhere (apache#3994)
  * [topi] add ARM v8.2 udot (uint8) support (apache#3978)
  * [COMMUNITY] anijain2305 -> reviewer (apache#4036)
  * [QNN] Renaming dense operator. (apache#4033)
  * [Relay][Compile_engine] Int64 shape handling for outputs. (apache#4031)
  * Add dmlc-core to the list of installed header directories. (apache#4035)
  * [ARITH] migrate indexdiv/mod to floordiv/mod (apache#4008)
  * [Relay] Move prelude to text format (apache#3939)
  * make tvm compilable by gcc 4.9.2 (apache#4032)
  * [AUTOTVM][DOCS] Add a link to the defining network description of auto-tuning tutorial (apache#4023)
  * [ARITH] cleanup the indexmod/div on python side (apache#4028)
  * [Fix] Add more pad_mode support for onnx converter (apache#4029)
  * Add parser support for ReLU tflite operator (apache#4022)
  * Additional MXNet Convolution and Deconvolution tests (apache#4026)
  * docs: minor spelling tweaks (apache#4027)
This is in preparation for the TensorFlow operator `Where` when `x` and `y` are missing. The semantics are exactly the same as numpy `argwhere`. The output shape of `argwhere` is `[dynamic, ndim of input tensor]`, so I use a shape function to return the output shape. In the compute definition, I use symbolic vars to replace any dynamic shape.

cc @icemelon9 @kevinthesun @Huyuwei @Laurawly @yongwww @zhiics @tqchen