[TOPI]Add op argwhere #3994

Merged — 14 commits merged into apache:master on Oct 1, 2019

Conversation

@wweic (Contributor) commented Sep 23, 2019

This is in preparation for the TensorFlow Where operator when x and y are missing. The semantics are exactly the same as numpy argwhere.
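For readers who want the exact semantics, here is a minimal numpy illustration (the PR targets Relay/TOPI; this snippet only demonstrates the expected behavior):

```python
import numpy as np

# np.argwhere returns the indices of every non-zero element,
# one row per element, with one column per input dimension.
x = np.array([[3, 0],
              [0, 5]])
print(np.argwhere(x))
# [[0 0]
#  [1 1]]   -> shape (2, 2): (number of non-zero elements, x.ndim)
```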

The output shape of argwhere is [dynamic, ndim of the input tensor], so I use a shape function to return the output shape. In the compute definition, a symbolic var stands in for the dynamic dimension.
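Concretely, the approach looks something like the following hybrid-script shape function for the 2-D case (a sketch modeled on the PR's pattern; the decorator's import path and the exact function name may differ across TVM versions):

```python
from tvm.hybrid import script  # location of `script` varies across TVM versions

@script
def _argwhere_shape_func_2d(condition):
    # Output shape of argwhere on a 2-D input: (num_nonzero, 2).
    out = output_tensor((2,), "int64")
    out[0] = int64(0)   # number of non-zero elements, counted at runtime
    out[1] = int64(2)   # ndim of the input
    for i in range(condition.shape[0]):
        for j in range(condition.shape[1]):
            if condition[i, j] != 0:
                out[0] += int64(1)
    return out
```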

cc @icemelon9 @kevinthesun @Huyuwei @Laurawly @yongwww @zhiics @tqchen

@wweic force-pushed the op-where branch 6 times, most recently from 4c6398b to a8f9d03 on September 23, 2019 20:23
@wweic changed the title from "Add op argwhere" to "[TOPIC]Add op argwhere" on Sep 23, 2019
@wweic changed the title from "[TOPIC]Add op argwhere" to "[TOPI]Add op argwhere" on Sep 24, 2019
Review threads (file-level comments):
- python/tvm/relay/op/_transform.py (resolved)
- python/tvm/relay/op/_transform.py (resolved)
- topi/python/topi/argwhere.py (outdated, resolved)
- src/relay/op/algorithm/argwhere.cc (outdated, resolved)
- python/tvm/relay/op/algorithm.py (outdated, resolved)
@wweic (Contributor, author) commented Sep 26, 2019

@icemelon9 @Huyuwei @Laurawly @yongwww @zhiics @tqchen please take a look again. thanks!

Review threads (file-level comments):
- topi/python/topi/argwhere.py (outdated, resolved)
- topi/python/topi/generic/where.py (outdated, resolved)
@zhiics (Member) left a comment:

Left a few minor comments. Otherwise, LGTM.

Review threads (file-level comments):
- src/relay/op/tensor/transform.cc (outdated, resolved)
- tests/python/relay/test_any.py (outdated, resolved)
@Laurawly (Contributor) left a comment:

LGTM

@zhiics (Member) left a comment:

LGTM

@wweic (Contributor, author) commented Sep 30, 2019

@icemelon9 @tqchen @zhiics CI is failing because it ran out of disk space. Could one of you help fix the CI? Thanks.

@tqchen (Member) commented Oct 1, 2019

CI issue fixed, sorry for the problem.

@wweic (Contributor, author) commented Oct 1, 2019

Thanks @tqchen! @icemelon9, could you merge the PR if there are no further comments?

@icemelon merged commit fa4d3ec into apache:master on Oct 1, 2019
@icemelon (Member) commented Oct 1, 2019

Thanks @wweic

@EddieBurning commented:

@wweic Hi, I am using the TVM TensorFlow frontend to convert a frozen TensorFlow pb file into a TVM Relay IR graph. The model contains an argwhere op you support here, but when I run the conversion script, the conversion fails. The traceback is as follows:

Traceback (most recent call last):
  File "from_hfnet_to_tvm.py", line 118, in <module>
    outputs=output_nodes)
  File "/home/admin/tvm/python/tvm/relay/frontend/tensorflow.py", line 2393, in from_tensorflow
    mod, params = g.from_tensorflow(graph, layout, shape, outputs)
  File "/home/admin/tvm/python/tvm/relay/frontend/tensorflow.py", line 2056, in from_tensorflow
    out_shapes = [_infer_shape(node_item) for node_item in self._nodes[node.name]]
  File "/home/admin/tvm/python/tvm/relay/frontend/tensorflow.py", line 2056, in <listcomp>
    out_shapes = [_infer_shape(node_item) for node_item in self._nodes[node.name]]
  File "/home/admin/tvm/python/tvm/relay/frontend/common.py", line 466, in infer_shape
    out_shapes = get_const_tuple(out_type.checked_type.shape)
  File "/home/admin/tvm/topi/python/topi/util.py", line 164, in get_const_tuple
    return tuple(get_const_int(elem) for elem in in_tuple)
  File "/home/admin/tvm/topi/python/topi/util.py", line 164, in <genexpr>
    return tuple(get_const_int(elem) for elem in in_tuple)
  File "/home/admin/tvm/topi/python/topi/util.py", line 101, in get_const_int
    expr = tvm.ir_pass.Simplify(expr)
  File "/home/admin/tvm/python/tvm/_ffi/_ctypes/function.py", line 210, in __call__
    raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
  [bt] (6) /home/admin/tvm/build/libtvm.so(TVMFuncCall+0x61) [0x7fdfa34cb8c1]
  [bt] (5) /home/admin/tvm/build/libtvm.so(+0x44ae9c) [0x7fdfa2cc7e9c]
  [bt] (4) /home/admin/tvm/build/libtvm.so(tvm::ir::Simplify(tvm::Expr, tvm::Map<tvm::Var, tvm::Range, void, void>)+0x21f) [0x7fdfa2d70faf]
  [bt] (3) /home/admin/tvm/build/libtvm.so(tvm::arith::Analyzer::Simplify(tvm::Expr const&)+0x1e8) [0x7fdfa2dd4e58]
  [bt] (2) /home/admin/tvm/build/libtvm.so(tvm::arith::RewriteSimplifier::operator()(tvm::Expr const&)+0xa9) [0x7fdfa2d73779]
  [bt] (1) /home/admin/tvm/build/libtvm.so(tvm::IRFunctor<tvm::Expr (tvm::NodeRef const&, tvm::Expr const&, tvm::ir::IRMutator*)>::operator()(tvm::NodeRef const&, tvm::Expr const&, tvm::ir::IRMutator*) const+0x10a) [0x7fdfa2d292ba]
  [bt] (0) /home/admin/tvm/build/libtvm.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x32) [0x7fdfa2cca012]
  File "/home/admin/tvm/include/tvm/node/ir_functor.h", line 91
TVMError: Check failed: type_index < func_.size() && func_[type_index] != nullptr: IRFunctor calls un-registered function on type Any

After inspecting the code, I find the reason is that _infer_shape needs to return a const tuple, while the first dim of argwhere is dynamic, so it fails. Any idea how to fix this?
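(For context, a hypothetical sketch of the mismatch, using the TVM Python API of roughly that era; `relay.Module` became `tvm.IRModule` in later releases:)

```python
from tvm import relay

x = relay.var("x", shape=(2, 2), dtype="float32")
f = relay.Function([x], relay.argwhere(x))
mod = relay.Module.from_expr(f)            # tvm.IRModule.from_expr in newer TVM
mod = relay.transform.InferType()(mod)
print(mod["main"].ret_type)
# Tensor[(?, 2), int32] -- the first dimension is Any, so get_const_tuple()
# cannot reduce it to a Python int and raises the error above.
```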

@zhiics (Member) commented Oct 15, 2019

@EddieBurning Can you please open a thread in https://discuss.tvm.ai/ and add the script to reproduce the error? Thanks.

@EddieBurning commented:

@zhiics OK

anijain2305 pushed a commit to anijain2305/tvm that referenced this pull request Oct 17, 2019
* Add op argwhere

* Move shape func to _algorithm.py

* Add lint rule

* Raise exception if rank is not supported

* move argwhere to transform

* Add argwhere example

* Fix lint

* Add 1-d support

* cleanup

* Add more dtype support

* CR comment

* Improve error message

* Docs

* raise exception
wweic added a commit to neo-ai/tvm that referenced this pull request Oct 18, 2019, with the same commit list as above.
petrex added a commit to petrex/tvm that referenced this pull request Oct 29, 2019
* master:
  Fix split's last factor issue (apache#4044)
  [COMMUNITY] ajtulloch -> committer (apache#4043)
  [TOPI]Add op argwhere (apache#3994)
  [topi] add ARM v8.2 udot (uint8) support (apache#3978)
  [COMMUNITY] anijain2305 -> reviewer (apache#4036)
  [QNN] Renaming dense operator. (apache#4033)
  [Relay][Compile_engine] Int64 shape handling for outputs. (apache#4031)
  Add dmlc-core to the list of installed header directories. (apache#4035)
  [ARITH] migrate indexdiv/mod to floordiv/mod (apache#4008)
  [Relay] Move prelude to text format (apache#3939)
  make tvm compilable by gcc 4.9.2 (apache#4032)
  [AUTOTVM][DOCS] Add a link to the defining network description of auto-tuning tutorial (apache#4023)
  [ARITH] cleanup the indexmod/div on python side (apache#4028)
  [Fix] Add more pad_mode support for onnx converter (apache#4029)
  Add parser support for ReLU tflite operator (apache#4022)
  Additional MXNet Convolution and Deconvolution tests (apache#4026)
  docs: minor spelling tweaks (apache#4027)