
Could you provide a simple tutorial on how to run onnx_to_plugin for a simple operator? #35

Closed
mhmmdjafarg opened this issue Apr 7, 2023 · 4 comments

@mhmmdjafarg

Hi, thank you for your great work.
I just wonder how to run onnx_to_plugin on the Tile operator. I know it is supported by TPAT 1.0.
I have tried:
python3 onnx_to_plugin.py -i model/pfe_baseline32000.onnx -o model/pfe_baseline_tpat.onnx -t Tile
python3 onnx_to_plugin.py -i model/pfe_baseline32000.onnx -o model/pfe_baseline_tpat.onnx -n Tile_16 -dynamic=true -min=1 -max=256 -opt=128
But it returns:

Couldn't find reusable plugin for node Tile_16

  7: tvm::relay::StorageAllocaBaseVisitor::DeviceAwareVisitExpr_(tvm::relay::FunctionNode const*)
  6: tvm::relay::StorageAllocaBaseVisitor::GetToken(tvm::RelayExpr const&)
  5: tvm::relay::ExprVisitor::VisitExpr(tvm::RelayExpr const&)
  4: tvm::relay::transform::DeviceAwareExprVisitor::VisitExpr_(tvm::relay::CallNode const*)
  3: tvm::relay::StorageAllocator::DeviceAwareVisitExpr_(tvm::relay::CallNode const*)
  2: tvm::relay::StorageAllocaBaseVisitor::CreateToken(tvm::RelayExprNode const*, bool)
  1: tvm::relay::StorageAllocator::CreateTokenOnDevice(tvm::RelayExprNode const*, DLDeviceType, bool)
  0: tvm::relay::StorageAllocator::GetMemorySize(tvm::relay::StorageToken*)
  File "/workspace/TPAT/3rdparty/blazerml-tvm/src/relay/backend/graph_plan_memory.cc", line 408
TVMError:
---------------------------------------------------------------
An error occurred during the execution of TVM.
For more information, please see: https://tvm.apache.org/docs/errors.html
---------------------------------------------------------------
  Check failed: (pval != nullptr) is false: Cannot allocate memory symbolic tensor shape [?, ?, ?]

Thank you

@wenqf11 (Collaborator) commented Apr 10, 2023

@mhmmdjafarg do you mean dynamic=false is okay but it fails when dynamic=true? Could you provide your ONNX model?

@mhmmdjafarg (Author) commented Apr 10, 2023

No, both return errors. I just wonder what the correct way to do it is. Maybe like what you did for the Resize operator?
Here's the ONNX file:
https://drive.google.com/file/d/1WTJe7GNknIqt6cOi9C-izY9D1bKqvDjk/view?usp=sharing

@wenqf11 (Collaborator) commented Apr 10, 2023

@mhmmdjafarg Currently, we only support the Tile operator with specific shapes; symbolic shapes are not supported. You should fix your Tile shape except for the first dimension (the batch_size dimension).
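
For example, you can run ONNX shape inference and print the inferred shapes to see which dimensions of the Tile output are still symbolic. This is just a rough sketch, assuming the model path from your command:

import onnx
from onnx import shape_inference

model = onnx.load("model/pfe_baseline32000.onnx")
inferred = shape_inference.infer_shapes(model)

# Symbolic dimensions show up as named parameters or "?" instead of integers
for vi in inferred.graph.value_info:
    dims = [d.dim_value if d.HasField("dim_value") else (d.dim_param or "?")
            for d in vi.type.tensor_type.shape.dim]
    print(vi.name, dims)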

@mhmmdjafarg (Author) commented Jun 3, 2023

Thank you for your reply, now I understand. Would you happen to have any idea how I can fix the shape? My input tensor has shape (32000, 1, 32) and the tile repeats are a 1-D tensor with values (1, 20, 1).

I found a solution using onnx.helper, adding the value_info to the graph. I'll get back to you later. Thanks.
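
Roughly what I did, as a quick sketch; the output tensor name of Tile_16 below is a placeholder, so use the actual name from the graph:

import onnx
from onnx import helper, TensorProto

model = onnx.load("model/pfe_baseline32000.onnx")

# Input is (32000, 1, 32) and the repeats are (1, 20, 1),
# so the Tile output should be (32000, 20, 32).
tile_output = "tile_16_output"  # placeholder: actual output tensor name of Tile_16
model.graph.value_info.extend([
    helper.make_tensor_value_info(tile_output, TensorProto.FLOAT, [32000, 20, 32])
])

onnx.save(model, "model/pfe_baseline32000_fixed.onnx")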
