
Use ONNX parser to create SequenceMap-based models #277

Merged (4 commits, Jun 5, 2023)

Conversation

jantonguirao (Contributor)

Signed-off-by: Joaquin Anton janton@nvidia.com

Description

  • Changed the Batch processing tutorial to use the more intuitive ONNX parser API to represent the model

Motivation and Context

  • The motivation is to encourage users to use the ONNX parser

Signed-off-by: Joaquin Anton <janton@nvidia.com>
@jantonguirao (Contributor, Author)

@jcwchen This has been pending review for a while. Could you please help me find the right person to review it? Thanks!

@jcwchen (Member) left a comment


> @jcwchen This has been pending review for a while. Could you please help me find the right person to review it? Thanks!

Thank you for the reminder! I will help. I was trying to run this notebook and, regarding this line:

onnxruntime.InferenceSession("tutorials/pad_to_largest.onnx")

I bumped into: Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from tutorials/pad_to_largest.onnx failed:Node () output arg (max_shape) type inference failed. Is this expected? I was using ORT 1.14.1 + ONNX 1.13.1.

@jantonguirao (Contributor, Author)

> > @jcwchen This has been pending review for a while. Could you please help me find the right person to review it? Thanks!
>
> Thank you for the reminder! I will help. I was trying to run this notebook and, regarding this line:
>
> onnxruntime.InferenceSession("tutorials/pad_to_largest.onnx")
>
> I bumped into: Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from tutorials/pad_to_largest.onnx failed:Node () output arg (max_shape) type inference failed. Is this expected? I was using ORT 1.14.1 + ONNX 1.13.1.

Right, I forgot to mention that. There was a bug, which @gramalingam fixed here: onnx/onnx#4880
I guess we can wait for the next ONNX release before merging.

@jcwchen (Member)

jcwchen commented Mar 9, 2023

> Right, I forgot to mention that. There was a bug, which @gramalingam fixed here: onnx/onnx#4880
> I guess we can wait for the next ONNX release before merging.

Thank you for the info. Although that fix has been included in the fresh ONNX 1.13.1 patch, I guess we still need to wait for ONNX Runtime to consume that ONNX commit. (ONNX Runtime 1.14.1 still uses an ONNX 1.13.0 commit.)

@jcwchen jcwchen merged commit 8303bb2 into onnx:main Jun 5, 2023