
Adding optional ops in contrib ops #7946

Merged · 26 commits merged into microsoft:master on Jun 24, 2021

Conversation

@neginraoof (Contributor) commented Jun 4, 2021:

Following the design discussion we had about adding the Optional type to ONNX, we are planning to add the following ops for the Optional type:

1- Optional (Construct)
2- OptionalHasElement
3- OptionalGetElement
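
For context, a minimal sketch of how one of these contrib-op schema registrations might look, following the builder style quoted later in this thread (the macro name, doc string, and type lists here are illustrative assumptions, not the PR's exact code):

// Illustrative sketch only; names and type lists are assumptions.
ONNX_CONTRIB_OPERATOR_SCHEMA(Optional)
    .SetDomain(kMSDomain)
    .SinceVersion(1)
    .SetDoc("Constructs an optional value containing either the input element or nothing.")
    .Input(0, "input", "The input element.", "V", OpSchema::Optional)
    .Output(0, "output", "The optional output enclosing the input element.", "O")
    .TypeConstraint(
        "V",
        {"tensor(float)", "seq(tensor(float))"},
        "Constrains input type to tensor or sequence types.")
    .TypeConstraint(
        "O",
        {"optional(tensor(float))", "optional(seq(tensor(float)))"},
        "Constrains output type to optional tensor or optional sequence types.");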

@neginraoof requested a review from a team as a code owner on June 4, 2021 07:05
@neginraoof changed the title from "Adding optional operators" to "Adding optional ops in contrib ops" on Jun 4, 2021
@neginraoof (Author) commented:

This PR is currently pending updates to Sparse Tensor support, as well as the Optional Type PR for ONNX: onnx/onnx#3508

@neginraoof (Author) commented:

@askhade @gramalingam @hariharans29
Please let me know if you have comments. Thanks!

if (numOutputs != 1) {
fail_type_inference("OptionalEmpty is expected to have 1 output.");
}
ctx.getOutputType(0)
A Contributor commented:

Minor nit: May be we should validate that the type is one of the two allowed by the TypeConstraint above. (In the long run, we should eliminate the type-constraint, but for now we are forced to have a TypeConstraint.)
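
A rough sketch of what such a validation could look like, assuming the TypeProto optional wrapper from onnx/onnx#3508 (the field and case names here are assumptions, not the PR's code):

// Hypothetical check: verify the inferred optional output wraps one of the
// two element kinds permitted by the TypeConstraint above.
auto* output_type = ctx.getOutputType(0);
const auto& elem_type = output_type->optional_type().elem_type();
if (elem_type.value_case() != TypeProto::kTensorType &&
    elem_type.value_case() != TypeProto::kSequenceType) {
  fail_type_inference("Optional output must wrap a tensor or sequence type.");
}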

@neginraoof (Author) replied:

I am planning to add all the types in the standard spec; I kept only float here for simplicity of the POC. I will extend it.

"O",
{"optional(tensor(float))",
"optional(seq(tensor(float)))"},
"Constrains output type to all optional tensor or optional sequence types.")
A Contributor commented:

It is not really all optional tensors, right? We are restricting it to an optional tensor of type float and an optional sequence of tensors of type float... a tensor of type double won't work.

@neginraoof (Author) replied:

I am planning to add all the types in the standard spec; I kept only float here for simplicity of the POC. I will extend it.

{"optional(tensor(float))", "optional(tensor(uint8))", "optional(tensor(uint16))",
"optional(tensor(uint32))", "optional(tensor(uint64))", "optional(tensor(int8))",
"optional(tensor(int16))", "optional(tensor(int32))", "optional(tensor(int64))",
"optional(tensor(float16))", "optional(tensor(float))", "optional(tensor(double))",
@hwangdeyu (Contributor) commented Jun 16, 2021:

We already had optional(tensor(float)) at the beginning. Should this line be removed, or should float be changed to bfloat16?

"optional(seq(tensor(float)))", "optional(seq(tensor(uint8)))", "optional(seq(tensor(uint16)))",
"optional(seq(tensor(uint32)))", "optional(seq(tensor(uint64)))", "optional(seq(tensor(int8)))",
"optional(seq(tensor(int16)))", "optional(seq(tensor(int32)))", "optional(seq(tensor(int64)))",
"optional(seq(tensor(float16)))", "optional(seq(tensor(float)))", "optional(seq(tensor(double)))",
A Contributor commented:

ditto

.SetDomain(kMSDomain)
.SinceVersion(1)
.SetDoc(OptionalHasElement_ver1_doc)
.Input(0, "input", "The optional input.", "O")
A Member commented:

nit: Would it be prudent to make this operator more generic in the number of input operands it supports? Could it take N inputs (the first one required, the remaining N-1 optional) and produce one output boolean per input? That way we can avoid inserting this operator for every optional output coming from a layer, possibly gaining runtime advantages from using one op instead of many.
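
For illustration, a rough sketch of what that proposed variadic form might look like (hypothetical; as the replies below note, the PR keeps the single-input form):

// Hypothetical variadic variant, not what this PR implements:
// N optional inputs, one boolean output per input.
ONNX_CONTRIB_OPERATOR_SCHEMA(OptionalHasElement)
    .SetDomain(kMSDomain)
    .SinceVersion(1)
    .Input(0, "inputs", "Optional inputs to test for presence.", "O",
           OpSchema::Variadic, /*is_homogeneous*/ false, /*min_arity*/ 1)
    .Output(0, "outputs", "For each input, true if it contains an element.", "B",
            OpSchema::Variadic, /*is_homogeneous*/ true, /*min_arity*/ 1)
    .TypeConstraint("O",
                    {"optional(tensor(float))", "optional(seq(tensor(float)))"},
                    "Constrains input types to optional tensor or optional sequence types.")
    .TypeConstraint("B", {"tensor(bool)"}, "Boolean presence flags.");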

@neginraoof (Author) replied:

I spent some time thinking about this idea. So far I have not come up with an example of this feature among the PyTorch models and use cases we've had; do you have specific code logic in mind?
Usually, if a model has a list with multiple optional elements, the list has dynamic length, so it does not help to export it to an operator with a fixed number ('N') of inputs.

A Contributor commented:

This could be done for any unary operator in principle, but we don't have it for any existing unary operator, which suggests it is not common enough. Unless there is a use case, my preference is to keep it simple.

@@ -90,6 +90,10 @@ class NhwcInferenceContext : public InferenceContext {
return nullptr;
}

const SparseTensorProto* getInputSparseData(size_t) const override {
A Member commented:

just checking: do we need all these changes in this PR to keep the CIs happy?

@neginraoof (Author) replied:

Yes, this is a minor part of Dmitri's PR for sparse tensors.

@@ -350,6 +350,7 @@ common::Status NodeArg::UpdateTypeAndShape(const ONNX_NAMESPACE::TypeProto& inpu
} break;
case TypeProto::kSequenceType:
case TypeProto::kMapType:
case TypeProto::kOptionalType:
A Member commented:

same comment as above

@neginraoof (Author) replied:

This is needed for this PR.

@hariharans29 (Member) commented:

LGTM - We can merge once others have had one final look as well. Thanks for this PR.

@@ -89,8 +89,8 @@ def construct_model(self, batch, hidden_size, sequence_length, model_path):
]

graph = helper.make_graph(nodes, graph_name, inputs, outputs, initializer=initializers)
-    model = helper.make_model(graph)
+    model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 13)])
A Contributor commented:

minor nit: I think we might just be able to add ir_version=7 to the call to make_model.

@neginraoof merged commit 80b7b13 into microsoft:master on Jun 24, 2021
siweic0 pushed a commit to siweic0/onnxruntime-web that referenced this pull request May 9, 2024