Adding optional ops in contrib ops #7946
Conversation
Force-pushed from f6e9f4d to 2ae7d0a
This PR is currently pending updates to Sparse Tensor support, as well as the Optional Type PR for ONNX: onnx/onnx#3508
@askhade @gramalingam @hariharans29
if (numOutputs != 1) {
  fail_type_inference("OptionalEmpty is expected to have 1 output.");
}
ctx.getOutputType(0)
Minor nit: maybe we should validate that the type is one of the two allowed by the TypeConstraint above. (In the long run we should eliminate the type constraint, but for now we are forced to have one.)
I am planning to add all the types in the standard spec. Kept float only here for simplicity of the POC. I will extend it.
"O", | ||
{"optional(tensor(float))", | ||
"optional(seq(tensor(float)))"}, | ||
"Constrains output type to all optional tensor or optional sequence types.") |
It is not really all optional tensors, right? We are restricting it to optional tensors of type float and optional sequences of tensors of type float; a tensor of type double won't work.
I am planning to add all the types in the standard spec. Kept float only here for simplicity of the POC. Will extend it.
{"optional(tensor(float))", "optional(tensor(uint8))", "optional(tensor(uint16))", | ||
"optional(tensor(uint32))", "optional(tensor(uint64))", "optional(tensor(int8))", | ||
"optional(tensor(int16))", "optional(tensor(int32))", "optional(tensor(int64))", | ||
"optional(tensor(float16))", "optional(tensor(float))", "optional(tensor(double))", |
We already had optional(tensor(float)) at the beginning. Should this line be removed, or should float be bfloat16 here?
"optional(seq(tensor(float)))", "optional(seq(tensor(uint8)))", "optional(seq(tensor(uint16)))", | ||
"optional(seq(tensor(uint32)))", "optional(seq(tensor(uint64)))", "optional(seq(tensor(int8)))", | ||
"optional(seq(tensor(int16)))", "optional(seq(tensor(int32)))", "optional(seq(tensor(int64)))", | ||
"optional(seq(tensor(float16)))", "optional(seq(tensor(float)))", "optional(seq(tensor(double)))", |
ditto
…ime into neraoof/optionalType
Force-pushed from 94e4dcd to 5e6f43d
.SetDomain(kMSDomain)
.SinceVersion(1)
.SetDoc(OptionalHasElement_ver1_doc)
.Input(0, "input", "The optional input.", "O")
nit: Would it be prudent to make this operator more generic in the number of input operands it supports? Could it take 'N' inputs (the first one required, the remaining N-1 optional) and produce the same number of boolean outputs as inputs? That way we could avoid inserting this operator for every optional output coming from a layer (possibly gaining runtime advantages of using one op instead of many)?
I spent some time thinking about this idea. So far I have not come up with an example of this pattern among the PyTorch models and use cases we've had. Do you have specific code logic in mind?
Usually in these models, if there is a list with multiple optional elements, the list has dynamic length, so it does not help to export it to an operator with a fixed number ('N') of inputs.
This could be done for any unary operator, in principle. But we don't have it for any existing unary operator, which suggests it is not common enough. Unless there is a use case, my preference is to keep it simple.
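For concreteness, a minimal sketch of the pattern under discussion, using hypothetical value names; it assumes the single-input schema from this PR, where one OptionalHasElement node is inserted per optional value. The N-ary variant suggested above would fold these into one node with N inputs and N boolean outputs.

```python
from onnx import helper

# Hypothetical optional values produced by some layer in a graph.
optional_names = ["OptA", "OptB", "OptC"]

# Current design: one single-input OptionalHasElement node per optional value.
nodes = [
    helper.make_node("OptionalHasElement", [name], [name + "_has"],
                     domain="com.microsoft")
    for name in optional_names
]
```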
@@ -90,6 +90,10 @@ class NhwcInferenceContext : public InferenceContext {
    return nullptr;
  }

+  const SparseTensorProto* getInputSparseData(size_t) const override {
Just checking: do we need all these changes in this PR to keep the CIs happy?
Yes, this is a minor part of Dmitri's PR for sparse tensors.
@@ -350,6 +350,7 @@ common::Status NodeArg::UpdateTypeAndShape(const ONNX_NAMESPACE::TypeProto& inpu
    } break;
    case TypeProto::kSequenceType:
    case TypeProto::kMapType:
+   case TypeProto::kOptionalType:
same comment as above
This is needed for this PR.
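For context, a sketch of what an optional-typed value looks like on the Python side, assuming an onnx build that includes the Optional type from onnx/onnx#3508 (helper.make_optional_type_proto in later onnx releases). NodeArg::UpdateTypeAndShape has to handle the new kOptionalType case when such a TypeProto flows through the graph.

```python
from onnx import helper, TensorProto

# Build a TypeProto for optional(tensor(float)) and wrap it in a
# ValueInfoProto, as a graph input or output with this type would
# appear on the wire.
tensor_type = helper.make_tensor_type_proto(TensorProto.FLOAT, [2])
optional_type = helper.make_optional_type_proto(tensor_type)
value_info = helper.make_value_info("OptionalInput", optional_type)
```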
…neraoof/optionalType
Force-pushed from a9284ad to 69485fa
LGTM - We can merge once others have had one final look as well. Thanks for this PR.
@@ -89,8 +89,8 @@ def construct_model(self, batch, hidden_size, sequence_length, model_path):
        ]

        graph = helper.make_graph(nodes, graph_name, inputs, outputs, initializer=initializers)
-       model = helper.make_model(graph)
+       model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 13)])
minor nit: I think we might just be able to add ir_version=7 to the call to make_model.
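A minimal sketch of that suggestion, using a trivial stand-in graph for the one built in construct_model: helper.make_model forwards extra keyword arguments to ModelProto fields, so ir_version can be pinned directly instead of pinning the default-domain opset.

```python
from onnx import helper, TensorProto

# Trivial stand-in graph (the real test builds a larger one).
x = helper.make_tensor_value_info("X", TensorProto.FLOAT, [1])
y = helper.make_tensor_value_info("Y", TensorProto.FLOAT, [1])
graph = helper.make_graph([helper.make_node("Identity", ["X"], ["Y"])],
                          "g", [x], [y])

# Reviewer's suggestion: pin the IR version directly.
model = helper.make_model(graph, ir_version=7)

# Alternative used in the PR: pin the default-domain opset instead.
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 13)])
```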
* Added optional const spec
Following the design discussion we had about adding the Optional type to ONNX, we are planning to add the following ops for the Optional type (a usage sketch follows the list):
1- Optional (Construct)
2- OptionalHasElement
3- OptionalGetElement
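For illustration, a hedged sketch of how these three ops could compose in a graph, assuming they are registered under the com.microsoft contrib domain as in this PR (the op names and the boolean output of OptionalHasElement follow the discussion above; the ops later became standard ONNX ops).

```python
from onnx import helper, TensorProto

x = helper.make_tensor_value_info("X", TensorProto.FLOAT, [2])
has = helper.make_tensor_value_info("HasElem", TensorProto.BOOL, [])
y = helper.make_tensor_value_info("Y", TensorProto.FLOAT, [2])

nodes = [
    # 1- Wrap a tensor into an optional value (the "construct" op).
    helper.make_node("Optional", ["X"], ["Opt"], domain="com.microsoft"),
    # 2- Check whether the optional contains an element (true here).
    helper.make_node("OptionalHasElement", ["Opt"], ["HasElem"], domain="com.microsoft"),
    # 3- Unwrap the optional back into the underlying tensor.
    helper.make_node("OptionalGetElement", ["Opt"], ["Y"], domain="com.microsoft"),
]

graph = helper.make_graph(nodes, "optional_demo", [x], [has, y])
model = helper.make_model(
    graph,
    opset_imports=[helper.make_opsetid("", 13),
                   helper.make_opsetid("com.microsoft", 1)],
)
```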