[Nuphar] improvements in symbolic_shape_infer and model editor #1787
Conversation
Symbolic shape inference fixes to support more sophisticated models; remove unused code from model_quantizer.
    continue
    if in_n.op_type == 'TopK' and len(in_n.input) == 1:
        upgrade_topk_op(nf, in_n)
Should be calling upgrade_op instead of upgrade_topk_op? #Closed
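For context, the opset-10 TopK change moved k from a node attribute to a second input tensor, which is presumably what the upgrade helper performs. A dependency-free sketch of that transformation, with the node modeled as a plain dict; all names here are illustrative, not the actual model editor API:

```python
# Hypothetical sketch of the attribute-to-input upgrade for TopK.
# Opset 1 stored k as an attribute; opset 10+ takes k as a second
# input tensor (a 1-D int64 initializer).
def upgrade_topk_op_sketch(node, initializers):
    k = node['attributes'].pop('k')   # remove the legacy attribute
    k_name = node['name'] + '_k'
    initializers[k_name] = [k]        # stands in for a 1-D int64 tensor [k]
    node['inputs'].append(k_name)     # TopK now has inputs (X, K)
    return node

inits = {}
n = {'name': 'tk', 'op_type': 'TopK',
     'inputs': ['X'], 'outputs': ['Values', 'Indices'],
     'attributes': {'k': 3}}
upgrade_topk_op_sketch(n, inits)
```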
    'Cast' : self._infer_Cast,
    'CategoryMapper' : self._infer_CategoryMapper,
    'Compress' : self._infer_Compress,
    'Concat' : self._infer_Concat,
    'Conv' : self._infer_Conv,
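The table above is the per-op dispatch pattern: op-type strings map to bound handler methods, with a fallback when no handler is registered. A minimal self-contained sketch of that pattern (class and handler names are illustrative):

```python
# Minimal sketch of the per-op dispatch pattern used by the inference
# table above: unknown op types fall back to a default (None) result.
class ShapeInferencer:
    def __init__(self):
        # op type -> bound handler method, mirroring the dict above
        self.dispatcher_ = {
            'Cast':   self._infer_Cast,
            'Concat': self._infer_Concat,
        }

    def _infer_Cast(self, node):
        return 'cast:' + node['input'][0]

    def _infer_Concat(self, node):
        return 'concat:' + ','.join(node['input'])

    def infer(self, node):
        handler = self.dispatcher_.get(node['op_type'])
        return handler(node) if handler else None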
Not sure if we try to maintain alphabetic order or not. If we do, Conv might need to be moved below ConstantOfShape. #Closed
LGTM
LGTM
    if self.verbose_ > 2:
        print('Inferencing subgraph of node {} with output({}...): {}'.format(node.name, node.output[0], node.op_type))
    assert len(node.input) == len(subgraph.input)
    subgraph_implicit_input = set()
Maybe a comment calling out that the node inputs are not passed directly to the subgraph would avoid potential confusion about the slightly indirect relationship between these lengths. #Closed
    subgraph_implicit_input = set()
    for sn in subgraph.node:
        subgraph_implicit_input.update([i for i in sn.input if i in self.known_vi_])
    tmp_graph = helper.make_graph(list(subgraph.node),
A value in the subgraph can shadow an outer scope value. Not sure that this takes that into account as I think it's checking outer scope values first for matches. #ByDesign
Does this also need to take into account outputs of nodes in the subgraph? Those will also be considered to shadow an outer scope value.
In reply to: 323093408
known_vi_ would override when an output name in the subgraph shadows an implicit input. This behavior is consistent with the CPU provider when a subgraph output shadows an implicit input, i.e.:

    float aaa; // implicit input to subgraph
    float bbb;
    subgraph {
        ccc = aaa + bbb; // using implicit input
        bbb = Relu(ccc); // bbb in subgraph shadows the implicit input
    }

In reply to: 323675066
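The shadowing behavior described above follows naturally from keeping value infos in a dict keyed by name: a later write for a subgraph output simply overrides the outer-scope entry. A small sketch, with plain strings standing in for ValueInfoProto entries:

```python
# Sketch of the shadowing behavior: value infos live in a dict
# (like known_vi_), so when a subgraph node writes an output with
# the same name as an implicit input, the update overrides the
# outer-scope entry, matching the CPU provider's scoping.
known_vi = {'aaa': 'outer:aaa', 'bbb': 'outer:bbb'}  # implicit inputs

subgraph_nodes = [
    {'op': 'Add',  'inputs': ['aaa', 'bbb'], 'output': 'ccc'},
    {'op': 'Relu', 'inputs': ['ccc'],        'output': 'bbb'},  # shadows bbb
]

for sn in subgraph_nodes:
    known_vi[sn['output']] = 'subgraph:' + sn['output']
```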
    @@ -240,7 +274,34 @@ def _onnx_infer_single_node(self, node):
            vi.CopyFrom(self.tmp_mp_.graph.output[i_o])
            self.known_vi_[vi.name] = vi

        def _get_int_values(self, node):
        def _onnx_infer_subgraph(self, node, subgraph):
            if self.verbose_ > 2:
I don't think this will handle an If subgraph (input length to If is 1, but the subgraph has 0 explicit inputs), so the name may be misleading if someone tried to add If processing in the future. It may work if you remove the assert that the node's input length matches the subgraph's. #Closed
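To illustrate the mismatch: Scan and Loop bind node inputs 1:1 to subgraph inputs, while If passes only a condition and its branch subgraphs declare no explicit inputs, so the length assert would fail. A toy sketch, with nodes and subgraphs simplified to dicts:

```python
# Sketch of why the assert above rejects If: Scan's body declares one
# subgraph input per node input, but If's branches declare none even
# though the If node itself has one input (the condition).
scan_node = {'op_type': 'Scan', 'inputs': ['state0', 'x0']}
scan_body = {'inputs': ['state0_in', 'x0_in']}   # 1:1 with node inputs

if_node     = {'op_type': 'If', 'inputs': ['cond']}
then_branch = {'inputs': []}                     # no explicit inputs

def inputs_match(node, subgraph):
    # the condition the assert enforces
    return len(node['inputs']) == len(subgraph['inputs'])
```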
    subgraph = get_attribute(node, 'body')
    num_scan_inputs = get_attribute(node, 'num_scan_inputs')
    scan_input_axes = get_attribute(node, 'scan_input_axes', [0]*num_scan_inputs)
    num_scan_states = len(node.input) - num_scan_inputs
Negative axis values are supported here and in the output axes. Also there are a lot more ops that explicitly support negative axis following this PR: onnx/onnx#2281 #Closed
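A common way to handle this, following the ONNX convention that an axis may lie in [-rank, rank-1], is to normalize negative values by adding the tensor rank. A sketch (the helper name is illustrative):

```python
# Sketch of negative-axis normalization as suggested above: ONNX
# allows axis in [-rank, rank-1], where negative values count from
# the end, so they normalize by adding the tensor rank.
def handle_negative_axis(axis, rank):
    assert -rank <= axis < rank, 'axis out of range'
    return axis + rank if axis < 0 else axis

# e.g. normalizing scan_input_axes for rank-3 scan inputs
scan_input_axes = [0, -1]
rank = 3
normalized = [handle_negative_axis(a, rank) for a in scan_input_axes]
```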
LGTM
Description: This change improves symbolic shape inference to support more models. It also fixes a potential unaligned-buffer crash in onnxruntime_perf_test.
Motivation and Context