
Remove hard ORT dependency, set default implementation to pure #95

Merged
merged 25 commits from remove-ort into master
Jun 30, 2022

Conversation

orausch
Collaborator

@orausch orausch commented Jun 20, 2022

This PR makes the ORT dependency soft: if ORT is not installed, only ORT node expansion will fail.
There is a new CI branch that runs the tests without ORT.
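A soft dependency of this kind is typically implemented as a guarded import that fails only when the dependency is actually needed. A minimal sketch (the flag and function names here are illustrative, not the actual daceml code):

```python
# Sketch of a soft (optional) dependency: import ORT if present,
# otherwise record its absence and fail only on actual use.
try:
    import onnxruntime  # optional dependency
    ORT_AVAILABLE = True
except ImportError:
    onnxruntime = None
    ORT_AVAILABLE = False


def expand_ort_node(node):
    # Illustrative placeholder: real node expansion lives in daceml.
    if not ORT_AVAILABLE:
        raise RuntimeError(
            "onnxruntime is required to expand this node; "
            "install it or use a pure implementation")
    return "expanded"
```

With this pattern, importing the package succeeds without ORT, and only code paths that genuinely need ORT raise an error.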

Since our constant folding depends on ORT, we add more cleanup to the ONNX importer: it now uses onnxsim for preliminary simplification, which does much of the heavy lifting that our constant folding previously handled.

To keep tests from failing when ORT is missing, we also add some new pure op implementations. This includes improvements to the python_pure_op_implementation decorator to make these more compact. In a later PR, I will apply these improvements to other pure impls (I expect to be able to simplify at least 11 implementations).
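The decorator improvement described above can be pictured roughly as follows. This is a toy sketch with made-up names (not the daceml API) of how a `compute` dict of lambdas could inject derived variables into a pure op implementation:

```python
# Toy model of a decorator that evaluates a dict of lambdas against the
# node/inputs and passes the results into the implementation as variables.
# All names here are illustrative, not the actual daceml API.
import inspect


def pure_op_implementation(compute=None):
    compute = compute or {}

    def decorator(func):
        def wrapper(node, inputs):
            env = {}
            for name, fn in compute.items():
                # Supply only the arguments each lambda asks for.
                params = inspect.signature(fn).parameters
                sources = {'node': node, 'inputs': inputs}
                env[name] = fn(**{p: sources[p] for p in params})
            return func(inputs, **env)
        return wrapper
    return decorator


@pure_op_implementation(compute=dict(axis=lambda node: node['axis']))
def flatten(inputs, axis):
    # Flatten-style shape computation: collapse dims before/after `axis`.
    head = 1
    for d in inputs['shape'][:axis]:
        head *= d
    tail = 1
    for d in inputs['shape'][axis:]:
        tail *= d
    return [head, tail]
```

The point of the pattern is that each implementation only declares the values it needs, and the decorator computes them from the node, keeping the op bodies compact.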

Previously we relied heavily on our ORT-based constant folding; with ORT optional, this is no longer always available. The onnx-optimizer package performs a similar optimization, but it depends on the Python ORT build. We therefore use the onnxsim package for constant folding during import, which covers the case where ORT is not installed and our own constant folding cannot run.
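To illustrate what constant folding does, here is a toy pure-Python sketch over a tiny expression graph (this is not the onnxsim or daceml implementation, which operate on real ONNX/SDFG graphs):

```python
# Toy constant folding: any node whose inputs are all constants is
# evaluated ahead of time and replaced by a constant node.
import operator

OPS = {'add': operator.add, 'mul': operator.mul}


def fold_constants(graph):
    """graph maps name -> ('const', value) or (op, input1, input2)."""
    folded = dict(graph)
    changed = True
    while changed:
        changed = False
        for name, node in folded.items():
            if node[0] == 'const':
                continue
            op, a, b = node
            if folded[a][0] == 'const' and folded[b][0] == 'const':
                folded[name] = ('const', OPS[op](folded[a][1], folded[b][1]))
                changed = True
    return folded
```

For example, a graph computing `(2 + 3) * (2 + 3)` folds entirely down to constants, which is the kind of simplification that removes Shape/arithmetic subgraphs during ONNX import.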

The parent_pt_model argument to the ONNX importer is also removed. Rather than reading parameter values from the PyTorch model during ONNX import, the parameters are now expected as inputs to the model.

A new compute argument to the decorator enables computing values based on the node and its connectors. This makes it possible to move many more operators to this decorator (TODO in a later PR).
Previously, pure implementations removed unused inputs using
constant_folding.remove_node_and_computation.
This behavior was incorrect when the unused connector's source was also
used elsewhere.

To fix this, remove_node_and_computation now takes an optional 'connector'
argument that removes only that connector. Further, Python-based op
implementations now automatically remove input connectors that are
unused.
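The fix described above can be sketched in a few lines of pure Python: removing an unused input connector must delete the upstream computation only when no other consumer still uses it. Names and the edge representation here are illustrative, not the daceml API:

```python
# Toy sketch of connector removal. Edges are (src, dst, dst_connector)
# tuples. Removing a connector drops its incoming edge; the source node
# is reported as removable only if no remaining edge touches it.
def remove_connector(edges, node, connector):
    edge = next(e for e in edges if e[1] == node and e[2] == connector)
    edges.remove(edge)
    src = edge[0]
    # Conservative liveness check: src is still needed if it appears
    # anywhere else in the graph, as a producer or a consumer.
    still_used = any(e[0] == src or e[1] == src for e in edges)
    return edges, not still_used
```

In the buggy behavior, the source's computation was removed unconditionally, which broke graphs where the same source fed another node.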

codecov bot commented Jun 27, 2022

Codecov Report

Merging #95 (a3488aa) into master (ddd1120) will increase coverage by 7.32%.
The diff coverage is 93.55%.

@@            Coverage Diff             @@
##           master      #95      +/-   ##
==========================================
+ Coverage   60.07%   67.39%   +7.32%     
==========================================
  Files          58       58              
  Lines        6870     6955      +85     
==========================================
+ Hits         4127     4687     +560     
+ Misses       2743     2268     -475     
Impacted Files Coverage Δ
...ml/onnx/op_implementations/pure_implementations.py 69.60% <85.71%> (+4.39%) ⬆️
daceml/onnx/environments/onnxruntime.py 87.63% <93.10%> (+18.88%) ⬆️
daceml/onnx/onnx_importer.py 92.28% <96.88%> (-2.79%) ⬇️
daceml/onnx/__init__.py 100.00% <100.00%> (ø)
daceml/onnx/nodes/node_codegen.py 89.27% <100.00%> (+17.21%) ⬆️
.../onnx/op_implementations/img_op_implementations.py 66.79% <100.00%> (-0.37%) ⬇️
daceml/onnx/op_implementations/utils.py 100.00% <100.00%> (+4.76%) ⬆️
daceml/onnx/schema.py 95.62% <100.00%> (ø)
daceml/onnx/shape_inference/shape_inference.py 100.00% <100.00%> (ø)
daceml/ort_api/raw_api_bindings.py 95.58% <100.00%> (ø)
... and 24 more


Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update ddd1120...a3488aa.

@orausch orausch requested a review from tbennun June 29, 2022 15:14
Contributor

@tbennun tbennun left a comment


LGTM with some comments

return PureReshape.forward(node, state, sdfg)
@python_pure_op_implementation(compute=dict(
shape=lambda input, node:
[prod(input.shape[:node.axis]),
Contributor

cool

Collaborator Author

wait till you see the next PR :)



@op_implementation(op="Shape", name="pure")
class PureShape(ONNXForward):
Contributor

hopefully we can get rid of all of those :)


name = clean_onnx_name(unclean_name)
if unclean_name in self.inputs:
# remove the tensor from inputs since this is a consant
Contributor

Suggested change
# remove the tensor from inputs since this is a consant
# remove the tensor from inputs since this is a constant

function will be the name of the op that is being replaced.

The compute parameter enables you to compute a variable given the node and
it's inputs/outputs. This variable will be namespaced when parsing the function.
Contributor

Suggested change
it's inputs/outputs. This variable will be namespaced when parsing the function.
its inputs/outputs. This variable will be namespaced when parsing the function.

tests/pytorch/test_efficientnet_block.py
tests/transformation/test_pad_conv_fusion.py
dead_dataflow_elimination.DeadDataflowElimination()
]).apply_pass(sdfg, {})

# remove dangling nodes, this can happen iwth non-transients
Contributor

Suggested change
# remove dangling nodes, this can happen iwth non-transients
# remove dangling nodes, this can happen with non-transients

state.remove_edge(e)

pass_pipeline.Pipeline([
dead_dataflow_elimination.DeadDataflowElimination()
Contributor

interesting use. won't it remove much more than that single node's dependencies?

Collaborator Author

Potentially, but in the NN SDFGs there's usually no dead code.
We can fix it later if we run into issues.

@orausch orausch changed the title Make ORT dependency soft Remove hard ORT dependency, set default implementation to pure Jun 30, 2022
@orausch orausch merged commit bf73df5 into master Jun 30, 2022
@orausch orausch deleted the remove-ort branch June 30, 2022 11:19