
fix #3

Merged: 1 commit into vinx13:auto-tensorization on Jun 6, 2022
Conversation

spectrometerHBH

No description provided.

@vinx13 vinx13 merged commit 78c316a into vinx13:auto-tensorization Jun 6, 2022
vinx13 pushed a commit that referenced this pull request Mar 27, 2023
* Initial importer and testing scaffolding.

* Implement matmul operator and tests.

* Add a bunch of new operators.

* Add new ops

* [Relax][Onnx] Implement Div, Sigmoid, Softmax, Transpose and Unsqueeze ops

* skip test_reshape

* [Relax][ONNX] Implement BiasGelu and Gelu ops

* [Relax][ONNX] Implement Where op

* [Relax][ONNX] Add Multiple ONNX Frontend Support for Clip / Equal / Shape / Not / Tanh (#3)

* Rebase w/ Equal, Not, Tanh, Sqrt, Relu, Clip, Conv, Pow, Erf.

* Fix cumsum but still needs work.

* Fix initializer for CumSum. (#9)

* Add Constant, Squeeze & Sub (#10)

* Add squeeze.

* Add Constant.

* Add sub.

* Support reusing Relay ONNX operator converters in the Relax ONNX frontend (#8)

* [WIP] Support using Relay ops in the Relax ONNX frontend

Co-authored-by: Matthew Barrett  <mbarrett@octoml.ai>
Co-authored-by: Michalis Papadimitriou <mpapadimitriou@octoml.ai>

* [WIP] small fixes

Co-authored-by: Matthew Barrett  <mbarrett@octoml.ai>
Co-authored-by: Michalis Papadimitriou <mpapadimitriou@octoml.ai>

* [WIP] Support dynamic matmul and reshape

Co-authored-by: Matthew Barrett  <mbarrett@octoml.ai>
Co-authored-by: Michalis Papadimitriou <mpapadimitriou@octoml.ai>

* Address PR comments

---------

Co-authored-by: Matthew Barrett  <mbarrett@octoml.ai>
Co-authored-by: Michalis Papadimitriou <mpapadimitriou@octoml.ai>

* Add more ops (including all Reduce ops) using the relay frontend (apache#11)

* [WIP] add more ops. Some fail at the moment

* skip some tests

* Remove duplicate tests for squeeze

* Add Split op in the Relax ONNX frontend (apache#12)

* [Relax][ONNX] Add Split op

* Remove tmp

* Fix layer normalizations and Shape operator.

* Replace main loop with tvm testing.

* Simplify Slice for opset 13.

* [Relax][ONNX] Implement pad op

* Incorporate pad op, add static constantofshape op.

* Changes to shape to temporarily enable constantofshape in our models.

* Add initial tensor_to_shape implementation.

* Implemented dynamic broadcast_to to support expand and constantofshape.

* Changes sufficient for vortex end to end run.

* Formatting.

* Format tests.

* Re-add broadcast_to shape checking.

* Fix formatting.

* Remove overly strict manipulate check.

* Fix typing

* [Relax][Onnx] Implement Tile operator

* Switch to native relax attention importer.

* Address some of the PR comments

* Check for the imported model IR version

* switch from torch to numpy due to some incompatibility

* Fix make format.

* Clean up typing issues.

* Clarify variable name.

* Remove unneeded comprehension.

* Remove circular dependency.

* Add name sanitization for inputs

* Disable reshape rewrite pass until fixed.

* Fix long comment

* Update cpu image.

---------

Co-authored-by: Florin Blanaru <fblanaru@octoml.ai>
Co-authored-by: Xiyou Zhou <xiyou@octoml.ai>
Co-authored-by: Matthew Barrett  <mbarrett@octoml.ai>
Co-authored-by: Michalis Papadimitriou <mpapadimitriou@octoml.ai>
Co-authored-by: Florin Blanaru <florin.blanaru96@gmail.com>
Co-authored-by: sung <sunggg@umich.edu>
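One of the commits above ("Add name sanitization for inputs") addresses a common importer concern: ONNX graphs may name inputs with characters such as `/` or `:` that are not valid identifiers in the target IR. As an illustrative sketch only (the function name and exact rules here are assumptions, not the actual TVM implementation), a frontend can map such names to safe identifiers like this:

```python
import re

def sanitize_name(name: str) -> str:
    """Hypothetical helper: map an arbitrary ONNX tensor name to a
    valid identifier by replacing non-word characters with underscores
    and prefixing names that start with a digit."""
    sanitized = re.sub(r"\W", "_", name)
    if sanitized and sanitized[0].isdigit():
        sanitized = "_" + sanitized
    return sanitized or "_"

print(sanitize_name("model/input:0"))  # model_input_0
print(sanitize_name("3x_input"))       # _3x_input
```

A real frontend would also need to keep a mapping from original to sanitized names and deduplicate collisions, since two distinct ONNX names can sanitize to the same identifier.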