
Conversation

@titaiwangms (Contributor) commented Nov 7, 2023

Fixes https://github.com/microsoft/onnx-converters-private/issues/190

Together with pytorch/pytorch#112758, this PR moves the built-in function op mapping into torchlib.
NOTE: the module name of `operator.add` is `_operator`.
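For illustration, the note can be verified directly in CPython: the public `operator` module re-exports its C implementation, `_operator`, so that is the module name its functions report (standard library only, nothing torchlib-specific):

```python
import operator

# operator.py does "from _operator import *", so the C-level functions
# report "_operator" as their defining module.
print(operator.add.__module__)    # _operator
print(operator.add.__qualname__)  # add
```

This is why the note calls out `_operator`, not `operator`, as the name torchlib has to match.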

@titaiwangms added the `module: torchlib` label (Related to the torch/aten function lib in development) on Nov 7, 2023
@codecov (bot) commented Nov 7, 2023

Codecov Report

Merging #1135 (29e8bb5) into main (b0147d8) will decrease coverage by 0.02%.
The diff coverage is 80.00%.

@@            Coverage Diff             @@
##             main    #1135      +/-   ##
==========================================
- Coverage   78.40%   78.39%   -0.02%     
==========================================
  Files         118      118              
  Lines       15046    15054       +8     
  Branches     1604     1604              
==========================================
+ Hits        11797    11801       +4     
- Misses       2878     2882       +4     
  Partials      371      371              
| Files | Coverage Δ |
| --- | --- |
| onnxscript/function_libs/torch_lib/ops/core.py | 79.73% <80.00%> (-0.08%) ⬇️ |

@justinchuby (Collaborator):

LGTM. Cc @BowenBao for namespace naming conventions.

@justinchuby (Collaborator):

I'm thinking something like `python.operator` and `python.math` may be more principled.

@titaiwangms (Contributor, Author):

> I'm thinking something like `python.operator` and `python.math` may be more principled.

A different domain from `DOMAIN = "pkg.onnxscript.torch_lib"`? I'm not sure we should mark the difference in the graph, since they are the same (e.g., `operator.add` and `aten::add`) from the ONNX perspective.
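For concreteness, a minimal sketch of what a dual registration might look like in torchlib, assuming the `torch_op` decorator accepts a tuple of qualified names as it does for ATen overloads; the import paths and the `aten_add` body follow the general pattern of `ops/core.py` but are illustrative, not lifted from this PR:

```python
# Hypothetical sketch: one ONNX function registered under both the ATen name
# and the Python built-in name, so both trace targets share one decomposition.
from onnxscript.function_libs.torch_lib.registration import torch_op
from onnxscript.function_libs.torch_lib.tensor_typing import TReal
from onnxscript.onnx_opset import opset18 as op


@torch_op(("aten::add", "_operator::add"))
def aten_add(self: TReal, other: TReal, alpha: float = 1.0) -> TReal:
    """add.Tensor(Tensor self, Tensor other, *, Scalar alpha=1) -> Tensor"""
    alpha = op.CastLike(alpha, other)
    other = op.Mul(other, alpha)
    return op.Add(self, other)
```

Under this scheme both names live in the same function domain, which is the point being made: from the ONNX graph's perspective there is nothing to distinguish.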

@titaiwangms titaiwangms requested a review from BowenBao November 7, 2023 17:44
@justinchuby (Collaborator) commented Nov 7, 2023

I meant `python.operator::add` or `python.math::div`, to denote that they are Python built-ins and to avoid potential name conflicts (although unlikely).

@titaiwangms (Contributor, Author):

> I meant `python.operator::add` or `python.math::div`, to denote that they are Python built-ins and to avoid potential name conflicts (although unlikely).

In that case, string processing would need to live somewhere in the torchlib or converter registration to match `_operator.add` (the original name).
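A rough sketch of the kind of string processing that would be required under the `python.<module>` scheme (the helper and the alias table are hypothetical, written only to show where the `_operator` mismatch bites):

```python
# Hypothetical helper: map a "python.<module>::<op>" ONNX name back to the
# qualified name PyTorch records for the call target. The asymmetry is the
# point: operator.add really lives in "_operator", while math functions
# live in plain "math".
_MODULE_ALIASES = {"operator": "_operator"}  # assumed table, not from the PR


def onnx_name_to_python_target(onnx_name: str) -> str:
    domain, op_name = onnx_name.split("::")
    module = domain.removeprefix("python.")
    return f"{_MODULE_ALIASES.get(module, module)}.{op_name}"


assert onnx_name_to_python_target("python.operator::add") == "_operator.add"
assert onnx_name_to_python_target("python.math::div") == "math.div"
```

Keeping the original `_operator::add`-style names avoids carrying such a table around, which is the argument accepted in the next comment.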

@justinchuby (Collaborator):

SG. That's a good argument to keep the existing names.

@titaiwangms titaiwangms merged commit 662af2a into microsoft:main Nov 7, 2023
titaiwangms added a commit to pytorch/pytorch that referenced this pull request Nov 8, 2023

Fix microsoft/onnx-converters-private#190

This PR retires the built-in function mapping by adding the built-in ops to torchlib (microsoft/onnxscript#1135) and adds runtime tests to guard the op conversion.

More built-in ops are supported in torchlib as well.

[ghstack-poisoned]
pytorchmergebot pushed a commit to pytorch/pytorch that referenced this pull request Nov 14, 2023

Fix https://github.com/microsoft/onnx-converters-private/issues/190

This PR retires the built-in function mapping by adding the built-in ops to torchlib (microsoft/onnxscript#1135) and adds runtime tests to guard the op conversion.

More built-in ops are supported in torchlib as well.

~~NOTE: `native_batch_norm` regression is caused by microsoft/onnxscript#1140. Will fix it before I merge this.~~

Pull Request resolved: #112758
Approved by: https://github.com/justinchuby, https://github.com/thiagocrepaldi
Skylion007 pushed a commit to Skylion007/pytorch that referenced this pull request Nov 14, 2023