Conversation


@justinchuby (Collaborator) commented Sep 1, 2022

Stack from ghstack (oldest at bottom):

This is the 4th PR in the #83787 series. It enables the use of `@onnx_symbolic` across `torch.onnx`.

  • Backward breaking: Removed some symbolic functions from `__all__` because `@onnx_symbolic` is now used to register the same function under multiple aten names.
  • Decorate all symbolic functions with `@onnx_symbolic` (a conceptual sketch follows this list).
  • Move the Quantized and Prim ops out of classes into module-level functions. This eliminates the need for `isfunction` checks, speeding up the registration process by 60%.
    • Remove the outdated unit test `test_symbolic_opset9.py`
  • Symbolic function registration now happens at init time instead of on the first call to `_run_symbolic_function`.
  • Registration is fast:
    ![image](https://user-images.githubusercontent.com/11205048/189164959-f3fca173-19bc-4682-b150-f13a586387bf.png)
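As a rough illustration of the decorator-based registration this PR adopts, here is a minimal, self-contained sketch. The `onnx_symbolic` decorator and the registry dictionary below are simplified stand-ins, not the actual `torch.onnx._internal` implementation:

```python
from typing import Callable, Dict

# Simplified stand-in registry: maps "aten_name@opset" -> symbolic function.
_SYMBOLIC_REGISTRY: Dict[str, Callable] = {}

def onnx_symbolic(*aten_names: str, opset: int = 9) -> Callable:
    """Register the decorated symbolic function under one or more aten names."""
    def decorator(fn: Callable) -> Callable:
        for name in aten_names:
            _SYMBOLIC_REGISTRY[f"{name}@{opset}"] = fn
        return fn  # the function itself is returned unchanged
    return decorator

@onnx_symbolic("aten::relu", "aten::relu_")  # one function, several aten names
def relu(g, self):
    # Emit the ONNX Relu op for both the out-of-place and in-place aten variants.
    return g.op("Relu", self)

assert _SYMBOLIC_REGISTRY["aten::relu@9"] is _SYMBOLIC_REGISTRY["aten::relu_@9"]
```

Because one function can now serve several aten names, the alias entries no longer need to be exported separately, which is why some names were removed from `__all__`.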

@pytorch-bot added the `release notes: onnx` label (torch.onnx related changes that should show up in the release notes) Sep 1, 2022

@justinchuby marked this pull request as draft September 1, 2022 23:59
@justinchuby added the `module: onnx`, `triaged`, and `topic: new features` labels Sep 1, 2022
justinchuby added a commit that referenced this pull request Sep 2, 2022
ghstack-source-id: 475c90c
Pull Request resolved: #84448
justinchuby added a commit that referenced this pull request Sep 2, 2022
## Summary

This change introduces a new registry for ONNX symbolic functions. The `SymbolicRegistry` class in `torch.onnx._internal.registration` replaces the dictionary and the various functions defined in `torch.onnx.symbolic_registry`.

The new registry

- Has faster lookup by storing each function only under the opset version in which it is defined (a simplified sketch of this lookup scheme follows the list)
- Is easier to manage and interact with due to its class design
- Builds the foundation for the more flexible registration process detailed in #83787
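A simplified sketch of that lookup scheme, under the assumption that a request for opset N falls back to the closest opset at or below N in which the function is registered (the class and method names here are illustrative, not the exact `SymbolicRegistry` API):

```python
from typing import Callable, Dict, Optional

class SimpleSymbolicRegistry:
    """Illustrative registry: each op name maps to {defining opset: function}."""

    def __init__(self) -> None:
        self._registry: Dict[str, Dict[int, Callable]] = {}

    def register(self, name: str, opset: int, fn: Callable) -> None:
        # Store the function only under the opset version it is defined in.
        self._registry.setdefault(name, {})[opset] = fn

    def get(self, name: str, opset: int) -> Optional[Callable]:
        # Fall back to the closest registered opset at or below the target.
        versions = self._registry.get(name, {})
        candidates = [v for v in versions if v <= opset]
        return versions[max(candidates)] if candidates else None

registry = SimpleSymbolicRegistry()
registry.register("aten::relu", 9, lambda g, x: g.op("Relu", x))
assert registry.get("aten::relu", 13) is not None  # opset 13 falls back to opset 9
```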

Implementation changes

- **Breaking**: Remove `torch.onnx.symbolic_registry`
- `register_custom_op_symbolic` and `unregister_custom_op_symbolic` in utils keep their API for backward compatibility (see the usage sketch after this list)
- Update `_onnx_supported_ops.py` for doc generation to include quantized ops
- Update the code that registers Python ops in `torch/csrc/jit/passes/onnx.cpp`
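To illustrate the unchanged public API, here is a minimal usage sketch of `register_custom_op_symbolic`; the `aten::asinh` mapping is just an example, not part of this PR:

```python
import torch
from torch.onnx import register_custom_op_symbolic

def asinh_symbolic(g, input):
    # Map aten::asinh to the corresponding ONNX operator.
    return g.op("Asinh", input)

# Same call signature as before: (qualified op name, symbolic function, opset).
register_custom_op_symbolic("aten::asinh", asinh_symbolic, opset_version=9)
```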

## Profiling results

Execution time drops by 0.1 seconds, and time spent in `_run_symbolic_function` drops by 34%. Measured on the AlexNet example from the public documentation.
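The call trees below look like pyinstrument output; assuming pyinstrument is the profiler used, a minimal way to collect a comparable trace on the AlexNet example would be:

```python
import torch
import torchvision
from pyinstrument import Profiler  # assumes pyinstrument is installed

model = torchvision.models.alexnet()
dummy_input = torch.randn(1, 3, 224, 224)

profiler = Profiler()
profiler.start()
torch.onnx.export(model, dummy_input, "alexnet.onnx")
profiler.stop()
print(profiler.output_text(unicode=True, color=False))
```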

### After
```
   └─ 1.641 export  <beartype(torch.onnx.utils.export) at 0x7f19be17f790>:1
      └─ 1.641 export  torch/onnx/utils.py:185
         └─ 1.640 _export  torch/onnx/utils.py:1331
            ├─ 0.889 _model_to_graph  torch/onnx/utils.py:1005
            │  ├─ 0.478 _optimize_graph  torch/onnx/utils.py:535
            │  │  ├─ 0.214 PyCapsule._jit_pass_onnx_graph_shape_type_inference  <built-in>:0
            │  │  │     [2 frames hidden]  <built-in>
            │  │  ├─ 0.190 _run_symbolic_function  torch/onnx/utils.py:1670
            │  │  │  └─ 0.145 Constant  torch/onnx/symbolic_opset9.py:5782
            │  │  │     └─ 0.139 _graph_op  torch/onnx/_patch_torch.py:18
            │  │  │        └─ 0.134 PyCapsule._jit_pass_onnx_node_shape_type_inference  <built-in>:0
            │  │  │              [2 frames hidden]  <built-in>
            │  │  └─ 0.033 [self]  
```

### Before
![image](https://user-images.githubusercontent.com/11205048/188032302-688d881e-860d-4046-bdba-90da54233576.png)

### Start up time

The startup process takes 0.03 seconds. Calls to `inspect` will be eliminated when we switch to using decorators for registration in #84448.

![image](https://user-images.githubusercontent.com/11205048/188208910-250f0434-475d-4872-9abc-781535519305.png)
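For context on where those `inspect` calls come from, a rough sketch of the module-scanning approach that decorator-based registration replaces (simplified; the real code differs):

```python
import inspect
from torch.onnx import symbolic_opset9

# Scan a symbolic opset module and collect every public function it defines.
# Doing this per module at startup is what the decorators make unnecessary.
symbolic_functions = {
    name: fn
    for name, fn in inspect.getmembers(symbolic_opset9, inspect.isfunction)
    if not name.startswith("_")
}
```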


[ghstack-poisoned]
justinchuby added a commit to justinchuby/pytorch that referenced this pull request Sep 20, 2022
ghstack-source-id: 35816aa
Pull Request resolved: pytorch#84448

Snapshot
justinchuby added a commit to justinchuby/pytorch that referenced this pull request Sep 21, 2022
ghstack-source-id: 776c4a8
Pull Request resolved: pytorch#84448

Snapshot
@BowenBao (Collaborator) left a comment


LGTM, with a follow-up comment on `_export`. I'd suggest adding `_export` to ensure `__all__` is unchanged. It is safer to avoid any potential backward breakage until the inter-calling of symbolic functions is solved in later work.

@justinchuby (Author) commented:

> adding `_export` to ensure `__all__` is unchanged

Done

@justinchuby added the `large` and `ciflow/trunk` labels Sep 22, 2022
justinchuby added a commit to justinchuby/pytorch that referenced this pull request Sep 22, 2022
ghstack-source-id: bcc1ca5
Pull Request resolved: pytorch#84448

Snapshot
@justinchuby (Author) commented:

@pytorchbot merge -g

@pytorchmergebot commented:

@pytorchbot successfully started a merge job. Check the current status here.
The merge job was triggered with the green (-g) flag. This means that your change will be merged once all checks on your PR have passed (ETA: 0-4 Hours). If this is not the intended behavior, feel free to use some of the other merge options in the wiki.
Please reach out to the PyTorch DevX Team with feedback or questions!

@facebook-github-bot deleted the gh/justinchuby/9/head branch September 25, 2022 14:19
mehtanirav pushed a commit that referenced this pull request Oct 4, 2022
Pull Request resolved: #84382
Approved by: https://github.com/AllenTiTaiWang, https://github.com/BowenBao
mehtanirav pushed a commit that referenced this pull request Oct 4, 2022
Pull Request resolved: #84448
Approved by: https://github.com/AllenTiTaiWang, https://github.com/BowenBao

Labels

ciflow/trunk, cla signed, large, Merged, module: onnx, open source, release notes: onnx, skip-pr-sanity-checks, topic: bc breaking, triaged


Development

Successfully merging this pull request may close these issues.

[ONNX] Future registration process operators to accommodate quantized:: ops and more

7 participants