
Add decomposition for dynamo_export + ExportedProgram and remove None from input #112444

Closed

Conversation

thiagocrepaldi
Collaborator

@thiagocrepaldi thiagocrepaldi commented Oct 30, 2023

Stack from ghstack (oldest at bottom):

This PR introduces the ability to produce GraphModules containing only Core ATen IR ops by applying decompositions. It also removes `None` from user inputs, as ONNX does not support them.
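The `None` removal amounts to filtering user inputs before they reach the ONNX graph. A minimal sketch (the helper name `remove_none_inputs` is illustrative, not the exporter's actual API):

```python
# Hypothetical sketch of dropping `None` user inputs ahead of ONNX export;
# ONNX has no representation for None, so such inputs are removed and the
# model signature is adapted accordingly.
from typing import Any, Tuple


def remove_none_inputs(args: Tuple[Any, ...]) -> Tuple[Any, ...]:
    # Keep only non-None positional inputs.
    return tuple(a for a in args if a is not None)


print(remove_none_inputs((1, None, 2)))  # (1, 2)
```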

Tests for these features will be executed once #112289 is merged; for reference, they are shown below:

```python
    def test_log_sigmoid(self):
        # This produces op as `torch.ops.aten.log_sigmoid_forward`, instead of the more
        # conventional `torch.ops.aten.log_sigmoid`.
        class Model(torch.nn.Module):
            def __init__(self):
                super().__init__()
                self.m = torch.nn.LogSigmoid()

            def forward(self, x):
                return self.m(x)

        input = torch.randn(2)
        self.run_test_with_fx_to_onnx_exporter_and_onnx_runtime(
            Model(), (input,), model_type=self.model_type
        )

    def test_none_input(self):
        class NoneInputModel(torch.nn.Module):
            def forward(
                self, x: torch.Tensor, y: Optional[torch.Tensor], z: torch.Tensor
            ):
                if y is None:
                    return x + z
                return x + y + z

        self.run_test_with_fx_to_onnx_exporter_and_onnx_runtime(
            NoneInputModel(),
            (torch.randn(1, 2), None, torch.randn(1, 2)),
            model_type=self.model_type,
        )
```

@pytorch-bot pytorch-bot bot added the release notes: onnx torch.onnx related changes that should show up in the release notes label Oct 30, 2023
@pytorch-bot

pytorch-bot bot commented Oct 30, 2023

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/112444

✅ No Failures

As of commit e160a05 with merge base c1e2ccd:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@@ -42,6 +42,9 @@ def generate_fx(
# args=model_args, # type: ignore[arg-type]
# kwargs=model_kwargs, # type: ignore[arg-type]
# )
# TODO: input_adapter and output_adapter should be based on pytree flatten/unflatten API?

model = model.run_decompositions(options.decomposition_table)
Collaborator

Put it in pre_export_passes?

Collaborator Author

This is tricky. Although we have implemented decomposition as a GraphModule pass for the torch._dynamo.export fx_tracer, torch.export does decomposition at the ExportedProgram level. In the torch.export + ExportedProgram flow, we run decompositions before we start running passes over the GraphModule (via self.pre_export_passes).

In fact, at generate_fx we call `return self.pre_export_passes(...)`, passing original_model=model and fx_module=model.graph_module. Note that we extract the GraphModule from ExportedProgram.graph_module after the decomposition was run. Only then do we start running GraphModule passes (and ignore original_model).

In order to move the ExportedProgram decomposition to pre_export_passes, we would have to

  1. Change the `model = model.run_decompositions(options.decomposition_table)` call (which assigns an ExportedProgram to another ExportedProgram) to `fx_module = original_model.run_decompositions(options.decomposition_table).graph_module` (which assigns a `GraphModule` coming from an `ExportedProgram`; that is `generate_fx`'s role, not `pre_export_passes`'),
  2. Always run `fx_module: GraphModule = exported_program.run_decompositions(...)` as the first pass, and
  3. Remove fx_module as an input to pre_export_passes, meaning generate_fx would no longer generate the GraphModule.

Although technically possible, it mixes the responsibilities of generate_fx and pre_export_passes.
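The responsibility split being discussed can be caricatured with stubs (all names and classes here are hypothetical illustrations, not the real exporter internals): generate_fx owns the ExportedProgram-to-GraphModule conversion, including decomposition, while pre_export_passes only ever sees a GraphModule.

```python
# Hypothetical stubs illustrating the generate_fx / pre_export_passes split.
class FakeGraphModule:
    pass


class FakeExportedProgram:
    def __init__(self):
        self.graph_module = FakeGraphModule()

    def run_decompositions(self, table=None):
        # Returns a new ExportedProgram whose graph uses Core ATen ops.
        return FakeExportedProgram()


def pre_export_passes(fx_module: FakeGraphModule) -> FakeGraphModule:
    # GraphModule-level passes only; never touches the ExportedProgram.
    return fx_module


def generate_fx(model: FakeExportedProgram) -> FakeGraphModule:
    model = model.run_decompositions(None)  # ExportedProgram -> ExportedProgram
    return pre_export_passes(model.graph_module)  # only now: GraphModule passes


print(type(generate_fx(FakeExportedProgram())).__name__)  # FakeGraphModule
```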

Collaborator

Okay. We are missing out on diagnostics for decomp, though.

Collaborator Author

How should we address this?

Collaborator

Need to think more on that. It's a good opportunity to explore how to track low-level info such as this from other modules.

torch/onnx/_internal/fx/torch_export_graph_extractor.py (outdated review thread, resolved)

@thiagocrepaldi
Collaborator Author

@pytorchbot merge

@pytorch-bot pytorch-bot bot added the ciflow/trunk Trigger trunk jobs on your pull request label Nov 1, 2023
@pytorchmergebot
Copy link
Collaborator

Merge failed

Reason: PR 112444 is out of sync with the corresponding revision e22e881 on branch origin/gh/thiagocrepaldi/9/orig that would be merged into main. This usually happens because there is a non-ghstack change in the PR. Please sync them and try again (e.g. make the changes on origin/gh/thiagocrepaldi/9/orig and run ghstack).


@thiagocrepaldi
Collaborator Author

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).


@facebook-github-bot facebook-github-bot deleted the gh/thiagocrepaldi/9/head branch November 5, 2023 15:26
pytorchmergebot pushed a commit that referenced this pull request Nov 6, 2023
The torch.export API was introduced in PyTorch 2.1, and the term "export" became overloaded given the already existing torch.onnx.export API.

The torch.onnx.dynamo_export API was introduced in PyTorch 2.0 and exposed a torch.onnx.ExportOutput, which can now be confused with the output of torch.export.export.

To prevent such ambiguity and standardize names around the new torch.export.ExportedProgram, this PR renames torch.onnx.ExportOutput to torch.onnx.ONNXProgram.

Pull Request resolved: #112263
Approved by: https://github.com/BowenBao
ghstack dependencies: #112444
xuhancn pushed a commit to xuhancn/pytorch that referenced this pull request Nov 7, 2023
xuhancn pushed a commit to xuhancn/pytorch that referenced this pull request Nov 7, 2023
Skylion007 pushed a commit to Skylion007/pytorch that referenced this pull request Nov 14, 2023
Skylion007 pushed a commit to Skylion007/pytorch that referenced this pull request Nov 14, 2023
andreigh pushed a commit to andreigh/pytorch that referenced this pull request Nov 19, 2023
Labels
ciflow/trunk Trigger trunk jobs on your pull request Merged open source release notes: onnx torch.onnx related changes that should show up in the release notes

4 participants