
[ONNX] Handle multiple outputs of node #97301

Closed

titaiwangms opened this issue Mar 21, 2023 · 3 comments
Labels
module: onnx (Related to torch.onnx) · onnx-triaged (triaged by ONNX team) · triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)

Comments

@titaiwangms
Collaborator

fx_name_to_onnxscript_value[node.name] = output

Let's say we have an op like topk, which has multiple outputs. Currently, we treat the whole output as a single unit, so a graph that routes individual output elements to different consumer nodes is not supported.
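As a quick eager-mode illustration (plain user code, not exporter internals), topk is exactly such an op: a single call produces two tensors, and each may flow to a different downstream node:

import torch

x = torch.randn(8)
# One op call, two outputs; each can feed a different consumer.
values, indices = torch.topk(x, 3)
print(values.shape, indices.shape)  # torch.Size([3]) torch.Size([3])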

cc @justinchuby @BowenBao

@titaiwangms titaiwangms added the module: onnx (Related to torch.onnx) and onnx-triaged (triaged by ONNX team) labels Mar 21, 2023
@zou3519 zou3519 added the triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module) label Mar 22, 2023
@titaiwangms
Collaborator Author

The FX graph handles multiple outputs with getitem nodes, so each output is recorded independently.

For example:

class TopKModel(torch.nn.Module):
    def forward(self, x):
        values, _ = torch.topk(x, 3)
        return torch.sum(values)

which traces to the following FX graph:

class <lambda>(torch.nn.Module):
    def forward(self, arg0: f32[s0]):
        # File: /home/titaiwang/pytorch/test/onnx/test_fx_to_onnx.py:74, code: values, _ = torch.topk(x, 3)
        topk = torch.ops.aten.topk.default(arg0, 3);  arg0 = None
        getitem: f32[3] = topk[0]
        getitem_1: i64[3] = topk[1];  topk = None

        # File: /home/titaiwang/pytorch/test/onnx/test_fx_to_onnx.py:75, code: return torch.sum(values)
        sum_1: f32[] = torch.ops.aten.sum.default(getitem);  getitem = None
        return sum_1
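One possible way for the exporter to use this structure (a minimal sketch, not the actual exporter code: it assumes a dict like the fx_name_to_onnxscript_value mapping quoted above, where a multi-output op stores its full tuple of onnxscript values and each getitem node resolves to one element; record_node_output is an illustrative name):

import operator

def record_node_output(node, output, fx_name_to_onnxscript_value):
    # Sketch only: resolve fx getitem nodes to the element of the
    # multi-output tuple recorded under the producing node's name.
    if node.op == "call_function" and node.target is operator.getitem:
        source_node, index = node.args
        fx_name_to_onnxscript_value[node.name] = (
            fx_name_to_onnxscript_value[source_node.name][index]
        )
    else:
        # Single- or multi-output: store whatever the op produced.
        fx_name_to_onnxscript_value[node.name] = output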

@BowenBao
Collaborator

Is there a test case covering this already? If not, could you add the example above?

@titaiwangms
Collaborator Author

Yes, I plan to include it in #96350 by extending test_multiple_outputs_op_with_evaluator in test_fx_to_onnx.py.
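For reference, a rough shape such a check could take (hedged sketch; the actual test in #96350 presumably runs the full FX-to-ONNX export, which is not reproduced here — this only asserts that the traced aten-level graph exposes each topk output through a getitem node):

import operator
import torch
from torch.fx.experimental.proxy_tensor import make_fx

def check_topk_getitem_structure():
    def f(x):
        values, _ = torch.topk(x, 3)
        return torch.sum(values)

    gm = make_fx(f)(torch.randn(8))
    # Multi-output aten ops should surface each result via getitem.
    getitems = [n for n in gm.graph.nodes
                if n.op == "call_function" and n.target is operator.getitem]
    assert len(getitems) == 2, "expected one getitem per topk output"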
