
Conversation

titaiwangms (Contributor) commented Jun 1, 2023

titaiwangms added the "module: torchlib" (Related to the torch/aten function lib in development) label Jun 1, 2023
titaiwangms added the "help wanted" (Extra attention is needed) label Jun 2, 2023
titaiwangms changed the title from "Add op (hsatck) | feat (torchlib)" to "Add op (hsatck & vstack) | feat (torchlib)" Jun 2, 2023
titaiwangms (Contributor, Author) commented Jun 2, 2023

The op works in test_fx_op_consistency.py, so I figure there might be a bug inside graph_executor on Sequence inputs. The op is not even used in the FX graph test in test_fx_op_consistency.py, though...

justinchuby changed the title from "Add op (hsatck & vstack) | feat (torchlib)" to "Add op (hsatck & vstack) | feat(torchlib)" Jun 3, 2023
fatcat-z (Contributor) commented Jun 5, 2023

Currently, for the issue that an if/else block may return different shapes/types, the alternative is to create two functions with different input types for the same aten op, just like aten_bitwise_not. Could you please change this function as well?
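
A minimal sketch of the two-function pattern referenced here, loosely modeled on aten_bitwise_not; the function names, type aliases, and ONNX ops below are illustrative assumptions, not the actual torchlib source:

```python
# Hypothetical sketch: register two functions for the same ATen op, split by
# input dtype, so each function returns a consistent shape/type instead of
# branching inside a single if/else.
@torch_op("aten::bitwise_not")
def aten_bitwise_not(self: TInt) -> TInt:
    """bitwise_not(Tensor self) -> Tensor"""
    # Integer tensors: ONNX BitwiseNot (available since opset 18).
    return op.BitwiseNot(self)


@torch_op("aten::bitwise_not")
def aten_bitwise_not_bool(self: BOOL) -> BOOL:
    """bitwise_not(Tensor self) -> Tensor"""
    # Boolean tensors: logical Not.
    return op.Not(self)
```

The exporter-side dispatcher can then pick the overload whose input types match, which keeps each ONNX function's output shape and type stable.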

titaiwangms added commits to pytorch/pytorch that referenced this pull request Jun 6, 2023
titaiwangms added a commit that referenced this pull request Jun 6, 2023
…767)

op.SequenceMap comes to the rescue for element iteration cases. (Maybe
it's just me that didn't know this...)
This PR helps complete #762
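
For reference, a minimal sketch of the SequenceMap pattern this commit message describes, stitched together from the snippets reviewed below; the reshape_to_1d body is a hedged reconstruction using onnxscript's @graph() decorator for subgraph attributes, not necessarily the exact merged code:

```python
# Hypothetical sketch: op.SequenceMap applies a subgraph to each element of a
# sequence input, avoiding an explicit Loop over SequenceAt calls.
@torch_op("aten::atleast_1d")
def aten_atleast_1d(self: Sequence[TTensor]) -> TTensor:
    """atleast_1d(Tensor self) -> Tensor"""

    @graph()
    def reshape_to_1d(tensor):
        # Promote 0-d tensors to shape (1,); leave higher-rank tensors as-is.
        shape = op.Shape(tensor)
        if op.Size(shape) == 0:
            tensor = op.Reshape(tensor, op.Constant(value_ints=[1]))
        return tensor

    # SequenceMap runs reshape_to_1d once per element and returns a new sequence.
    return op.SequenceMap(self, body=reshape_to_1d)
```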
titaiwangms added commits to pytorch/pytorch that referenced this pull request Jun 7, 2023
@torch_op("aten::atleast_1d")
def aten_atleast_1d(self: Sequence[TTensor]) -> TTensor:
"""atleast_1d(Tensor self) -> Tensor"""

Collaborator

👀

Contributor

Looks like it works.

titaiwangms (Contributor, Author)

Yes, it works. The error is something else. For this particular case, if we need to move aten::atleast_1d to a private shared node, it would be a whole function...

justinchuby (Collaborator) commented Jun 8, 2023

I would then lift the whole function. Just don't call one aten in another, because that will create dependency assumptions and limit our ability to refactor.

    return op.SequenceMap(self, body=reshape_to_1d)


@torch_op("aten::atleast_1d", trace_only=True)
Collaborator

Possible to make it not trace-only, so that the function name is retained in the graph?

titaiwangms (Contributor, Author)

Would nesting affect performance?

),
xfail(
    "vstack",
    reason="fixme: A bug of constant-propagation optimization within the subgraph, we can avoid it by turning off graph-optimizations in session options",
Collaborator

Should we turn it off in the test and file a bug to ORT?
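
For reference, turning graph optimizations off for the test session would look roughly like the following; the model path and the way the test harness wires in session options are illustrative assumptions, but SessionOptions and GraphOptimizationLevel are standard onnxruntime API:

```python
import onnxruntime as ort

# Work around the constant-propagation bug in the subgraph by disabling
# ORT graph optimizations for this session only.
session_options = ort.SessionOptions()
session_options.graph_optimization_level = ort.GraphOptimizationLevel.ORT_DISABLE_ALL

# "model.onnx" is a placeholder path for illustration.
session = ort.InferenceSession("model.onnx", sess_options=session_options)
```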


titaiwangms (Contributor, Author)

xfail actually works. I think CI breaks because of something else. Actually, @gramalingam spotted the bug, but I can still file an issue to track it.

titaiwangms merged commit 7f3da13 into microsoft:main Jun 8, 2023
titaiwangms added commits to pytorch/pytorch that referenced this pull request Jun 13–15, 2023
pytorchmergebot pushed a commit to pytorch/pytorch that referenced this pull request Jun 16, 2023

Labels

help wanted (Extra attention is needed), module: torchlib (Related to the torch/aten function lib in development)

Projects

None yet

Development

Successfully merging this pull request may close these issues.

4 participants