[dtensor] remove torchgen function schema and parse manually #90106
Conversation
This PR gets rid of torchgen FunctionSchema parsing and parses the schema manually. It should resolve the torchgen packaging issue and also provide some perf wins when running DTensor eagerly. [ghstack-poisoned]
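For illustration, a rough sketch of what replacing the torchgen parse with direct schema inspection could look like (hypothetical structure, not the actual PR code):

```python
import torch

op = torch.ops.aten.add_.Tensor

# Old approach (removed by this PR): round-trip the schema string through
# torchgen's parser, which pulls in the torchgen package at runtime:
#
#   from torchgen.model import FunctionSchema, SchemaKind
#   parsed = FunctionSchema.parse(str(op._schema))
#   is_inplace = parsed.kind() == SchemaKind.inplace
#
# New approach: read the fields off the registered schema object directly.
qualified_name = op._schema.name           # e.g. "aten::add_"
is_inplace = qualified_name.endswith("_")  # ATen in-place ops end with "_"
print(qualified_name, is_inplace)
```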
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/90106. Note: links to docs will display an error until the doc builds have completed. ❌ 1 failure as of commit fa891ef. This comment was automatically generated by Dr. CI and updates every 15 minutes.
This PR gets rid of torchgen FunctionSchema parsing and parses the schema manually. It should resolve the torchgen packaging issue and also provide some perf wins when running DTensor eagerly.
ghstack-source-id: df45231ebe398f97bdfe75de05adecae1faed1ab
Pull Request resolved: #90106
```python
self.is_out_variant = (
    schema_kind == SchemaKind.out  # pyre-ignore [16] pyre bad at enum
)
# simple analysis of function schema to determine
```
Conceptual: Does this PR regress the computation of `self.is_inplace` and `self._is_out_variant` in some cases because you are using a heuristic instead of `SchemaKind`? I am curious how you think about the tradeoff of getting rid of the torchgen dependency versus possibly having incorrect values for `is_inplace` and `is_out_variant` in some cases (if at all).
Good point. From the currently registered operator set, I think it should work fine, as all of the op signatures follow the same naming convention for the in-place and out variants (the trailing `_` and the `.out` overload), so this heuristic is relatively safe to use and should not regress existing registered operators. However, it does rely on operators following the same naming convention, which could change. Given that the ATen op set is relatively stable at this point, I think we can go with this approach, as it's simple and very fast without parsing FunctionSchema; if op signatures ever change, the heuristic will need to be adapted to accommodate them.
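For concreteness, a minimal sketch of the kind of name-based heuristic being discussed (hypothetical helper, not the exact PR code):

```python
import torch

def classify_variant(schema):
    # ATen naming convention: in-place ops end with a trailing "_"
    # (e.g. "aten::add_"), and out variants use the "out" overload
    # (e.g. "aten::add.out").
    is_inplace = schema.name.endswith("_")
    is_out_variant = schema.overload_name == "out"
    return is_inplace, is_out_variant

# Spot-check the convention on a few registered overloads:
for op in (torch.ops.aten.add.Tensor,
           torch.ops.aten.add_.Tensor,
           torch.ops.aten.add.out):
    print(op._schema.name, op._schema.overload_name,
          classify_variant(op._schema))
```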
I am not sure I have the context to approve, so I will defer to someone else.
```python
out_dt = cast(dtensor.DTensor, kwargs[arg.name])
out_dt._spec = cast(DTensorSpec, output_specs[spec_idx])
out_dts.append(out_dt)
spec_idx += 1
return tuple(out_dts) if len(out_dts) > 1 else out_dts[0]
```
General comment: This `return` assumes `len(out_dts) >= 1`, or else there will be an index-out-of-bounds error when accessing `out_dts[0]`. I am guessing that this `len(out_dts) >= 1` invariant always holds, but I just wanted to point this out. Maybe you want an assert?
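For example, the kind of assert being suggested (a sketch with placeholder types, not the actual code):

```python
from typing import Any, Sequence, Tuple, Union

def unwrap_outputs(out_dts: Sequence[Any]) -> Union[Any, Tuple[Any, ...]]:
    # Make the len(out_dts) >= 1 invariant explicit instead of implicit.
    assert len(out_dts) >= 1, "expected at least one output DTensor"
    return tuple(out_dts) if len(out_dts) > 1 else out_dts[0]
```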
I'll land this as is, since I don't want to trigger another full CI run; the assert is added in the follow-up PR #90241.
nvm, might still need to change something, will update.
This looks good to me, so I will stamp to unblock. Feel free to wait for others' review.
Overall this looks good to me. Do we have an estimate of the potential perf win?
I don't have one yet, as I didn't profile this, but my suspicion is that it could yield around a 20-30% perf win on DTensor execution. Feel free to try it out; I'll focus on making other parts faster if possible.
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Merge failed. Reason: 2 additional jobs have failed; the first few of them are: linux-binary-manywheel, linux-binary-manywheel / manywheel-py3_7-cuda11_6-test / build. Details for Dev Infra team: raised by workflow job.
@pytorchbot merge -f "failure not related"
Merge started. Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
[dtensor] remove torchgen function schema and parse manually (#90106): This PR gets rid of torchgen FunctionSchema parsing and parses the schema manually; it should resolve the torchgen packaging issue and also provide some perf wins when running DTensor eagerly. Pull Request resolved: pytorch#90106. Approved by: https://github.com/awgu
This is a reland of #89845 with nothing changed. This should avoid the internal breakage now that `DTensor` does not import `torchgen` (#90106). Pull Request resolved: #90562. Approved by: https://github.com/fduwjj