[dtensor][2/n] use op overload instead of function schema #107306

Closed · wants to merge 8 commits

Conversation

@wanchaol (Contributor) commented on Aug 16, 2023:

Stack from ghstack (oldest at bottom):

The function schema doesn't give us anything extra, since we can also get the schema from op._schema. Including the op directly in OpSchema makes it easier for sharding propagation to do fake execution, and in principle it should also make hash comparison faster: instead of hashing the function schema, we just hash id(op), which is constant.

This PR is just a refactor to include the op in OpSchema instead of the function schema; there are no other logic changes.
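
To make this concrete, here is a minimal sketch (with assumed names, not the actual DTensor code) of what carrying the OpOverload on the schema and hashing by its identity could look like:

from dataclasses import dataclass
from typing import Any, Dict, Tuple

import torch


@dataclass
class OpSchemaSketch:
    # hypothetical stand-in for dtensor's OpSchema
    op: torch._ops.OpOverload          # e.g. torch.ops.aten.add.Tensor
    args_schema: Tuple[Any, ...]
    kwargs_schema: Dict[str, Any]

    def __hash__(self) -> int:
        # id(op) is constant for a given overload, so this is cheap compared
        # to hashing the full function schema (real code may mix in the args)
        return hash(id(self.op))

    @property
    def func_schema(self):
        # the function schema is still reachable whenever it is needed
        return self.op._schema

With the op object itself on the schema, the sharding propagator can also invoke the op directly when it needs to do a fake execution.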

@pytorch-bot commented on Aug 16, 2023:

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/107306

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit af9af44 with merge base 6dc56d3:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

Comment on lines -154 to -159
def __post_init__(self) -> None:
    # simple analysis of function schema to determine
    # if this is an inplace/out variant, it might not
    # be entirely correct, but it's good enough for now.
    self.is_inplace = self.func_schema.name[-1] == "_"
    self.is_out_variant = "out" in self.func_schema.overload_name
A contributor commented:
Doesn't this make the check run every time in every DTensor op dispatch? Or can we not cache the values of these two, so using __post_init__ doesn't help that much anyway?

@wanchaol (Contributor, Author) replied:

This __post_init__ just tries to analyze whether the operator we are dispatching is an in-place op or an out variant, but we only need that information for certain ops (e.g. when checking whether the op is in-place or an out variant in the pointwise rules), so analyzing it every time we form an OpSchema is redundant. That's why I am deleting it and putting this logic inside the op rules directly.

The caching only caches the OpSchema's hash, and because we don't hash these two fields anyway, the caching behavior does not change.
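
For illustration, the two checks can become small on-demand helpers that only the rules which care about them (e.g. the pointwise ones) call; this is a sketch with assumed names, not the PR's exact code:

import torch


def is_inplace_op(op: torch._ops.OpOverload) -> bool:
    # aten in-place variants end with a trailing underscore, e.g. "aten::add_"
    return op._schema.name[-1] == "_"


def is_out_variant_op(op: torch._ops.OpOverload) -> bool:
    # out variants carry "out" in their overload name, e.g. aten::add.out
    return "out" in op._schema.overload_name


# example usage inside a pointwise rule:
assert is_inplace_op(torch.ops.aten.add_.Tensor)
assert is_out_variant_op(torch.ops.aten.add.out)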

@fduwjj (Contributor) left a comment:

Overall, this looks good to me, so unblocking. The only question I have is: why did we decide to remove is_inplace and is_out_variant from OpSchema?

pytorchmergebot pushed a commit that referenced this pull request Sep 13, 2023
This PR switches the usage of fx's shape prop TensorMetadata to dtensor's own dedicated TensorMeta. This is because DTensor only cares about three fields (shape, stride, and dtype); all other fields are unnecessary and can be inferred from the local tensor directly. This significantly simplifies how we handle tensor metadata, since we no longer carry the other fields.
Pull Request resolved: #108261
Approved by: https://github.com/fduwjj
ghstack dependencies: #107306
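
A minimal sketch of such a dedicated metadata record, assuming a NamedTuple-style container (illustrative names, not necessarily the exact upstream definition):

from typing import NamedTuple, Tuple

import torch


class TensorMetaSketch(NamedTuple):
    shape: torch.Size
    stride: Tuple[int, ...]
    dtype: torch.dtype


def meta_from_local(local_tensor: torch.Tensor) -> TensorMetaSketch:
    # device, layout, requires_grad, etc. can be read off the local tensor
    # when needed, so they are not duplicated in the metadata record
    return TensorMetaSketch(local_tensor.shape, local_tensor.stride(), local_tensor.dtype)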
pytorchmergebot pushed a commit that referenced this pull request Sep 13, 2023
We were using make_fx for strategy-based propagation so that we could get a graph and the shape-related metadata, but that is overkill for sharding propagation. This change refactors strategy propagation to remove the graph-based path and instead uses the op to index into the strategy functions.

We also use a fake shape prop instead of relying on fx tracing for shape/stride propagation.

For a possible future decomposed propagation, we will exercise a different codepath to enable it.

NOTE that this also greatly reduces latency:
1. First-time dtensor operations when populating the cache: the first iteration becomes fast again.
2. test_dtensor_ops.py runtime drops again; the whole test suite now finishes within 2-3 minutes.
Pull Request resolved: #108262
Approved by: https://github.com/fduwjj
ghstack dependencies: #107306, #108261
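
As a rough illustration of op-indexed strategy propagation (hypothetical names, not the actual registry from the PR), a plain dictionary keyed by the OpOverload can replace the graph-based path:

from typing import Callable, Dict

import torch

# map each OpOverload to the function that computes its sharding strategy
OP_STRATEGY_FUNCS: Dict[torch._ops.OpOverload, Callable] = {}


def register_op_strategy(op: torch._ops.OpOverload) -> Callable:
    def wrapper(strategy_func: Callable) -> Callable:
        OP_STRATEGY_FUNCS[op] = strategy_func
        return strategy_func
    return wrapper


@register_op_strategy(torch.ops.aten.mm.default)
def mm_strategy(op_schema):
    # compute and return the sharding strategy for aten.mm here
    ...


def propagate_strategy(op_schema):
    # a dict lookup on the op replaces building and walking an fx graph
    strategy_func = OP_STRATEGY_FUNCS[op_schema.op]
    return strategy_func(op_schema)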
@facebook-github-bot deleted the gh/wanchaol/342/head branch on September 16, 2023 at 14:23.