
[dtensor][4/n] don't use make_fx for strategy propagation #108262

Closed
wants to merge 3 commits

Conversation

wanchaol
Contributor

@wanchaol wanchaol commented Aug 30, 2023

Stack from ghstack (oldest at bottom):

We were using make_fx for strategy-based propagation so that we could get
a graph and the shape-related metadata, but that is overkill for the
sharding propagation purpose. This change refactors the strategy
propagation to remove the graph-based propagation and instead uses the op
to index into the strategy functions.

We also use a fake shape prop instead of relying on fx tracing for the
shape/stride propagation.

For a possible future decomposed propagation, we will exercise a
different codepath to enable that.

NOTE that this also greatly reduces latency:

  1. First-time DTensor operations when populating the cache: the first
     iteration becomes fast again.
  2. test_dtensor_ops.py run time: the whole test suite finishes within
     2-3 minutes again.
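
A minimal sketch of the approach described above, using hypothetical names (strategy_funcs, register_strategy, and propagate are illustrative, not the actual DTensor sharding propagator); only FakeTensorMode and the op_schema.gen_fake_args()/gen_fake_kwargs() helpers appear in this PR's diff:

    from typing import Callable, Dict

    import torch
    from torch._subclasses.fake_tensor import FakeTensorMode

    # Hypothetical registry: map each operator directly to its strategy function,
    # so propagation becomes a dict lookup instead of tracing a graph with make_fx.
    strategy_funcs: Dict[torch._ops.OpOverload, Callable] = {}

    def register_strategy(op):
        def wrapper(fn):
            strategy_funcs[op] = fn
            return fn
        return wrapper

    def propagate(op_schema):
        # 1. Index into the strategy functions by op (no graph, no make_fx).
        strategy = strategy_funcs[op_schema.op](op_schema)

        # 2. "Fake shape prop": run the op on fake tensors to recover the output
        #    shape/stride metadata without real compute or fx tracing.
        with FakeTensorMode():
            fake_args = op_schema.gen_fake_args()
            fake_kwargs = op_schema.gen_fake_kwargs()
            fake_out = op_schema.op(*fake_args, **fake_kwargs)

        return strategy, fake_out

The point is that both the strategy lookup and the shape/stride recovery are per-op and graph-free, which is where the cache-population and test-time wins come from.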

@pytorch-bot

pytorch-bot bot commented Aug 30, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/108262

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (1 Unrelated Failure)

As of commit 5bdc291 with merge base 6dc56d3:

UNSTABLE - The following job failed but was likely due to flakiness present on trunk and has been marked as unstable:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@wanchaol wanchaol added the release notes: distributed (dtensor) label Aug 30, 2023
return None

def _wrap_output_spec_tensor_meta(
self, output_spec: OutputSpecType, output_tensor_meta: object
Contributor

Does this mean output_spec is optional?

Contributor Author

I think it probably won't be None. I'll submit a follow-up PR to add an assertion here instead.
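
A hypothetical sketch of what that follow-up assertion might look like (not the actual change):

    # Hypothetical follow-up: fail loudly instead of accepting a None output_spec.
    assert output_spec is not None, "output_spec should not be None when wrapping tensor meta"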

Comment on lines +65 to +68
with FakeTensorMode():
    fake_args = op_schema.gen_fake_args()
    fake_kwargs = op_schema.gen_fake_kwargs()
    fake_out = op_schema.op(*fake_args, **fake_kwargs)
Contributor
This will help the view, split, and chunk ops a lot.
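
As a hypothetical illustration of that point (not part of this PR's diff): running a chunk op under FakeTensorMode yields the output shapes directly, with no real allocation and no fx tracing.

    import torch
    from torch._subclasses.fake_tensor import FakeTensorMode

    # Hypothetical illustration: under FakeTensorMode, ops like chunk produce
    # outputs that carry full shape/stride metadata but no real data.
    with FakeTensorMode():
        t = torch.empty(8, 16)
        chunks = torch.chunk(t, 4, dim=0)
        shapes = [c.shape for c in chunks]  # four chunks of torch.Size([2, 16])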

Contributor

@fduwjj fduwjj left a comment


LGTM

@wanchaol wanchaol added the ciflow/trunk and ciflow/periodic labels Sep 12, 2023
@wanchaol
Contributor Author

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here

@facebook-github-bot facebook-github-bot deleted the gh/wanchaol/351/head branch September 16, 2023 14:23