
Always create ShapeEnv, always apply unspec logic #103302

Closed
wants to merge 7 commits

Conversation


@ezyang ezyang commented Jun 9, 2023

Stack from ghstack (oldest at bottom):

Originally, my goal for this PR was to remove the dynamic_shapes tests in torch/_dynamo/variables/builder.py. However, one thing led to another, and it turned out easiest to do all of the following in one go:

  • Unconditionally allocate a ShapeEnv, no matter whether dynamic_shapes is enabled or not (torch/_dynamo/output_graph.py). There is a small adjustment to export in torch/_dynamo/eval_frame.py to account for the fact that a ShapeEnv always exists, even if you're not doing symbolic export. (A minimal sketch follows this list.)
  • Remove dynamic_shapes test from unspec logic (torch/_dynamo/variables/builder.py), the original goal
  • Specialize strides and storage offset if all sizes are dynamic (torch/fx/experimental/symbolic_shapes.py). This is required to deal with the unconditional ShapeEnv: if a ShapeEnv exists, fake tensor-ification may choose to allocate symbols. The idea is that with automatic_dynamic_shapes == False, Dynamo should never request dynamic sizes, but this invariant was not upheld for nontrivial strides/offset.
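
To make the first point concrete, here is a minimal sketch of the always-on ShapeEnv (illustrative only, not code from this PR; it assumes the ShapeEnv constructor's assume_static_by_default flag):

```
import torch
from torch._subclasses import FakeTensorMode
from torch.fx.experimental.symbolic_shapes import ShapeEnv

# A ShapeEnv is now allocated unconditionally, even when dynamic
# shapes are disabled.
shape_env = ShapeEnv(assume_static_by_default=True)
fake_mode = FakeTensorMode(shape_env=shape_env)

# Under a static-by-default policy, fake-ifying a tensor keeps sizes
# as concrete ints; symbols are only allocated when Dynamo asks.
fake = fake_mode.from_tensor(torch.randn(4, 8))
print(fake.shape)  # torch.Size([4, 8])
```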

The rest are just auxiliary fixups from the above:

  • Work around a bug in FakeTensorProp where sometimes it doesn't return a FakeTensor (torch/fx/passes/fake_tensor_prop.py); see [FAILING] Tighten FakeTensorProp assert to require only fake tensor returns. #103395 for follow-up
  • Make ShapeProp correctly handle int inputs (torch/fx/passes/shape_prop.py); a minimal repro sketch follows this list
  • Disable indexing strength reduction if assume_static_by_default is False (torch/_inductor/codegen/triton.py)
  • Fix hf_T5_generate to NOT toggle assume_static_by_default if dynamic shapes is not enabled (benchmarks/dynamo/common.py); technically this is not necessary anymore but it's in for safety.
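
To illustrate the ShapeProp item above, a minimal repro sketch (the traced function and values are made up, not from this PR):

```
import torch
from torch.fx import symbolic_trace
from torch.fx.passes.shape_prop import ShapeProp

def f(x, n):
    return x + n

gm = symbolic_trace(f)
# ShapeProp used to assume every placeholder was a Tensor; a plain
# int input like n=4 should now propagate without errors.
ShapeProp(gm).propagate(torch.randn(3), 4)
for node in gm.graph.nodes:
    print(node.name, node.meta.get("tensor_meta"))
```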

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @aakhundov @anijain2305

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

[ghstack-poisoned]

pytorch-bot bot commented Jun 9, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/103302

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit ca69264:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

ezyang added a commit that referenced this pull request Jun 9, 2023
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

ghstack-source-id: 7080719c9b5976719a9c6fa3fa4744655cb115d5
Pull Request resolved: #103302
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

cc voznesenskym penguinwu anijain2305 EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng Xia-Weiwen wenzhe-nrv jiayisunx ipiszy aakhundov

[ghstack-poisoned]
ezyang added a commit that referenced this pull request Jun 9, 2023
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

ghstack-source-id: a6857c6cce4d154cda021de23b6f1a86b80e4a21
Pull Request resolved: #103302
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

cc voznesenskym penguinwu anijain2305 EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng Xia-Weiwen wenzhe-nrv jiayisunx ipiszy aakhundov

[ghstack-poisoned]
ezyang added a commit that referenced this pull request Jun 9, 2023
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

ghstack-source-id: 550cba8a8f457dc569474fdd290d56fc12096e32
Pull Request resolved: #103302
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

cc voznesenskym penguinwu anijain2305 EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng Xia-Weiwen wenzhe-nrv jiayisunx ipiszy aakhundov

[ghstack-poisoned]
ezyang added a commit that referenced this pull request Jun 9, 2023
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

ghstack-source-id: 846160c0e503d7f49823ad1521c3478cbc48cc0c
Pull Request resolved: #103302
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

cc voznesenskym penguinwu anijain2305 EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng Xia-Weiwen wenzhe-nrv jiayisunx ipiszy aakhundov

[ghstack-poisoned]
@pytorch-bot pytorch-bot bot added the `release notes: fx` label Jun 10, 2023
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

cc voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng Xia-Weiwen wenzhe-nrv jiayisunx peterbell10 ipiszy ngimel yf225 aakhundov anijain2305

[ghstack-poisoned]
ezyang added a commit that referenced this pull request Jun 11, 2023
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

ghstack-source-id: 924899944d9c2c61c4e28edb5f8df14f824f215e
Pull Request resolved: #103302
@ezyang ezyang added the `ciflow/trunk` label Jun 11, 2023
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

cc voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng Xia-Weiwen wenzhe-nrv jiayisunx peterbell10 ipiszy ngimel yf225 aakhundov anijain2305

[ghstack-poisoned]
ezyang added a commit that referenced this pull request Jun 11, 2023
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

ghstack-source-id: 50472e03d68393c3084bb55b2b4114b60e3483d5
Pull Request resolved: #103302
@ezyang ezyang changed the title from "Always apply unspec logic" to "Always create ShapeEnv, always apply unspec logic" Jun 11, 2023
```
automatic_dynamic = config.automatic_dynamic_shapes and (
    frame_state_entry.size is None or frame_state_entry.size[i] is None
)
dynamic_dims = []
```
ezyang commented on this diff:
This is reindentation only, as shape_env is always non-None.
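
For context, a simplified sketch of how this flag feeds the per-dimension decision (illustrative; `config`, `frame_state_entry`, and the tensor `e` stand in for the surrounding builder.py state):

```
dynamic_dims = []
for i in range(e.dim()):
    # A dim is marked automatically dynamic only when the feature is
    # on and the frame state saw that dim vary (recorded as None).
    automatic_dynamic = config.automatic_dynamic_shapes and (
        frame_state_entry.size is None or frame_state_entry.size[i] is None
    )
    dynamic_dims.append(automatic_dynamic)
```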


ezyang commented Jun 12, 2023

@pytorchbot merge

@pytorchmergebot

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: Check the merge workflow status here.

@facebook-github-bot facebook-github-bot deleted the gh/ezyang/2151/head branch June 15, 2023 14:16
wschin added a commit to microsoft/onnxruntime that referenced this pull request Jun 20, 2023
Fix #16355. The root cause change in PyTorch is
[#103302](pytorch/pytorch#103302), which seems to block calling make_fx inside a dynamo backend.

Changes:
1. Move decomposition to `register_backend.py`, so we don't have to call
`make_fx` inside DORT, which triggers a bunch of new exceptions.
2. Remove shape inference based on FakeTensorProp, since the FX graph received from dynamo now carries all shapes (a sketch of reading them follows this commit message).
3. Fix a macro bug so that DORT can build without CUDA.

Before (3):
```
#if defined(USE_CUDA) || defined(USE_ROCM)
  virtual PhiloxGenerator& PhiloxGenerator__Default() = 0;
#ifdef ENABLE_TRAINING_TORCH_INTEROP
...
#endif
#endif
```
After (3):
```
#if defined(USE_CUDA) || defined(USE_ROCM)
  virtual PhiloxGenerator& PhiloxGenerator__Default() = 0;
#endif
#ifdef ENABLE_TRAINING_TORCH_INTEROP
...
#endif
```
The latter looks better, since `ENABLE_TRAINING_TORCH_INTEROP` guards Python bridge code, not the random-number-generating `PhiloxGenerator` kernels.
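
As a hypothetical sketch of change (2) on the DORT side (names assumed; `gm` is the FX GraphModule the backend receives), the shapes can be read off the meta that dynamo already populated instead of rerunning FakeTensorProp:

```
# Read the FakeTensor dynamo recorded on each node instead of
# recomputing shapes with FakeTensorProp.
for node in gm.graph.nodes:
    fake = node.meta.get("val")
    if fake is not None and hasattr(fake, "shape"):
        print(node.name, tuple(fake.shape), fake.dtype)
```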
carzh pushed a commit to carzh/onnxruntime that referenced this pull request Jun 27, 2023
(Same commit message as above.)