
Mark buffers that reuse other buffers #93329

Closed
wants to merge 9 commits

Conversation

Provides a way, at codegen time, to emit code conditioned on whether a buffer is a fresh allocation or reuses an input's allocation.

- For collective ops, if reusing an input, a copy can be skipped

[ghstack-poisoned]
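The idea above can be sketched in a few lines. This is a hypothetical illustration, not the actual inductor API: the `Buffer` class, `codegen_collective` function, and the emitted `copy_`/`all_reduce` strings are all invented names standing in for "mark whether an output buffer reuses an input, and skip the staging copy when it does."

```python
# Hedged sketch: a codegen step that branches on whether the output buffer
# reuses an input's allocation. All names here are illustrative.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Buffer:
    name: str
    # Set when this buffer aliases (reuses) another buffer's allocation.
    reuses: Optional["Buffer"] = None


def codegen_collective(out: Buffer, inp: Buffer) -> list[str]:
    """Emit pseudo-code lines for a collective op writing `inp` into `out`."""
    lines = []
    if out.reuses is inp:
        # Output reuses the input's allocation: the data is already in
        # place, so the staging copy can be skipped.
        lines.append(f"all_reduce({out.name})")
    else:
        # Fresh allocation: stage the input into the output buffer first.
        lines.append(f"copy_({out.name}, {inp.name})")
        lines.append(f"all_reduce({out.name})")
    return lines


inp = Buffer("buf0")
print(codegen_collective(Buffer("buf1"), inp))              # copy, then all_reduce
print(codegen_collective(Buffer("buf1", reuses=inp), inp))  # copy skipped
```

The point of marking reuse at codegen time (rather than deciding at runtime) is that the branch is resolved once, when the code is emitted, so the generated program for the reuse case simply contains no copy.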
pytorch-bot bot commented Jan 31, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/93329

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit ee866d5:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

wconstab added a commit that referenced this pull request Jan 31, 2023
ghstack-source-id: 128472a1accb748df9abf7c2bd0cef6afa6557a7
Pull Request resolved: #93329
@wconstab wconstab added the topic: not user facing topic category label Jan 31, 2023
wconstab added a commit that referenced this pull request Jan 31, 2023
ghstack-source-id: 5348c69491488a6fb152237e0206e722ab64db66
Pull Request resolved: #93329
wconstab added a commit that referenced this pull request Feb 1, 2023
ghstack-source-id: 788ad2630ff33168fb68c7fe9d90a582b6d31872
Pull Request resolved: #93329
wconstab added a commit that referenced this pull request Feb 1, 2023
ghstack-source-id: 3659a00e79db0bdf94aec717b1660c094aa4c87e
Pull Request resolved: #93329
@wconstab wconstab requested a review from jansel February 1, 2023 01:32
wconstab (Contributor, Author) commented Feb 1, 2023

@pytorchbot merge

@pytorch-bot pytorch-bot bot added the ciflow/trunk Trigger trunk jobs on your pull request label Feb 1, 2023
pytorchmergebot (Collaborator)
Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

wconstab added a commit that referenced this pull request Feb 1, 2023
ghstack-source-id: c1977c5b258d54fb29ffd00399e42db3fa2dc914
Pull Request resolved: #93329
pytorchmergebot (Collaborator)
Merge failed

Reason: New commits were pushed while merging. Please rerun the merge command.

Details for Dev Infra team: raised by workflow job.

wconstab (Contributor, Author) commented Feb 1, 2023

@pytorchbot merge

pytorchmergebot (Collaborator)

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

pytorchmergebot (Collaborator)

Merge failed

Reason: 1 mandatory check(s) failed (Rule superuser). The first few are:

Dig deeper by viewing the failures on hud

wconstab added a commit that referenced this pull request Feb 1, 2023
ghstack-source-id: d379e2c200a7fbd523115355a5ba2c2fee24b0d1
Pull Request resolved: #93329
wconstab (Contributor, Author) commented Feb 2, 2023

@pytorchbot merge

pytorchmergebot (Collaborator)

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

pytorchmergebot (Collaborator)

Merge failed

Reason: 1 mandatory check(s) failed (Rule superuser). The first few are:

Dig deeper by viewing the failures on hud

wconstab (Contributor, Author) commented Feb 2, 2023

@pytorchbot merge

pytorchmergebot (Collaborator)

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

ragulpr added a commit to ragulpr/pytorch that referenced this pull request Feb 2, 2023
…n-dev-setup

* origin: (898 commits)
  Move dynamo.optimizations.distributed to backends (pytorch#93408)
  Remove cuda 11.6 from nightly (pytorch#93979)
  Refactor dynamo register_backend/BACKENDS (pytorch#93389)
  Remove cuda 11.6 from CI replace with 11.7 (pytorch#93406)
  [Dynamo] Rename `GuardBuilder.guarded_code` -> `check_fn_manager` (pytorch#93934)
  Revert "Remove CUDA 11.6 from nightly builds (pytorch#93404)"
  Revert "[inductor] fix crash issue when input is a view tensor (pytorch#90150)"
  Basic Validation for FSDP `state_dict` transformations of modules with persistent buffers (pytorch#93396)
  Merge Inductor perf smoke test with other inductor CI tests (pytorch#93395)
  [inductor] Don't import torchvision (pytorch#93027)
  [FSDP][3/N] Refactor `summon_full_params` unit tests (pytorch#92298)
  [FSDP][2/N] `_summon_full_params` -> `_unshard_params` (pytorch#92297)
  Remove CUDA 11.6 from nightly builds (pytorch#93404)
  Mark buffers that reuse other buffers (pytorch#93329)
  Refactor to allow reuse of SchedulerNode.allocate (pytorch#93328)
  retire sparse_mask_helper (pytorch#91714)
  update fbgemm third party (pytorch#93907)
  [inductor] fix crash issue when input is a view tensor (pytorch#90150)
  [Inductor] add config for weight prepacking (pytorch#93811)
  Check for none for NNModuleVariable.__module__ (pytorch#93326)
  ...
@facebook-github-bot facebook-github-bot deleted the gh/wconstab/85/head branch June 8, 2023 19:19