[inductor] structured-log graph execution order + test #160448
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/160448
Note: links to docs will display an error until the docs builds have completed.
❗ 1 Active SEV: there is 1 currently active SEV. If your PR is affected, please view it below.
❌ 2 New Failures, 1 Unrelated Failure as of commit a60a83a with merge base 74124d1:
- NEW FAILURES: the following jobs have failed.
- BROKEN TRUNK: the following job failed but was also present on the merge base. 👉 Rebase onto the `viable/strict` branch to avoid these failures.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@pytorchmergebot revert -c ghfirst -m "internal failure please see associated diff"
@pytorchbot successfully started a revert job. Check the current status here.
…)" This reverts commit 995397d. Reverted #160448 on behalf of https://github.com/atalman due to internal failure please see associated diff ([comment](#160448 (comment)))
|
@skarjala your PR has been successfully reverted. |
@pytorchbot merge -i |
Merge started. Your change will be merged while ignoring the following 3 checks: trunk / linux-jammy-cuda12.8-py3.10-gcc11 / test (default, 3, 5, lf.linux.g6.4xlarge.experimental.nvidia.gpu), trunk / linux-jammy-cuda12.8-py3.10-gcc11 / test (default, 4, 5, lf.linux.g6.4xlarge.experimental.nvidia.gpu), and trunk / linux-jammy-rocm-py3.10 / test (default, 1, 2, linux.rocm.gpu.gfx942.1). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Summary:
- Emit a structured trace per compiled-graph execution so that execution order can be reconstructed in TLParse.
- Adds `debug.log_graph_execution(name)`, called from `CompiledFxGraph.__call__`, producing an artifact named `inductor_graph_execution` with payload `{"graph": "graph_<id>"}` (a minimal sketch follows this list).
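For context, a minimal sketch of what such a helper could look like, assuming it builds on `torch._logging.trace_structured` (PyTorch's existing structured-trace entry point); the body below is inferred from the summary above, not copied from the PR:

```python
# Minimal sketch, assuming torch._logging.trace_structured; the helper body
# is inferred from the PR summary rather than taken from the PR itself.
from torch._logging import trace_structured


def log_graph_execution(name: str) -> None:
    """Record that the compiled graph `name` (e.g. "graph_0") just executed."""
    trace_structured(
        "artifact",
        metadata_fn=lambda: {
            "name": "inductor_graph_execution",
            "encoding": "json",
        },
        payload_fn=lambda: {"graph": name},
    )
```

A trace collected with `TORCH_TRACE=/tmp/trace` set can then be post-processed with `tlparse /tmp/trace`, which is where the per-execution artifacts let TLParse reconstruct the runtime order of compiled graphs.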
Testing:
- Add an inline test verifying the artifact's structure and output (a sketch of the shape of such a check follows this list).
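A self-contained sketch of the shape such a check could take, assuming the structured-trace logger name `torch.__trace` used by `torch._logging`; the `metadata` attribute read off the captured records is an assumption about how `trace_structured` attaches artifact metadata, and the PR's actual inline test may be structured differently:

```python
# Hedged sketch: capture structured-trace records while a compiled graph
# runs and look for the inductor_graph_execution artifact among them.
import logging

import torch


class TraceCapture(logging.Handler):
    """Collect every record emitted on the structured-trace logger."""

    def __init__(self):
        super().__init__()
        self.records = []

    def emit(self, record):
        self.records.append(record)


def test_logs_graph_execution_artifact():
    trace_log = logging.getLogger("torch.__trace")  # assumed logger name
    handler = TraceCapture()
    old_level = trace_log.level
    trace_log.addHandler(handler)
    trace_log.setLevel(logging.DEBUG)
    try:
        fn = torch.compile(lambda x: x * 2)
        fn(torch.randn(4))  # first call compiles, then executes the graph
    finally:
        trace_log.removeHandler(handler)
        trace_log.setLevel(old_level)

    # Assumption: each record carries a `metadata` dict, with artifacts keyed
    # as {"artifact": {"name": ..., "encoding": ...}}.
    artifact_names = [
        getattr(r, "metadata", {}).get("artifact", {}).get("name")
        for r in handler.records
    ]
    assert "inductor_graph_execution" in artifact_names
```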
Pull Request resolved: pytorch#160448
Approved by: https://github.com/xmfan
…ch#160448)" This reverts commit 995397d. Reverted pytorch#160448 on behalf of https://github.com/atalman due to internal failure please see associated diff ([comment](pytorch#160448 (comment)))
Stack from ghstack (oldest at bottom):
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @coconutruben @Lucaskabela