
Conversation


@shunting314 shunting314 commented Oct 27, 2025

Stack from ghstack (oldest at bottom):

A few things to note:

  1. Customers like vLLM use a custom backend (e.g. VllmBackend), split the graph, and call standalone_compile for each split. If we let the bisector override the backend, we won't bisect through the custom backend. `test_configs.bisect_keep_custom_backend_for_inductor` keeps the custom backend in place when we are bisecting for inductor.
  2. pre_grad_graph bisecting and lowering bisecting do not yet compose well with each other, since an issue may be captured by whichever one we try first. `test_configs.bisect_pre_grad_graph` opts into the 'pre_grad_graph' bisecting. Both flags appear in the sketch after this list.
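
To make the two flags concrete, here is a minimal sketch (not code from this PR) of driving the bisector with both test configs enabled. It assumes the `CompilerBisector.do_bisect` entry point in `torch._inductor.compiler_bisector`; the model and the accuracy check are hypothetical stand-ins, and a real vLLM run would pass its custom backend to `torch.compile`.

```python
# Hedged sketch: enable both new test_configs flags, then bisect a
# hypothetical repro. Not the PR's test code.
import torch
import torch._dynamo
import torch._inductor.config as inductor_config
from torch._inductor.compiler_bisector import CompilerBisector

# Keep the custom backend (e.g. VllmBackend) instead of letting the
# bisector swap in its own backend, and opt into pre_grad_graph bisecting.
inductor_config.test_configs.bisect_keep_custom_backend_for_inductor = True
inductor_config.test_configs.bisect_pre_grad_graph = True

def repro() -> bool:
    # Return True when compiled output matches eager (hypothetical check).
    torch._dynamo.reset()
    model = torch.nn.Linear(8, 8)
    x = torch.randn(4, 8)
    expected = model(x)
    actual = torch.compile(model)(x)  # a custom backend would go here
    return torch.allclose(expected, actual, atol=1e-4)

result = CompilerBisector.do_bisect(repro)
print(result)
```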

cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @coconutruben @Lucaskabela


pytorch-bot bot commented Oct 27, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/166344

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit cb7f67f with merge base a076b4d:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

shunting314 added a commit that referenced this pull request Oct 27, 2025
ghstack-source-id: 98cd506
Pull Request resolved: #166344
@shunting314 shunting314 requested a review from eellison October 27, 2025 22:24
@eellison eellison requested a review from zou3519 October 28, 2025 13:52
Comment on lines +496 to +499
```python
import torch._inductor.config as inductor_config

if inductor_config.test_configs.bisect_pre_grad_graph:
    BACKENDS["inductor"].insert(0, BisectSubsystem("pre_grad_graph"))
```

This will insert on each invocation. Can we make sure we delete this on exit? You can stick it in the cleanup below if you need.
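
One possible shape for that cleanup, as a hedged sketch rather than the PR's actual fix: it reuses `BACKENDS` and `BisectSubsystem` from the snippet above, and `run_bisection()` is a hypothetical placeholder for the existing logic.

```python
import torch._inductor.config as inductor_config

# Prepend the subsystem for this invocation only, and undo it on exit so
# repeated invocations do not keep growing BACKENDS["inductor"].
inserted = inductor_config.test_configs.bisect_pre_grad_graph
if inserted:
    BACKENDS["inductor"].insert(0, BisectSubsystem("pre_grad_graph"))
try:
    run_bisection()  # hypothetical stand-in for the existing bisect logic
finally:
    if inserted:
        BACKENDS["inductor"].pop(0)  # delete on exit, per this comment
```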

@shunting314 shunting314 added the topic: not user facing label Nov 1, 2025
shunting314 added a commit that referenced this pull request Nov 1, 2025
ghstack-source-id: 442bbee
Pull Request resolved: #166344
@shunting314

@pytorchbot merge

@pytorch-bot pytorch-bot bot added the ciflow/trunk Trigger trunk jobs on your pull request label Nov 1, 2025
@pytorchmergebot

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.


zou3519 commented Nov 2, 2025

Is the new config sufficient for using the compiler bisector with vLLM, or do we have other issues?
