Add function to port FX minified graph to HLO via StableHLO #109084
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/109084
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit 198acf8 with merge base c1a2f35. This comment was automatically generated by Dr. CI and updates every 15 minutes.
Force-pushed from d631e8b to 10e1aaa.
Added @anijain2305 and @mlazos as reviewers, as this is minifier-related. Feel free to reassign reviewers if you're not the right people.
Force-pushed from 10e1aaa to 60f77af.
The two failures are unrelated to this PR. Both are due to an Android installation issue (possibly caused by a permissions problem or a read-only filesystem?).
@janeyx99 It seems @mlazos and @anijain2305 may be busy at the moment. Is it possible to ping them, or add another reviewer? There's interest from my team (AWS SageMaker) and the StableHLO team in getting this in as a debugging feature for XLA users.
Is it possible to get a review on this PR? I've been waiting for about two weeks. Thanks!
If the `XLA_HLO_DEBUG` flag is enabled, generate a minified HLO graph when using the minifier. This function enables HLO minification support by porting the minified FX graph to StableHLO via the `save_torch_model_as_stablehlo` function. This allows users to port the FX minified graph to compilers that are not compatible with TorchDynamo/Inductor and use XLA as their backend.
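As a rough sketch of the gating described above (not the PR's actual code — the function name `maybe_dump_minified_hlo` and its signature are hypothetical; only `XLA_HLO_DEBUG` and `save_torch_model_as_stablehlo` come from the description):

```python
import os

def maybe_dump_minified_hlo(fx_graph_module, sample_inputs, out_dir="minified_hlo"):
    """Sketch: dump the minified FX graph as StableHLO when XLA_HLO_DEBUG=1."""
    # Only dump when the user has opted in via the XLA_HLO_DEBUG env flag.
    if os.environ.get("XLA_HLO_DEBUG", "0") != "1":
        return None
    # The PR hands the minified FX graph to torch_xla's
    # save_torch_model_as_stablehlo; imported lazily here so the minifier
    # still works when torch_xla is not installed.
    from torch_xla.stablehlo import save_torch_model_as_stablehlo
    save_torch_model_as_stablehlo(fx_graph_module, sample_inputs, out_dir)
    return out_dir
```

With the flag unset the function is a no-op, so the minifier's behavior is unchanged for non-XLA users.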
Force-pushed from 60f77af to 198acf8.
@anijain2305 @mlazos Can this PR be reviewed this week? We have interest from the AWS SageMaker team as well as Google's StableHLO team. Thanks for your time!
I apologize for the delay. LGTM!
No worries, @anijain2305. Thanks! Glad to see it approved!
@pytorchmergebot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
@awskila @anijain2305 After this PR, a test started to time out: https://github.com/pytorch/pytorch/actions/runs/6384661915/job/17328135872
I'm not sure whether this PR is actually the cause; what do you think?
Hmmm, I took a look at it, and I don't think this PR is the cause of the test failure. The main reason is that this function is only called when the `XLA_HLO_DEBUG` flag is enabled.
Also, the FX minifier does not support dynamic shapes, and test_dynamic_shapes does not import the minifier.
If the `XLA_HLO_DEBUG` flag is enabled, generate a minified HLO graph when using the minifier. This function enables HLO minification support by porting the minified FX graph to StableHLO via the `save_torch_model_as_stablehlo` function. This allows users to port the minified graph to compilers that are not compatible with the TorchDynamo/Inductor workflow and use XLA instead. The purpose of this PR is to help XLA users debug accuracy and compilation errors. It will also be helpful for the existing TorchDynamo/XLA workflow on the `torchxla_trace_once` backend.

Fixes #5461 in the Torch XLA repo. CC @GleasonK @qihqi
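A plausible way to exercise the feature, assuming a failing script `repro.py` (hypothetical name) is run under the Dynamo minifier — the exact invocation is not specified in this PR, only the `XLA_HLO_DEBUG` gate:

```shell
export XLA_HLO_DEBUG=1                  # opt in to the HLO dump added by this PR
export TORCHDYNAMO_REPRO_AFTER=dynamo   # ask Dynamo to run the FX minifier on failure
# python repro.py                       # hypothetical failing script; run it under the minifier
```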