[inductor][easy] show kernel name in str(ExternKernel) #106990
Conversation
[ghstack-poisoned]
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/106990
Note: Links to docs will display an error until the docs builds have been completed.
❌ 1 New Failure
As of commit 8a2be3d: NEW FAILURE - The following job has failed:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
ghstack-source-id: d155f9ca4617323b9e011343b7da6ca5ecd19f88
Pull Request resolved: #106990
The string representation of an ExternKernel does not show the kernel name. Since the kernel name is such an important piece of information for an ExternKernel, this PR adds it.

[ghstack-poisoned]
@pytorchbot merge -f "unrelated test failure"
Merge started
Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team
Recently I have found it a bit painful to run benchmark scripts in my dev environment. E.g., the command below

```
python benchmarks/dynamo/huggingface.py --backend inductor --amp --performance --only YituTechConvBert --training
```

took about 2 minutes to run, and it may take even longer for some other models. The command is slow since it needs to
- do the dynamo work
- verify the model on CPU
- run perf tests
- compile all the graphs

However, oftentimes I only need to debug inductor-specific logic like loop ordering and fusion, so a lot of the work the script does is useless to me. Also, I only need to test one graph at a time (e.g., check the fwd graph first and, when I'm done, continue with the bwd graph) rather than compiling all the graphs.

The graph replayer adds a `@save_args` decorator to the `compile_fx_inner` function. When `config.save_args` is true, it pickles all the arguments to `compile_fx_inner` to the file system. Later on, we can call `load_args_and_run_compile_fx_inner("/tmp/inductor_saved_args/compile_fx_inner_0.pkl")` to replay the graph and compile it with inductor (see the sketch after this comment).

Replaying the fwd graph took around 60 seconds (maybe this can be further reduced, but this is already a 2x speedup for dev efficiency), and it only took around 20 seconds to reach the `Scheduler.__init__` method.

I also checked the `TORCH_COMPILE_DEBUG` flag that already exists. The most similar part of `TORCH_COMPILE_DEBUG` is that it can save a graph and its arguments and rerun it later. The difference here is that, rather than running the model, we want to call the inductor API to compile the model (without even going through dynamo or aot-autograd).

Pull Request resolved: #106952
Approved by: https://github.com/jansel
ghstack dependencies: #106990
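For readers skimming the description above, here is a minimal self-contained sketch of the save/replay mechanism it outlines. Only the function names, the `config.save_args` flag, and the dump path come from the description itself; the decorator body, the `getattr` guard, and the file-naming scheme are assumptions for illustration, and the actual implementation in #106952 may differ:

```python
import functools
import itertools
import os
import pickle

SAVE_DIR = "/tmp/inductor_saved_args"  # dump location from the description
_call_counter = itertools.count()

def save_args(fn):
    """Pickle each call's (args, kwargs) so the call can be replayed later.

    The flag is read via getattr so this sketch also runs on inductor
    versions that lack `config.save_args`.
    """
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        from torch._inductor import config
        if getattr(config, "save_args", False):
            os.makedirs(SAVE_DIR, exist_ok=True)
            path = os.path.join(
                SAVE_DIR, f"{fn.__name__}_{next(_call_counter)}.pkl"
            )
            # NOTE: real fx graph modules / tensors may need torch.save or
            # custom pickling support; plain pickle is enough for a sketch.
            with open(path, "wb") as f:
                pickle.dump((args, kwargs), f)
        return fn(*args, **kwargs)
    return wrapper

def load_args_and_run_compile_fx_inner(path):
    """Replay a saved call: unpickle the arguments and invoke
    compile_fx_inner directly, skipping dynamo and aot-autograd."""
    from torch._inductor.compile_fx import compile_fx_inner
    with open(path, "rb") as f:
        args, kwargs = pickle.load(f)
    return compile_fx_inner(*args, **kwargs)
```

Applied as `@save_args` at the definition site of `compile_fx_inner` (or via `compile_fx_inner = save_args(compile_fx_inner)`), every inductor compilation then leaves a `.pkl` file behind that can be replayed in isolation.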
Stack from ghstack (oldest at bottom):
The string representation of an ExternKernel does not show the kernel name. Since the kernel name is such an important piece of information for an ExternKernel, this PR adds it.
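As a hedged illustration only (the class below is a toy stand-in, not the actual `torch/_inductor/ir.py` code, and the attribute name `kernel` is an assumption), the change amounts to including the kernel name in the node's string representation:

```python
# Toy stand-in for inductor's ExternKernel IR node; names are illustrative.
class ExternKernel:
    def __init__(self, name, kernel):
        self.name = name      # output buffer name, e.g. "buf0"
        self.kernel = kernel  # external kernel invoked, e.g. "extern_kernels.mm"

    def __str__(self):
        # Before: the repr omitted self.kernel, so two ExternKernel nodes
        # calling different ops printed identically. Including the kernel
        # name makes printed IR dumps unambiguous.
        return f"{type(self).__name__}(name={self.name!r}, kernel={self.kernel!r})"

print(ExternKernel("buf0", "extern_kernels.mm"))
# ExternKernel(name='buf0', kernel='extern_kernels.mm')
```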
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov