[BE] Enhance _get_clean_triton.py to auto-generate launch_params if missing #154666
Conversation
Added functionality to the process_file function to automatically generate the launch_params file when it does not exist and the auto_generate_params flag is set to True: the input file is run in a subprocess with the appropriate environment variable set so that the launch parameters are dumped. Updated the get_clean_triton function and the main script to support this feature, and added a command-line argument that lets users disable auto-generation.
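A minimal sketch of what that auto-generation step might look like, assuming the launch parameters are produced by re-running the Inductor output file with TORCHINDUCTOR_DUMP_LAUNCH_PARAMS=1; the helper name, the `.launch_params` file naming, and the flag handling below are illustrative, not the merged implementation:

```python
# Illustrative sketch only: assumes the launch params land in "<input>.launch_params"
# when TORCHINDUCTOR_DUMP_LAUNCH_PARAMS=1 is set; the real process_file may differ.
import os
import subprocess
import sys


def maybe_generate_launch_params(input_path: str, auto_generate_params: bool = True) -> str:
    launch_params_path = input_path + ".launch_params"
    if os.path.exists(launch_params_path) or not auto_generate_params:
        # Params already dumped, or the user disabled auto-generation.
        return launch_params_path
    env = os.environ.copy()
    env["TORCHINDUCTOR_DUMP_LAUNCH_PARAMS"] = "1"  # record launch params while the kernels run
    # Run the compiled output file once in a subprocess so the params get written out.
    subprocess.run([sys.executable, input_path], env=env, check=True)
    return launch_params_path
```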
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/154666
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit cb05a8e with merge base 08fdc64.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
one nit for the arg interface, otherwise LGTM! Thanks for finding this tool & improving it, I expect I'll be using it a lot!
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 Hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Sorry, I didn't see your review comments for the nit. Did you forget to save?
[BE] Enhance _get_clean_triton.py to auto-generate launch_params if missing (pytorch#154666)

Previously, @Chillee wrote a script (pytorch#125811) to remove the inductor dependency for Inductor-compiled Triton kernels. We'd like to automate the process of obtaining the launch parameters.

Added functionality to torch/utils/_get_clean_triton.py to automatically generate the launch_params file if it does not exist and the auto_generate_params flag is set to True. This includes running the input file in a subprocess with the appropriate environment variable. Updated the get_clean_triton function and the main script to support this new feature, allowing users to disable auto-generation via a command-line argument.

# Test Plan

Test the embedding op in TritonBench:

```
# generate inductor compiled triton kernels
TORCH_COMPILE_DEBUG=1 TORCHINDUCTOR_FX_GRAPH_CACHE=0 python run.py --op embedding --mode fwd --precision fp32 --metrics nsys_rep --only inductor_embedding --num-inputs 1 --input-id 11

# run the script to get rid of the inductor dependency. By default, triton_only_repro.py is the output file name.
python ~/pytorch/torch/utils/_get_clean_triton.py ~/tritonbench/torch_compile_debug/run_2025_05_29_14_47_50_497790-pid_849274/torchinductor/model__0_forward_1.0/output_code.py
```

Pull Request resolved: pytorch#154666
Approved by: https://github.com/davidberard98
Previously, @Chillee wrote a script (#125811) to remove the inductor dependency for Inductor-compiled Triton kernels. We'd like to automate the process of obtaining the launch parameters.
Added functionality to torch/utils/_get_clean_triton.py to automatically generate the launch_params file when it does not exist and the auto_generate_params flag is set to True: the input file is run in a subprocess with the appropriate environment variable. Updated the get_clean_triton function and the main script to support this feature, and added a command-line argument that lets users disable auto-generation.
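For the command-line side, a rough sketch of how the disable switch could be wired with argparse; the flag name --disable-launch-params-autogen is a made-up placeholder, not necessarily the argument this PR adds:

```python
# Hypothetical CLI wiring; the flag name below is a placeholder for illustration.
import argparse

parser = argparse.ArgumentParser(
    description="Strip the inductor dependency from Inductor-compiled Triton kernels"
)
parser.add_argument("input_path", help="path to the Inductor-generated output_code.py")
parser.add_argument(
    "--disable-launch-params-autogen",
    action="store_true",
    help="do not re-run the input file to generate a missing launch_params file",
)
args = parser.parse_args()
auto_generate_params = not args.disable_launch_params_autogen
```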
Test Plan
Test the embedding op in TritonBench with the commands below.
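The commands, as listed in the commit message above:

```
# generate inductor compiled triton kernels
TORCH_COMPILE_DEBUG=1 TORCHINDUCTOR_FX_GRAPH_CACHE=0 python run.py --op embedding --mode fwd --precision fp32 --metrics nsys_rep --only inductor_embedding --num-inputs 1 --input-id 11

# run the script to get rid of the inductor dependency. By default, triton_only_repro.py is the output file name.
python ~/pytorch/torch/utils/_get_clean_triton.py ~/tritonbench/torch_compile_debug/run_2025_05_29_14_47_50_497790-pid_849274/torchinductor/model__0_forward_1.0/output_code.py
```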