[FP8] FP8 for SwishLayerNorm #157574
Conversation
🔗 Helpful links: see artifacts and rendered test results at hud.pytorch.org/pr/157574. ✅ No failures as of commit 153bf89 with merge base 19ae5af. (This comment was automatically generated by Dr. CI and updates every 15 minutes.)
This pull request was exported from Phabricator. Differential Revision: D76531303
LGTM, but let's make sure all tests pass.
Summary: Add a pass `use_triton_fp8_swish_replace_normal_swish` that replaces `_triton_swish_rms_norm` with its FP8-capable counterpart `triton_swish_rms_norm`, and turn on FP8 during inference.

Test Plan:
```
buck2 run mode/opt mode/inplace -c fbcode.platform010_cuda_version=12.4 -c fbcode.nvcc_arch=h100 caffe2/torch/fb/model_transform/experimental/benchmark:mts_gpu_benchmark -- --lower-backend=AOT_INDUCTOR --model-snapshot-id=899072727_0 --node-replacement-dict="{}" --gpu-trace --add-passes=use_triton_fp8_swish_replace_normal_swish
```

The perf improvement on the 100x model with this pass is roughly 7%; details are recorded [here](https://docs.google.com/document/d/1eIV_OTQyQcf_DlEDxwycTwhyGxT5OJkLzs8cPL6EMYc/edit?tab=t.0).

Rollback Plan:

Reviewed By: frank-wei

Differential Revision: D76531303
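For readers unfamiliar with this kind of pass, below is a minimal FX-style sketch of what an op-replacement pass can look like. It is illustrative only: the two kernel names and the pass name come from the summary above, but the signatures, the `use_fp8` kwarg, the `eps` parameter, and the swish/RMSNorm composition order are all assumptions, not the actual fbcode implementation.

```python
import torch
import torch.fx as fx


# Hypothetical stand-ins for the Triton kernels named in the summary. Only
# the two function names come from the PR; everything else is assumed.
def _triton_swish_rms_norm(x, weight, eps=1e-6):
    rms = torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + eps)
    y = x * rms * weight
    return y * torch.sigmoid(y)  # swish(y) = y * sigmoid(y)


def triton_swish_rms_norm(x, weight, eps=1e-6, use_fp8=False):
    y = _triton_swish_rms_norm(x, weight, eps)
    # Assumed fp8 inference path: cast the output to float8_e4m3fn.
    return y.to(torch.float8_e4m3fn) if use_fp8 else y


# Keep the kernel as a single call_function node instead of tracing into it.
fx.wrap("_triton_swish_rms_norm")


def use_triton_fp8_swish_replace_normal_swish(gm: fx.GraphModule) -> fx.GraphModule:
    """Retarget every _triton_swish_rms_norm call to the fp8-capable kernel."""
    for node in gm.graph.nodes:
        if node.op == "call_function" and node.target is _triton_swish_rms_norm:
            node.target = triton_swish_rms_norm
            node.kwargs = {**node.kwargs, "use_fp8": True}  # assumed kwarg
    gm.graph.lint()
    gm.recompile()
    return gm


class Block(torch.nn.Module):
    def forward(self, x, weight):
        return _triton_swish_rms_norm(x, weight)


gm = fx.symbolic_trace(Block())
gm = use_triton_fp8_swish_replace_normal_swish(gm)
print(gm.code)  # call site now targets triton_swish_rms_norm(..., use_fp8=True)
```

Retargeting the call node in place leaves the rest of the graph untouched, so a downstream lowering such as AOT Inductor simply sees the fp8-capable kernel at the same position in the graph.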
@pytorchbot merge (Initiating merge automatically since Phabricator Diff has merged)
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv