[dynamic shapes] skip fused linear path if not definitely contiguous #155051
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/155051
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit 6eba051 with merge base ce9ba07.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
aten/src/ATen/native/Linear.cpp (Outdated)
```diff
   // Also hit the fused path for contiguous 3D input, if not using xla
   // backend. Reshaping/flattening has some performance implications on xla.
-  if (input.is_contiguous() && input_dim == 3) {
+  if (definitely_contiguous(input.sym_sizes(), input.sym_strides(), input.sym_numel()) && input_dim == 3) {
```
I see, I will have to change this again with my change, but it's fine for now. I will handle it once I rebase.
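For context, a minimal sketch of what a definitely_contiguous-style check does, assuming plain integer sizes and strides; the real helper operates on c10::SymInt values, where any comparison that cannot be decided for unbacked symbols must make the check answer false instead of introducing a guard:

```cpp
#include <cstdint>
#include <vector>

// Illustrative only: walks strides from the innermost dimension outward,
// exactly as a row-major contiguity check does. The real helper takes
// c10::SymInt sizes/strides and returns false whenever a comparison on
// unbacked symbols cannot be proven, rather than guarding on it.
bool definitely_contiguous_sketch(
    const std::vector<int64_t>& sizes,
    const std::vector<int64_t>& strides,
    int64_t numel) {
  if (numel == 0) {
    return true;  // empty tensors are trivially contiguous
  }
  int64_t expected_stride = 1;
  for (auto i = static_cast<int64_t>(sizes.size()) - 1; i >= 0; --i) {
    if (sizes[i] == 1) {
      continue;  // size-1 dims place no constraint on their stride
    }
    if (strides[i] != expected_stride) {
      return false;  // with SymInts: "not definitely equal" => false
    }
    expected_stride *= sizes[i];
  }
  return true;
}
```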
aten/src/ATen/native/Linear.cpp (Outdated, same hunk as above)
There is no caching in the current form. Can you call definitely_contiguous(input.sym_sizes(), input.sym_strides(), input.sym_numel()) only one time?
Seems good, as long as you are sure those are not material checks (just short circuits). Just make sure you call definitely_contiguous once before you land. A sketch of that hoisting follows.
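For illustration, a minimal sketch of the single-call pattern the reviewer asks for; input_definitely_contiguous is a hypothetical local name, and the surrounding control flow is simplified from Linear.cpp:

```cpp
// Evaluate the (potentially non-trivial) symbolic check once and reuse it;
// `input_definitely_contiguous` is an illustrative name, not the PR's code.
const bool input_definitely_contiguous = definitely_contiguous(
    input.sym_sizes(), input.sym_strides(), input.sym_numel());
if (input_definitely_contiguous && input_dim == 3) {
  // ... fused 3D path ...
}
```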
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 Hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Merge failed. Reason: 2 jobs have failed; the first few of them are: linux-aarch64 / linux-jammy-aarch64-py3.10 / test (default, 2, 3, lf.linux.arm64.m7g.4xlarge), linux-aarch64 / linux-jammy-aarch64-py3.10 / test (default, 4, 4, lf.linux.arm64.2xlarge). Details for Dev Infra team: raised by workflow job.
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 Hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Falls back to the non-fused linear -> add-bias path for tensors with unbacked sizes whose contiguity cannot be proven.
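To summarize the dispatch this PR adjusts, here is a hedged, heavily simplified sketch of the two paths; linear_sketch is a hypothetical standalone function and omits the weight-transpose handling, autograd, and XLA considerations of the real code in aten/src/ATen/native/Linear.cpp:

```cpp
#include <ATen/ATen.h>

// Illustrative only: the fused path flattens a contiguous 3D input so that
// addmm can fuse the bias addition into the matmul; the fallback does a
// plain matmul followed by a separate bias add, which tolerates any layout.
at::Tensor linear_sketch(
    const at::Tensor& input,
    const at::Tensor& weight,
    const at::Tensor& bias) {
  if (input.dim() == 3 && input.is_contiguous()) {
    // Fused path: reshaping is only safe when contiguity is known; with
    // unbacked sizes this condition becomes the definitely_contiguous check.
    auto input_2d = input.reshape({-1, input.size(2)});
    auto out = at::addmm(bias, input_2d, weight.t());
    return out.reshape({input.size(0), input.size(1), weight.size(0)});
  }
  // Fallback path, now taken when contiguity cannot be proven.
  return at::matmul(input, weight.t()) + bias;
}
```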