[CPPInductor] Fix another out-of-bounds access #122580
Conversation
Not sure what was the idea behind the `{self.tiling_factor}*sizeof(float)/sizeof({DTYPE_TO_CPP[dtype]})` size calculation (perhaps a copy-and-paste error during the refactor made by #97626), but `Vectorized::store(ptr, tiling_factor)` needs at least `tiling_factor` elements, not `tiling_factor/2` (which is what the original calculation yields when the dtype is a 64-bit type such as int64). Discovered while trying to enable aarch64 vectorized inductor.

cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler @amjames @desertfire @chauhang
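To make the off-by-half concrete, here is a minimal standalone sketch of the size arithmetic; the `tiling_factor` value of 16 is hypothetical (the real value depends on the vector ISA), and the code is an illustration, not generated inductor output:

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    // Hypothetical tiling factor; the real value depends on the vector ISA.
    constexpr size_t tiling_factor = 16;

    // Old size calculation: for int64_t this is 16 * 4 / 8 == 8 elements,
    // i.e. only half of what the subsequent store touches.
    constexpr size_t old_elems = tiling_factor * sizeof(float) / sizeof(int64_t);

    // Vectorized::store(ptr, tiling_factor) may write up to tiling_factor
    // elements, so the buffer must hold at least that many.
    constexpr size_t needed_elems = tiling_factor;

    std::printf("allocated: %zu elements, stored: %zu elements\n",
                old_elems, needed_elems);
    std::printf("overrun: %zu bytes\n",
                (needed_elems - old_elems) * sizeof(int64_t));
    return 0;
}
```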
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/122580
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures
As of commit aa197d4 with merge base 14e348b.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@pytorchbot merge
Merge started
Your change will be merged once all checks pass (ETA 0-4 Hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team
This started as a re-land of #105590 focused on enabling it on macOS, but quickly turned into landing very limited platform-specific acceleration at this time (i.e., this PR does not add any NEON-accelerated code at all; it just enables vectorized compilation for the existing abstractions). Enabling the test harness uncovered a number of latent issues in the CPU inductor that were fixed in the following PRs:
- #122511
- #122513
- #122580
- #122608

The following was added/changed to enable the vectorization code to work on macOS:
- Added a `VecNEON` class to `_inductor/codecache.py` that is supported on all Apple Silicon Macs
- Added `Vectorized::loadu_one_fourth` to `vec_base.h`, limited to 8-bit types
- Changed the 64-bit integral type mappings to `int64_t`/`uint64_t` to align with the rest of the code, as on macOS `int64_t` is a `long long` rather than a `long` (see #118149 for more details)

See the table below for perf changes with and without torch.compile using [gpt-fast](https://github.com/pytorch-labs/gpt-fast) running `stories15M` on an M2 Pro:

| dtype | Eager | Compile (before) | Compile (after) |
| ------ | ------ | --------- | --------- |
| bfloat16 | 120 tokens/sec | 130 tokens/sec | 156 tokens/sec |
| float32 | 158 tokens/sec | 140 tokens/sec | 236 tokens/sec |
| float16 | 235 tokens/sec | 81 tokens/sec | 58 tokens/sec |

Pull Request resolved: #122217
Approved by: https://github.com/jansel
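To illustrate the `int64_t` mapping point (a standalone sketch, not code from this PR): both `long` and `long long` are 64 bits wide on LP64 platforms, but `int64_t` is a typedef for exactly one of them, so a template specialization such as `Vectorized<int64_t>` only matches that one spelling. The snippet below simply reports which typedef the current platform uses:

```cpp
#include <cstdint>
#include <cstdio>
#include <type_traits>

int main() {
    // int64_t is a typedef for exactly one of `long` / `long long`;
    // overload resolution and template specialization care which one.
    std::printf("int64_t == long:      %d\n",
                (int)std::is_same_v<int64_t, long>);
    std::printf("int64_t == long long: %d\n",
                (int)std::is_same_v<int64_t, long long>);
    // Typical output: 1 / 0 on Linux LP64, 0 / 1 on macOS, which is why
    // the codegen spells the type as int64_t rather than long.
    return 0;
}
```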
Not sure what was the idea behind the `{self.tiling_factor}*sizeof(float)/sizeof({DTYPE_TO_CPP[dtype]})` size calculation (perhaps a copy-and-paste error during the refactor made by #97626), but `Vectorized::store(ptr, tiling_factor)` needs at least `tiling_factor` elements, not `tiling_factor/2` (which would be the case with the original calculation if the data type is a 64-bit value such as int64). Discovered while trying to enable aarch64 vectorized inductor.

Minimal reproducer (reproducible on ARMv8 or any x86_64 machine that does not support AVX512):
```python
import torch

def do_ds(x, y):
    return torch.diagonal_scatter(x, y)

x = torch.ones(10, 10, dtype=torch.int64)
y = torch.tensor([1, 2, -8, 8, 5, 5, -7, -8, 7, 0])
dsc = torch.compile(do_ds)
assert torch.allclose(torch.diagonal_scatter(x, y), dsc(x, y))
```

Pull Request resolved: #122580
Approved by: https://github.com/Skylion007, https://github.com/jansel