[CI] Use jemalloc for CUDA builds #116900
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/116900
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit 9318e91 with merge base 0f0020d.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@pytorchbot merge -f "pull is green, should not affect trunk"
Merge started. Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
According to @ptrblck, this will likely mitigate a non-deterministic NVCC bug. See #116289 for more detail.

Test plan: SSH into one of the CUDA builds and make sure that `LD_PRELOAD` is set for the top-level make command.

Pull Request resolved: #116900
Approved by: https://github.com/atalman
Co-authored-by: Nikita Shulga <nshulga@meta.com>
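As a rough illustration (not this PR's actual CI script), preloading jemalloc for the build step might look like the sketch below; the library lookup via `ldconfig` and the plain `make` invocation are assumptions, since the real change lives in the PyTorch build scripts:

```bash
# Hypothetical sketch: locate libjemalloc and preload it for the build.
# The lookup method and the build command are assumptions, not from the PR diff.
JEMALLOC_LIB="$(ldconfig -p | awk '/libjemalloc\.so/ {print $NF; exit}')"
if [ -n "${JEMALLOC_LIB}" ]; then
  export LD_PRELOAD="${JEMALLOC_LIB}"   # all child processes, incl. nvcc, inherit this
fi
make -j"$(nproc)"                       # top-level make runs with jemalloc preloaded
```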
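For the test plan, a quick check from an SSH session on the build machine could look like this; `pgrep -o -x make` (oldest exact-match process) is an assumption about how the top-level build is invoked:

```bash
# Hypothetical verification: inspect the environment of the running top-level
# make process and confirm LD_PRELOAD points at jemalloc.
MAKE_PID="$(pgrep -o -x make)"                 # oldest "make" = top-level make
tr '\0' '\n' < "/proc/${MAKE_PID}/environ" | grep '^LD_PRELOAD='
```

`/proc/<pid>/environ` is NUL-separated, hence the `tr '\0' '\n'` before grepping.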