[NCCL] v2.18.5 #7491
Conversation
IIUC, looking at Nix, we could build NCCL from https://github.com/NVIDIA/nccl
I'm wondering what else is required to get this newer version released?
Given that, is it fine to manually upload it for the time being?
I'll do it one last time. I've uploaded the following sources:
Thank you for doing this. I have been playing with building NCCL, but I am hitting some errors with the builds. I will keep pushing on that front for future builds.
Looks like it's just the hashes that are wrong; you can use what I posted above.
I have the hashes you posted (and they match the ones I calculated). I tried switching to the hashes that Buildkite was seeing, but they changed again on the next build, so Buildkite was likely hashing the error page you get when you try to download the sources without a valid session.
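A quick way to catch this class of problem locally is to hash the downloaded file yourself and sanity-check what you actually fetched. A minimal sketch (the filename below is a stand-in, not one of the real NCCL tarball names from this PR):

```shell
# Stand-in for a downloaded NCCL tarball; replace with the real file.
printf 'dummy payload' > example.txz

# Print the SHA256 that the build recipe would need to record.
# If an HTML error page was downloaded instead of the tarball, `file`
# will report text rather than XZ/tar data, and the hash won't match.
file example.txz
sha256sum example.txz | cut -d' ' -f1
```

Comparing this local hash against the one Buildkite reports distinguishes a wrong recorded hash from a failed (e.g. unauthenticated) download.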
You're using the wrong filenames.
From my understanding, @maleadt or someone else must upload the private NCCL binaries to the Yggdrasil servers. The page with the NCCL download links is available here.
The goal of bumping NCCL is to add multi-GPU training to XGBoost.jl. I have a working NCCL+XGBoost_jll build with CUDA v11 here.