Upgrade CUDA from 9.1 to 10.0 #8482
Conversation
Thanks for your pull request. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). 📝 Please follow instructions at https://git.k8s.io/community/CLA.md#the-contributor-license-agreement to sign the CLA. It may take a couple minutes for the CLA signature to be fully registered; after that, please reply here with a new comment and we'll verify. Thanks.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
Welcome @fifar!
Hi @fifar. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/ok-to-test
Thanks for the update! Do you think you could update the documentation with the more recent versions that you've tested this on? https://github.com/kubernetes/kops/tree/master/hooks/nvidia-device-plugin#prerequisites Otherwise we can get this merged and update the docs in a separate PR.
Sure, will update the doc.
Looks great! Now that I've read through the README I'm wondering about the support for CUDA 9.1. The Makefile and Docker image will only support CUDA 10.0, correct? It might be weird to have docs that walk through setting up CUDA 9.1 if the Makefile can't build a CUDA 9.1 image anymore. Though the docs do reference someone else's third-party Docker image, so theoretically that image should still work with CUDA 9.1. Perhaps we add something to the README like
Sorry, one more minor thing and then I think it's good to merge :) Thanks for sticking with this.
That e2e job failure is just a flake, so we can retry it if we need to.
Thanks! Glad we can finally get this up to date.
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: fifar, rifelpet. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
Thanks @rifelpet for your review, especially the suggestions.
Hello! I'm trying to use this new CUDA 10.0.
After I create the node I run a few checks. Some information about this nvidia-gpu node:
@marcoaleixo Getting a GPU node ready takes several minutes and consists of two steps: 1) the node joins the cluster (say, 2~3 minutes); 2) the devices are exposed, which is done by the hook container (say, 5~6 minutes). So, after creating the cluster, take a rest, then come back and check.
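For reference, one way to tell whether step 2 has finished is to look for the GPU resource under the node's allocatable resources (the upstream NVIDIA device plugin registers it as `nvidia.com/gpu`). The sketch below works on a hypothetical `kubectl` output; the node names and values are invented for illustration:

```shell
# Hypothetical output of:
#   kubectl get nodes -o custom-columns='NAME:.metadata.name,GPU:.status.allocatable.nvidia\.com/gpu'
# (node names and GPU counts below are made up for illustration)
nodes='NAME               GPU
ip-10-0-1-10.ec2   <none>
ip-10-0-1-11.ec2   1'

# Count the nodes that already expose GPU devices; <none> means the
# hook container has not finished yet.
ready=$(printf '%s\n' "$nodes" | awk 'NR>1 && $2 ~ /^[0-9]+$/ {n++} END {print n+0}')
echo "$ready GPU node(s) ready"
```

Re-running a check like this every couple of minutes after cluster creation shows when the node flips from `<none>` to a device count.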
@fifar thank you for the response. Your command is returning "none" for all my nodes. In the AWS console my node is ready. Are you able to test my Docker Hub image? Or can you share a Node.yaml config? Edit: I think I found the problem: `kubectl logs -f nvidia-device-plugin-daemonset-bkq57 --namespace=kube-system` shows `2020/05/02 03:49:30 Loading NVML`
@marcoaleixo Not sure what the issue is.
@fifar Yeah, same error "Failed to initialize NVML: could not load NVML library." |
@marcoaleixo Below is the example
@fifar even with your configuration it didn't work :/
@marcoaleixo Sorry that it didn't help. Please note that my environment is Kubernetes 1.15.5 + kops 1.15.0; check the first row of the test matrix here. Also, could you make the GPU node SSH-able, SSH into it, and check the logs in the directory
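For reference, the failure mode in this thread can be spotted straight from the device-plugin pod log: "Failed to initialize NVML" almost always means the NVIDIA driver library (`libnvidia-ml.so`) is missing or not visible on the host. A minimal sketch, using a hypothetical log excerpt modeled on the error quoted above:

```shell
# Hypothetical device-plugin log excerpt (modeled on the error quoted above)
log='2020/05/02 03:49:30 Loading NVML
2020/05/02 03:49:30 Failed to initialize NVML: could not load NVML library.'

# If NVML cannot load, the NVIDIA driver is likely missing on the node;
# after SSHing in, one would typically confirm with `nvidia-smi` or by
# looking for libnvidia-ml.so under /usr/lib.
result=ok
if printf '%s\n' "$log" | grep -q 'Failed to initialize NVML'; then
  result=driver-missing
fi
echo "$result"
```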
@fifar Connected via SSH, I'm manually running every script, and when I run nvidia-device-plugin.sh I'm receiving the error.
The kubelet is running. Is protokube the main cause of the problem?!
Well, @fifar Ty! |
Yeah, these init services are tricky. Good to know you finally got your cluster working. @marcoaleixo
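For reference, a hypothetical sanity check for the init-service ordering discussed above: per the exchange, both protokube and kubelet need to be up before the node behaves. The condensed `systemctl`-style output below is invented for illustration:

```shell
# Hypothetical condensed output of `systemctl --no-legend list-units`
# on the node (service states below are made up for illustration)
status='protokube.service  loaded active running
kubelet.service    loaded active running'

# The node is only usable once both init services are running
both_running=no
if printf '%s\n' "$status" | grep -q '^protokube\.service.*running' &&
   printf '%s\n' "$status" | grep -q '^kubelet\.service.*running'; then
  both_running=yes
fi
echo "init services running: $both_running"
```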
Recent deep learning frameworks like TensorFlow and PyTorch require at least CUDA 10.0.
TensorFlow: https://www.tensorflow.org/install/gpu#software_requirements
PyTorch: https://pytorch.org/get-started/locally/