WIP: A deployment story - Using GPUs on GKE #994
Comments
Awesome! 🍰 |
@consideRatio Hi, do you think swapping PyTorch in for TensorFlow in the Dockerfile will work? (changing the conda channel and the package to pytorch) |
@koustuvsinha yepp, installing both would also work i think. |
Cool. It sure will be fun to try to use GPUs on Azure AKS. Will report after having a chance to work on it. |
The post is now updated, I think it is easier to read and has a more logical order to the steps taken. It also has some extra verification steps, but still not enough verification steps I think. |
This is related to #992 correct? |
Correction- I meant #992 |
@jzf2101 ah! yepp thanks for connecting this |
Made an update to the text, I added information about autoscaling the GPU nodes. Something resolved itself, I'm not sure what, now it "only" takes 9 minutes + image pulling to get a GPU node ready. |
Which version of Ubuntu is in the Docker Images? I can't find it in the notes. |
@jzf2101 the image I provide in this post is built from jupyter/datascience-notebook (1), built on top of scipy-notebook (2), on top of minimal-notebook (3), on top of base-notebook (4), on top of ubuntu 18.04 aka bionic.
|
@consideRatio Thank you for putting this together! I am currently stuck at Step #5. I get an error when I try to run kubectl logs: error: cannot get the logs from *extensions.DaemonSet
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
Suggestions? |
@amanda-tan hmm clueless, but you could do a more explicit command:
kubectl logs -n kube-system nvidia-driver-installer-alskdjf
Where you would enter your actual pod name |
also the container name, as the ds uses init containers:
kubectl logs -n kube-system nvidia-driver-installer-alskdjf -c
nvidia-driver-installer
Best,
clkao
|
@clkao Yes! That worked, thank you! Also, just wanted to add that I got this to work -- the profileList config did not work for me; I probably made an error somewhere, but just whittling it down to:
worked like a charm. Thank you so much. |
ETA: I guess there is also a Pre-emptible GPU quota which must be increased! That solved #1. |
@amanda-tan yepp this will cost a lot. I don't know how to reduce the cost much, but the experience for the users can be improved greatly with user-placeholders as found in the z2jh 0.8-dev releases available already. In the best case, users would not have to wait for the scale up with these. See the "optimizations" section of z2jh.jupyter.org for more info about such autoscaling optimizations. Requires k8s 1.11+ and Helm 2.11+. Having multiple GPUs per node is also a reasonable idea; then the users could share some CPU even though they don't share the GPUs. |
I ran a short course using Jupyterhub and Kubernetes with pre-emptible GPUs and scaled up to about 50 users. I ran the nodes for 8 hours with a total cost of about $75 on Google Cloud. Using 10 CPU/8 GPU clusters worked well for me so that each user had 1 CPU and 1 GPU available. You do need an extra 2 CPUs per node to manage the sub-cluster, otherwise you will have 1 GPU sitting idle per cluster. Use K80 GPUs to keep costs minimized and make sure you are running in a region and zone that has them available. Adding extra RAM to a node is really cheap, so don't be afraid to do that beyond the 6.5 GB per CPU standard for the highmem instances. Make sure you have your quota increase requests in well before you need the nodes for the course because that was one of the more challenging parts for me to get through. You will need the GPUs (all regions) and regional GPU quotas increased. There are also separate quotas for preemptible GPUs versus regular GPUs, so be aware of those. You may also run into issues with quotas on the number of CPUs and the number of IP addresses you can have, so check on all those. |
@astrajohn I got that issue recently myself, and for me it was because:
This is the issue: the switch from the root user to the jovyan user will reset a certain set of paths, as can be spotted with

@rahuldave there are no GPU-specific user placeholder pods as part of the JupyterHub helm chart, but you can create one yourself by mimicking the statefulset called user-placeholder that is part of the helm chart and adjusting it slightly. Here is my attempt.

# Purpose:
# --------
# To have a way to ensure there is always X number of slots available for users
# that quickly need some GPU.
#
# Usage:
# ------
# 1. Update metadata.namespace, spec.template.spec.priorityClassName, and
#    spec.template.spec.schedulerName with your namespace and helm release name
#    respectively.
# 2. Verify the namespace matches where you deployed your JupyterHub helm chart
#    by inspecting `kubectl get namespace`, and verify the helm release name
#    with `kubectl get priorityclass`.
# 3. Optionally update spec.template.spec.affinity.nodeAffinity to match how
#    your JupyterHub helm chart was configured.
# 4. Optionally configure your resource requests to align with what you
#    provision your users in the Helm chart.
# 5. kubectl apply -f user-placeholder-gpu-daemonset.yaml
# 6. kubectl scale -n <namespace> sts/user-placeholder-gpu --replicas 1
# 7. You may want to ensure your continuous image puller has a toleration for
#    `nvidia.com/gpu=present:NoSchedule` as well to prepull the images, assuming
#    you have the same images for the GPU nodes as the CPU nodes.
#
#    kubectl patch -n <namespace> ds/continuous-image-puller --type=json --patch '[{"op":"add", "path":"/spec/template/spec/tolerations/-", "value": {"effect":"NoSchedule", "key":"nvidia.com/gpu", "operator":"Equal", "value":"present"}}]'
#
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: jupyterhub
    component: user-placeholder-gpu
  name: user-placeholder-gpu
  namespace: jupyterhub
spec:
  podManagementPolicy: Parallel
  replicas: 0
  selector:
    matchLabels:
      app: jupyterhub
      component: user-placeholder-gpu
  serviceName: user-placeholder-gpu
  template:
    metadata:
      labels:
        app: jupyterhub
        component: user-placeholder-gpu
    spec:
      affinity:
        nodeAffinity:
          # Make this either requiredDuring... or preferredDuring... so it
          # matches how you configured your Helm chart in
          # scheduling.userPods.nodeAffinity.matchNodePurpose.
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: hub.jupyter.org/node-purpose
                    operator: In
                    values:
                      - user
      containers:
        - image: gcr.io/google_containers/pause:3.1
          name: pause
          resources:
            limits:
              # OPTIONALLY: Configure these to align with how you configure your
              # users resource limits and requests in the JupyterHub Helm chart.
              # This is only relevant if you will have a mix of gpu and non-gpu
              # users on this GPU node though, as the limiting resource then could
              # end up being something else than the GPUs.
              nvidia.com/gpu: 1
            requests:
              nvidia.com/gpu: 1
      priorityClassName: jupyterhub-user-placeholder-priority
      schedulerName: jupyterhub-user-scheduler
      tolerations:
        - effect: NoSchedule
          key: hub.jupyter.org_dedicated
          operator: Equal
          value: user
        - effect: NoSchedule
          key: hub.jupyter.org/dedicated
          operator: Equal
          value: user
  updateStrategy:
    rollingUpdate:
      partition: 0
    type: RollingUpdate
|
I've been playing with NVIDIA's helm chart for injecting GPU drivers etc. as an alternative to Google's daemonset; worth looking into: https://github.com/NVIDIA/gpu-operator. I think this could provide an upstream helm chart dependency that could be included in response to a values.yaml setting to enable GPUs, which seems like a more z2jh approach? |
Thanks for describing that as an option @snickell! I think it is an approach that makes sense to document, but not to have as an optional chart dependency. Adding this would be a maintenance challenge too big to keep current, I think.
Having a working example in docs with a timestamp on when it worked seems like a good path in between to me. |
Totally makes sense to me to document the setup nicely and move on. Seems like GPU support out of the box would be a "nice to have someday" in the core, or perhaps in a companion chart? It's too bad the GPU setup story is so complex (at least on GKE) in 2020: it's deceptively easy to click the "yes give me a GPU in my pool!" button and surprisingly hard to get it all going correctly. IMO a great starting point would be if GKE would allow the GPU node taints to be optional, but of course that's not in our control 🤷🏾♀️ It'd sure be nice if NVIDIA and AMD got together and released gpu-operator :-P In case it's helpful:
|
No pressure though: on the companion chart front, one idea would be to have z2jh-experimental-gpu that pulls z2jh in as its helm dependency, and sprinkles in 'best effort, no guarantees' GPU bits. In my experience docs rot a lot faster than repos, because repos tend to get issues / PRs faster (for better or worse 🤣). Having an experimental companion chart would also start the process of building a foundation for GPUs that could someday be folded into z2jh mainstream. E.g. if it's 2023 and it's been a couple of years since z2jh-experimental-gpu had a major change and a ton of people are using it, you have info on how good the setup is and might think "let's put that in the main chart". Whereas with a documented example, you never quite know who's doing what, and how well it's working. |
It's being refactored at the moment, see #1664
That sounds sensible, it's the model used by the BinderHub Helm chart. |
This issue has been mentioned on Jupyter Community Forum. There might be relevant details there: https://discourse.jupyter.org/t/cuda-and-images-problem-on-jupyterhub-k8s/5139/2 |
@meeseeksmachine Thank you for your attention! @consideRatio Dear my friend, I followed your tutorial to build my
But, unfortunately, I could still not find
Is it because I did not complete step 5? I saw the nvidia information before I built and used this image. I just added the information below into |
So, how should I do this? Thanks! |
Has anyone done something similar to this using AWS EKS? I currently have GPU nodes running in my cluster with the official Amazon EKS optimized accelerated Amazon Linux AMIs, which, from my understanding, have the NVIDIA drivers and the nvidia-container-runtime (as the default runtime) already installed. I can successfully run a pod based off of the
However, I am having trouble constructing my own Docker image on top of one of the jupyter/docker-stacks images, as in this demonstration. I tried using the same Dockerfile shown above, but I cannot successfully run
I don't believe there is an equivalent to the nvidia-driver-installer daemonset for AWS EKS, as described above, so that is one difference here, but as described in the AWS docs, it sounds like the official AMI that I'm using should take care of the drivers already. It's hard to tell what's missing here. I'd really appreciate any help! |
@jkovalski I was able to get this working on EKS. There's an nvidia daemonset that you need to run on kubernetes in order for the kubernetes containers to use the nvidia GPUs. I think the AWS AMI includes the drivers, but not the kubernetes daemonset. I followed this tutorial: https://aws.amazon.com/blogs/compute/running-gpu-accelerated-kubernetes-workloads-on-p3-and-p2-ec2-instances-with-amazon-eks/ Then for the user notebook image, I used this: https://hub.docker.com/r/cschranz/gpu-jupyter/ which uses the NVIDIA CUDA image as the base, and installs the jupyter/docker-stacks dependencies on top. |
@jeffliu-LL Hm, so I have the NVIDIA device plugin daemonset already running in my cluster, and the Jupyter singleuser notebook pods are successfully getting scheduled onto my GPU node (p3.2xlarge), so that part is working. I'm also using the official AWS AMI: I also tried using that image that you linked, but when I try to run My issue seems to be related to the Docker image / the pod having access to the underlying GPU. Did you have to add anything to the Docker image to make it work? |
@jeffliu-LL Following up - the image seems okay. I spun up a pod using that base image I referenced above, and I was able to successfully run the |
@jkovalski My singleuser config entry looked like:
The line that might be relevant is the |
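The config snippet itself is not embedded in this thread. For readers following along, a hedged sketch of the kind of z2jh `singleuser` entry being discussed for GPU use on EKS, using the image mentioned in the earlier comment (the tag and exact values here are placeholders, not necessarily what was actually used):

```yaml
# Hypothetical z2jh values sketch for a GPU notebook on EKS.
singleuser:
  image:
    name: cschranz/gpu-jupyter   # image mentioned above; pin a concrete tag in practice
    tag: latest                  # placeholder
  extraResource:
    limits:
      nvidia.com/gpu: "1"        # likely the relevant line: requests one GPU
    guarantees:
      nvidia.com/gpu: "1"
```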
@jkovalski and @jeffliu-LL , i think there are probably various ways to get it done, and exact settings likely change with EKS and CUDA versions etc. We have this working on GKE and EKS currently and described the setup in a blog post https://medium.com/pangeo/deep-learning-with-gpus-on-pangeo-9466e25bfd74. It links to our images and config settings which are all open source. One other kubespawner setting we needed was |
@jeffliu-LL @scottyhq Thanks guys. Unfortunately, I still cannot get this working. I suspect it might have to do with the versions of NVIDIA/CUDA, etc. I am using the official Amazon Linux 2 AMI ( |
I'm having trouble getting UPDATE:
People having trouble with
But unless I'm mistaken, this DOES NOT set the toleration on the user-placeholder pods. The problem is that, as far as I can tell, there's no way to accept the GPU taint on user-placeholder without accidentally accepting it on ALL user pods. @consideRatio does that seem correct to you? |
Comment update
Correct, you are required to have two separate statefulsets with user-placeholders for this, and below I provide some code for you to add that to a helm chart that has the JupyterHub helm chart as a dependency, without needing to copy-paste much code. @snickell, a toleration for that taint is typically automatically provided as part of requesting a GPU alongside CPU/Memory. I'm copy-pasting a solution. Assuming you have a local chart that in turn depends on the JupyterHub Helm chart, you can add the following parts to it.

values.yaml
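The values.yaml content is not embedded above, but it can be reconstructed from the keys the templates below reference; it would need to contain something like this (assuming the JupyterHub chart dependency is named `jupyterhub`):

```yaml
# Inferred from the templates below.
userPlaceholderGPU:
  enabled: true      # render the GPU user-placeholder statefulset
  replicas: 1        # how many GPU slots to keep warm

jupyterhub:          # values passed through to the JupyterHub chart dependency
  scheduling:
    podPriority:
      enabled: true
    userScheduler:
      enabled: true
```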
_helpers.tpl

{{/*
NOTE: This utility template is needed until https://git.io/JvuGN is resolved.
Call a template from the context of a subchart.
Usage:
{{ include "call-nested" (list . "<subchart_name>" "<subchart_template_name>") }}
*/}}
{{- define "call-nested" }}
{{- $dot := index . 0 }}
{{- $subchart := index . 1 | splitList "." }}
{{- $template := index . 2 }}
{{- $values := $dot.Values }}
{{- range $subchart }}
{{- $values = index $values . }}
{{- end }}
{{- include $template (dict "Chart" (dict "Name" (last $subchart)) "Values" $values "Release" $dot.Release "Capabilities" $dot.Capabilities) }}
{{- end }}

Dedicated user-placeholder template for GPUs

{{- if .Values.userPlaceholderGPU.enabled }}
# Purpose:
# --------
# To ensure there is always X number of slots available for users that quickly
# need a GPU pod.
#
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: user-placeholder-gpu-p100
spec:
  podManagementPolicy: Parallel
  replicas: {{ .Values.userPlaceholderGPU.replicas }}
  selector:
    matchLabels:
      component: user-placeholder-gpu-p100
  serviceName: user-placeholder-gpu-p100
  template:
    metadata:
      labels:
        component: user-placeholder-gpu-p100
    spec:
      nodeSelector:
        gpu: p100
      {{- if .Values.jupyterhub.scheduling.podPriority.enabled }}
      priorityClassName: {{ .Release.Name }}-user-placeholder-priority
      {{- end }}
      {{- if .Values.jupyterhub.scheduling.userScheduler.enabled }}
      schedulerName: {{ .Release.Name }}-user-scheduler
      {{- end }}
      tolerations:
        {{- include "call-nested" (list . "jupyterhub" "jupyterhub.userTolerations") | nindent 8 }}
      {{- if include "call-nested" (list . "jupyterhub" "jupyterhub.userAffinity") }}
      affinity:
        {{- include "call-nested" (list . "jupyterhub" "jupyterhub.userAffinity") | nindent 8 }}
      {{- end }}
      terminationGracePeriodSeconds: 0
      automountServiceAccountToken: false
      containers:
        - image: gcr.io/google_containers/pause:3.1
          name: pause
          resources:
            limits:
              nvidia.com/gpu: 1
            requests:
              nvidia.com/gpu: 1
  updateStrategy:
    rollingUpdate:
      partition: 0
    type: RollingUpdate
{{- end }}
|
@consideRatio oh cool, processing now, sorry I replied to myself in an update to my above comment, confusing, my bad |
Wow, thank you for collecting that information so clearly @consideRatio , I appreciate it immensely, your workaround makes very good sense to me, and directly addresses my issue, I'll be trying it in a few minutes 🙏🏽🙏🏽🙏🏽 |
We're having the same issue with
We're using Even in @consideRatio's pull request on Also, I think @jkovalski is having the same issue. How are you guys dealing with that? |
@mohammedi-haroune I've used the closed PRs changes, replacing the start scripts for my image. It would be great to land similar changes to jupyter/docker-stacks. |
You're using root user to run So, this is the line responsible for keeping
|
When I use a root user and switch to another user, for example to first enable sudo for that user, retaining the LD_* vars or the PATH var is a challenge. It is in the transition that environment variables can be stripped, and I think that change is what ensures those aren't stripped. |
Added this to my Dockerfile:
RUN echo 'Defaults env_delete -= "LD_*"' >> /etc/sudoers.d/added-by-dockerfile |
@mohammedi-haroune thanks for posting this, this has been giving us massive grief too |
@consideRatio Hey mate, thanks for all of the amazing work on this. Is your Docker image meant to have the libcuda.so CUDA libraries installed? I get this error when I try to import tensorflow, which comes down to the fact that there is no libcuda.so symlink. Additionally, it doesn't look like there are CUDA drivers installed in the same directory.
$ ls | grep cuda
libicudata.a
libicudata.so
libicudata.so.60
libicudata.so.60.2
$ python
Python 3.6.3 |Anaconda, Inc.| (default, Nov 9 2017, 00:19:18)
[GCC 7.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
Traceback (most recent call last):
File "/opt/conda/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "/opt/conda/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "/opt/conda/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "/opt/conda/lib/python3.6/imp.py", line 243, in load_module
return load_dynamic(name, filename, file)
File "/opt/conda/lib/python3.6/imp.py", line 343, in load_dynamic
return _load(spec)
ImportError: libcuda.so.1: cannot open shared object file: No such file or directory
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/conda/lib/python3.6/site-packages/tensorflow/__init__.py", line 22, in <module>
from tensorflow.python import pywrap_tensorflow # pylint: disable=unused-import
File "/opt/conda/lib/python3.6/site-packages/tensorflow/python/__init__.py", line 49, in <module>
from tensorflow.python import pywrap_tensorflow
File "/opt/conda/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow.py", line 74, in <module>
raise ImportError(msg)
ImportError: Traceback (most recent call last):
File "/opt/conda/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "/opt/conda/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "/opt/conda/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "/opt/conda/lib/python3.6/imp.py", line 243, in load_module
return load_dynamic(name, filename, file)
File "/opt/conda/lib/python3.6/imp.py", line 343, in load_dynamic
return _load(spec)
ImportError: libcuda.so.1: cannot open shared object file: No such file or directory
Failed to load the native TensorFlow runtime.
See https://www.tensorflow.org/install/install_sources#common_installation_problems
for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help. |
If anyone wants to use GPU enabled JupyterHub on GKE Autopilot, I have described the details here at this link |
GPU powered machine learning on GKE
To enable GPUs on GKE, this is what I've done. Note that this post is a Work In Progress and will be edited from time to time. To see when the last edit was made, see the header of this post.
Prerequisite knowledge
Kubernetes nodes, pods and daemonsets
A node represents actual hardware on the cloud, a pod represents something running on a node, and a daemonset will ensure one pod running something is created for each node. If you lack knowledge about Kubernetes, I'd recommend learning more at their concepts page.
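For reference, a minimal sketch of a DaemonSet manifest (the name and image are placeholders), just to illustrate the one-pod-per-node idea:

```yaml
# Hypothetical example: the DaemonSet controller creates one such pod on every node.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-node-agent
spec:
  selector:
    matchLabels:
      app: example-node-agent
  template:
    metadata:
      labels:
        app: example-node-agent
    spec:
      containers:
        - name: agent
          image: gcr.io/google_containers/pause:3.1   # placeholder; does nothing
```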
Bonus knowledge:
This video provides background allowing you to understand why additional steps are required for this to work: https://www.youtube.com/watch?v=KplFFvj3XRk
NOTE: Regarding taints: GPU nodes on GKE will get a taint, and pods requesting GPUs will get a matching toleration, without any additional setup.
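Concretely, the taint GKE applies to GPU nodes is `nvidia.com/gpu=present:NoSchedule` (the same key patched onto the image puller in the comments below), and a pod requesting a GPU effectively ends up with a matching toleration. A sketch, for illustration only:

```yaml
# The GPU node carries the taint nvidia.com/gpu=present:NoSchedule (set by GKE).
# A GPU-requesting pod effectively gets the toleration below without you adding it.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-toleration-illustration   # hypothetical pod, for illustration only
spec:
  restartPolicy: Never
  containers:
    - name: main
      image: gcr.io/google_containers/pause:3.1   # placeholder image
      resources:
        limits:
          nvidia.com/gpu: 1            # the GPU request
  tolerations:
    - key: nvidia.com/gpu              # matches the node taint
      operator: Equal
      value: present
      effect: NoSchedule
```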
1. GKE Kubernetes cluster on a GPU enabled zone
Google has various zones (datacenters); some do not have GPUs. First you must have a GKE cluster coupled with a zone that has GPU access. To find out which zones have GPUs and what kinds of GPUs they have, see this page. In overall performance and cost, K80 < P100 < V100. Note that there are also TPUs and that their availability is also zone dependent. This documentation will not address utilizing TPUs though.
Note that GKE Kubernetes clusters come pre-installed with some parts needed for GPUs to be utilized: the `nvidia-gpu-device-plugin`. I don't know fully what this does yet, but it is part of what is needed for pods to request `nvidia.com/gpu: 1` properly.

2. JupyterHub installation
This documentation assumes you have deployed a JupyterHub already by following the https://z2jh.jupyter.org guide on your Kubernetes cluster.
3. Docker image for the JupyterHub users
I built an image for a basic Hello World with GPU-enabled TensorFlow. If you are fine with utilizing this, you don't need to do anything further. My image is available as `consideratio/singleuser-gpu:v0.3.0`.

About the Dockerfile
I build on top of a jupyter/docker-stacks image so that JupyterHub integrates well with it. I also pinned `cudatoolkit=9.0`: it is a dependency of `tensorflow-gpu`, but without the pin an even newer version would be installed that is unsupported by the GPUs I'm aiming to use, namely Tesla K80 or Tesla P100. To learn more about these compatibility issues, see: https://docs.anaconda.com/anaconda/user-guide/tasks/gpu-packages/

Dockerfile reference
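The Dockerfile itself is not reproduced here. As a rough sketch of the conda pinning described above (the TensorFlow version is an assumption on my part; the cudatoolkit pin is the one from the text), the environment installed from such a Dockerfile could look like:

```yaml
# Hypothetical environment.yml: tensorflow-gpu pulls in cudatoolkit, and pinning
# cudatoolkit=9.0 keeps it usable on Tesla K80 / P100 nodes with a >= 384.81 driver.
name: base
channels:
  - defaults
dependencies:
  - tensorflow-gpu=1.12   # assumed version; 1.11/1.12 pair with cudatoolkit 9.0
  - cudatoolkit=9.0
```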
NOTE: To make this run without a GPU available, you must still install an NVIDIA driver. This can be done using `apt-get install nvidia-384`, but if you do, it must not conflict with the `nvidia-driver-installer` daemonset that sadly still needs to run later, as far as I know. This is a rabbit hole and hard to maintain, I think.

3B. Create an image using repo2docker (WIP)
jupyterhub/team-compass#96 (comment)
4. Create a GPU node pool
Create a new node pool for your Kubernetes cluster. I chose an `n1-highmem-2` node with a Tesla K80 GPU. These instructions are written and tested for K80 and P100.

Note that there is an issue with autoscaling from 0 nodes, and that it is a slow process to scale up a GPU node, as it needs to start, install drivers, and download the image file - each step takes quite a while. I'm expecting 5-10 minutes of startup for this. I recommend you start out with a single fixed node while setting this up initially.
For details on how to setup a node pool with attached GPUs on the nodes, see: https://cloud.google.com/kubernetes-engine/docs/how-to/gpus#create
5. Daemonset: nvidia-driver-installer
You need to make sure the GPU nodes get appropriate drivers installed. This is what the `nvidia-driver-installer` daemonset will do for you! It will install drivers and utilities in `/usr/local/nvidia`, which is required for the conda package `tensorflow-gpu`, for example, to function properly.

NOTE: TensorFlow has a pinned dependency on cudatoolkit, and a given cudatoolkit requires a minimum NVIDIA driver version. For example, `tensorflow=1.11` and `tensorflow=1.12` require `cudatoolkit=9.0` and `tensorflow=1.13` will require `cudatoolkit=10.0`; `cudatoolkit=9.0` requires an NVIDIA driver of at least version `384.81` and `cudatoolkit=10.0` requires an NVIDIA driver of at least version `410.48`.

Set a driver version for the nvidia-driver-installer daemonset to install
The default driver for the daemonset above, as of writing, is `396.26`. I struggled with installing that without this daemonset, so I ended up using `384.145` instead.

Option 1: Use a one liner
Option 2: manually edit the daemonset manifest...
Reference: https://github.com/GoogleCloudPlatform/container-engine-accelerators/tree/master/cmd/nvidia_gpu
6. Configure some spawn options
Perhaps the user does not always need a GPU, so it is good to allow the user to choose instead. This can be done with the following configuration.
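The original configuration is not embedded in this post; a sketch of what such a choice can look like with the chart's `singleuser.profileList` option (display names are placeholders, and on older kubespawner versions the image override key may be `image_spec` rather than `image`):

```yaml
# Hypothetical config.yaml sketch: let users choose between CPU-only and GPU.
singleuser:
  profileList:
    - display_name: "Default: CPU only"
      default: true
    - display_name: "GPU enabled (Tesla K80)"
      kubespawner_override:
        image: consideratio/singleuser-gpu:v0.3.0   # the image from step 3
        extra_resource_limits:
          nvidia.com/gpu: "1"
        extra_resource_guarantees:
          nvidia.com/gpu: "1"
```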
Result
Note that this displays a screenshot of the configuration I've utilized, which differs slightly from the example configuration and setup documented in this post.
7. Verify GPU functionality
After you have got a Jupyter GPU pod launched and running, you could verify your GPU works as intended by opening `TensorFlow-Examples/notebooks/convolutional_network.ipynb` and running all cells.
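Independently of JupyterHub, you can also sanity-check the node setup with a one-off pod that just runs `nvidia-smi`; a sketch (the image tag is an assumption, and the GPU taint toleration is added automatically on GKE as noted earlier):

```yaml
# Hypothetical smoke-test pod: if this completes and its logs show the GPU,
# the driver installer and device plugin on the node are working.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-smoke-test
spec:
  restartPolicy: Never
  containers:
    - name: nvidia-smi
      image: nvidia/cuda:9.0-base   # assumed tag; any CUDA base image of that era should do
      command: ["nvidia-smi"]       # resolved via the driver mount at /usr/local/nvidia
      resources:
        limits:
          nvidia.com/gpu: 1
```

Check its output with `kubectl logs gpu-smoke-test` once it has run.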
Previous issues
Autoscaling - no longer an issue?
UPDATE: I'm not sure why this happened, but it doesn't happen any more for me.
I've had massive trouble autoscaling. I managed to autoscale from 1 to 2 nodes, but it took 37 minutes... Autoscaling down worked as it should, with 10 minutes of an unused GPU node before it was scaled down.
To handle the long scale up time, you can configure a long timeout for kubespawner's spawning procedure like this:
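The snippet is not included above; a sketch of what this can look like in the chart values (the one-hour figure is just an example):

```yaml
# Hypothetical config.yaml sketch: allow spawns to wait out a slow GPU node scale-up.
singleuser:
  startTimeout: 3600   # seconds before JupyterHub gives up on the spawn
```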
Latest update (2018-11-15)
I got autoscaling to work, but it is slow still, it takes about 9 minutes plus the time for your image to be pulled to the new node. Some lessons learned:
- The cluster autoscaler runs simulations using a hardcoded copy of the kube-scheduler default configuration logic, so utilizing a custom kube-scheduler configuration with different predicates could cause issues. See Getting the CA to play well with a custom scheduler kubernetes/autoscaler#1406 for more info.
- I stopped using a dynamically applied label as a label selector (`cloud.google.com/gke-accelerator=nvidia-tesla-k80`). I don't remember if this worked at all with the cluster autoscaler, or whether it worked to scale both from 0->1 nodes and from 1->2 nodes. If you want to select a specific GPU from multiple node pools, I'd recommend adding your own pre-defined labels like `gpu: k80` and using them to select nodes with a nodeSelector (see the sketch after this list).
- I started using the default-scheduler instead of the jupyterhub-user-scheduler, as I figured it would be safer to not risk there being a difference in what predicates they use, even though they may have the exact same predicates configured. NOTE: a predicate is, in this case, a function that takes information about a node and returns true or false depending on whether the node is a candidate to be scheduled on.
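As referenced in the label lesson above, a sketch of consuming such a pre-defined node label from the chart values (the profile name is a placeholder):

```yaml
# Hypothetical: route a GPU profile onto nodes labeled gpu=k80 at node pool creation.
singleuser:
  profileList:
    - display_name: "GPU (Tesla K80)"
      kubespawner_override:
        node_selector:
          gpu: k80
        extra_resource_limits:
          nvidia.com/gpu: "1"
```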
To debug the autoscaler:
- `kubectl describe pod -n jhub jupyter-erik-2esundell`
- For the `user-k80` node pool, look at `cloudProviderTarget`, `registered` and `ready`. You want all to become `ready`.
- You can also inspect the node events with `kubectl describe node the-name-of-the-node`:

Potentially related:
- I'm using Kubernetes `1.11.2-gke.9`, but my GPU nodes apparently have `1.11.2-gke.15`.
- Autoscaling from 0 nodes: kubernetes/autoscaler#903
User placeholders for GPU nodes
Currently the user placeholders can only go to one kind of node pool, and it would make sense to allow the admin to configure how many placeholders for a normal pool and how many for a GPU pool. They are needed for autoscaling ahead of arriving users to not force them to wait for a new node, and this could be extra relevant for GPU nodes as they may need to be created on the fly every time for an arriving real user without the user placeholders.
We could perhaps instantiate multiple placeholder deployment/statefulsets based on a template and some extra specifications.
Pre pulling images specifically for GPU nodes
Currently we can only specify one kind of image puller, pulling all kinds of images to a single type of node. It is pointless to pull and especially to wait for image pulling of unneeded images, so it would be nice to optimize this somehow.
This is tracked in #992 (thanks @jzf2101!)
The future - Shared GPUs
Users cannot share GPUs like they can share CPU; this is an issue. But in the future, perhaps? From what I've heard, this is something that is progressing right now.