Enable PVCs for per-user persistent storage #22
Remember that tearing down a cluster doesn't clean up the PVCs it provisioned; you have to delete the PVCs manually before tearing down the cluster. This relies on a default storage provisioner being present on the k8s cluster. On GKE, this gives you 10Gi of standard non-SSD storage.
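A hedged sketch of that manual cleanup using the official Kubernetes Python client; the `jupyterhub` namespace is an assumption:

```python
from kubernetes import client, config

# PVCs outlive cluster-teardown scripts, so delete them explicitly first.
# Assumes user claims live in a "jupyterhub" namespace; adjust as needed.
config.load_kube_config()
v1 = client.CoreV1Api()
for pvc in v1.list_namespaced_persistent_volume_claim('jupyterhub').items:
    v1.delete_namespaced_persistent_volume_claim(pvc.metadata.name, 'jupyterhub')
```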
I tried this and the PVC doesn't appear to get attached to the pod for the notebook. I tried looking at https://github.com/jupyterhub/kubespawner/blob/master/kubespawner/spawner.py and I see where the claim gets created, but I don't see where it's attached to the pod. |
Ah, right. We have to set the volumes explicitly too; see https://github.com/jupyterhub/zero-to-jupyterhub-k8s/blob/master/images/hub/jupyterhub_config.py#L79 for the extra bits needed. Specifically |
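A minimal sketch of those extra bits in jupyterhub_config.py, modeled on the zero-to-jupyterhub config linked above; the traitlet names, capacity, and mount path are assumptions and may differ across kubespawner versions:

```python
# jupyterhub_config.py (sketch): create a per-user PVC and mount it.
# Traitlet names are assumptions; check your kubespawner version.
c.KubeSpawner.user_storage_pvc_ensure = True
c.KubeSpawner.pvc_name_template = 'claim-{username}'
c.KubeSpawner.user_storage_capacity = '10Gi'

# Creating the claim alone isn't enough: the pod spec must reference it.
c.KubeSpawner.volumes = [{
    'name': 'volume-{username}',
    'persistentVolumeClaim': {'claimName': 'claim-{username}'},
}]
c.KubeSpawner.volume_mounts = [{
    'name': 'volume-{username}',
    'mountPath': '/home/jovyan/work',
}]
```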
I updated it! |
This works. Thanks for the quick fix. |
We should add instructions for it to continue to work on Minikube before merging. (or verify if it already does work) |
This should work on minikube; it ships with a default hostPath-based provisioner. |
Ah, sounds good. As far as I know, that's an addon, but I guess it's enabled by default. We should have a line in the docs saying that we depend on the minikube addon for |
Now my notebook pod seems to be hitting some issues when it tries to start up. I'm going to try deleting the PV and PVC. |
Where are you running this? It looks like the user can't write to the provisioned PV. I've forgotten how that works on GKE... |
I'm running on GKE. I'm guessing the directory is owned by root, so the fact that the container is running as user joyuan is a problem. |
Yup, the root directory of the PD is owned by root. I was able to modify the config so I mount the PD in a subdirectory of my home directory. I could then kubectl exec into the container to change the permissions and make it world-readable. I'm not sure how we would automate this. |
Can we just run the Jupyter containers in this configuration as Unix root? Every user gets a dedicated pod+PD anyway. The per-user model may make more sense in a shared-NFS kind of setup. |
That's one option. It looks like the Jupyter container runs the script start-singleuser.sh as root and then does an su to joyuan. So presumably the startup script could create the needed directories and set permissions before issuing the su. |
That sounds like a good idea. To either do it in the startup script, or an init container, in the interest of disallowing users from changing certain aspects of their pod. We should ensure that this is all truly portable as well. |
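A hedged sketch of the init-container option, built with the official Kubernetes Python client; the image, GID, and paths are all illustrative:

```python
from kubernetes import client

# Init container (sketch) that fixes ownership of the mounted volume
# before the notebook container starts. All specifics are assumptions.
fix_permissions = client.V1Container(
    name='fix-permissions',
    image='busybox',
    command=['sh', '-c', 'chown -R 1000:100 /home/jovyan'],
    volume_mounts=[client.V1VolumeMount(
        name='volume-{username}',
        mount_path='/home/jovyan',
    )],
)
```

If your kubespawner version exposes an init_containers traitlet, a container like this could be injected through it.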
We have had good success just setting fsGid (there is a traitlet for it, I forget what it is) to the gid. It works across the clouds we have tried, and it does the initial chown, I think. IMO that is a better default than running as root. |
Yeah, the trick is to use a filesystem group in the pod spec: a gid that gets applied to the filesystem (sticky gid) and also set on the container's process group. Here is an example. It's unfortunate that the default behavior on GKE does not support this. Container images are not easily introspectable. |
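A minimal sketch of such a pod spec via the official Python client; the GID and names are illustrative:

```python
from kubernetes import client

# Sketch: fsGroup makes the kubelet chgrp the volume to this GID and adds
# the GID to each container's supplemental groups. Values are illustrative.
pod_spec = client.V1PodSpec(
    security_context=client.V1PodSecurityContext(fs_group=1000),
    containers=[client.V1Container(name='notebook',
                                   image='jupyter/base-notebook')],
)
```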
Do we have to update kubespawner to set |
Yep, we can set it with |
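Assuming the traitlet the thread is referring to is kubespawner's fs_gid (singleuser_fs_gid in some older releases), the config would be a one-liner; the GID is illustrative:

```python
# jupyterhub_config.py (sketch): set the pod's fsGroup so provisioned
# volumes end up group-writable by the notebook user.
c.KubeSpawner.fs_gid = 1000
```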
"}},705da7d71921eb85673912784f0d2c1b51],["CurrentCommunityInitialData",[],{},490],["ResourceWatcher",[],{"module":null},1404],["MPageletUtilities",[],{},358],["MJSEnvironment",[],{"IS_APPLE_WEBKIT_IOS":true,"IS_TABLET":false,"IS_ANDROID":false,"IS_CHROME":false,"IS_FIREFOX":false,"IS_WINDOWS_PHONE":false,"OS_VERSION":6.1,"PIXEL_RATIO":2,"BROWSER_NAME":"Mobile Safari"},46],["ZeroCategoryHeader",[],{},1127],["ZeroRewriteRules",[],{},1478],["UserAgentData",[],{"browserArchitecture":"32","browserFullVersion": |
The ksonnet stuff is great! Going to close this now :) |
Hi, I have followed these steps https://github.com/kubeflow/kubeflow/blob/master/user_guide.md to deploy Kubeflow. Can you suggest a solution for adding a persistent volume?
@akshaydtada Can you open a new issue please? @inc0 made some recent changes. I suspect an indent error in kube_spawner.py |
… node pool. * Fix kubeflow#22; test flakes by making the GKE resource names depend on the deployment name so multiple deployments won't reference the same resources.