Use local NFS server as PersistentVolume #410

Closed

jrabary opened this issue Mar 12, 2018 · 6 comments

Comments

@jrabary
jrabary commented Mar 12, 2018

Hi all,

We are testing kubeflow on a local cluster with 2 nodes and an additional NAS in our network as an NFS server to store data. To create the PersistentVolume we use the following configuration:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: bdd2-nfs
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.8.26.76
    path: "/testnfs

To use it with kubeflow we followed the user guide and used the disk like so:

ks param set --env=default kubeflow-core disks bdd2-nfs

but we are not able to spawn a new container from JupyterHub, and we get the following error:

with the bdd2-nfs-provisioner deployment:

MountVolume.SetUp failed for volume "bdd2-nfs" : mount failed: exit status 32 Mounting command: systemd-run Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/e94d93f7-2613-11e8-ab75-0cc47ae225f6/volumes/kubernetes.io~gce-pd/bdd2-nfs --scope -- mount -o bind /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/bdd2-nfs /var/lib/kubelet/pods/e94d93f7-2613-11e8-ab75-0cc47ae225f6/volumes/kubernetes.io~gce-pd/bdd2-nfs Output: Running scope as unit run-r165813602fe94c58a1b7f9eaedd13734.scope. mount: special device /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/bdd2-nfs does not exist

What is missing to get it working?
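For reference, a claim that would bind to a PV like the one above might look like the following sketch; the claim name is hypothetical, and volumeName is one way to pin the claim to that specific PV:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: bdd2-nfs-claim  # hypothetical name, for illustration only
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
  volumeName: bdd2-nfs  # pins this claim to the PV defined above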

@pdmack
Member

pdmack commented Mar 20, 2018

The current NFS provisioner assumes that the persistent disk is created in GCE with gcloud compute disks create ..., hence the error if you are not deploying in that environment with the associated persistent storage. I'm investigating some changes to make it more general.
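For example, it expects a GCE disk created along these lines before deployment (a sketch; the disk name matches the PV above and the zone is an assumption):

# Hypothetical zone; the disk name must match what the provisioner mounts
gcloud compute disks create bdd2-nfs --size=20GB --zone=us-central1-a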

@jlewi
Contributor

jlewi commented Mar 20, 2018

Related issue is #34 (UI element to specify volume claims).

@pdmack Do you have a suggestion about how to combine these issues into a single feature request for our 0.2.0 release?

@pdmack
Member

pdmack commented Mar 20, 2018

For #410, I was thinking we could parameterize the selection of the volume type. However, it's a block of config:

volumes: [{
  name: diskName,
  gcePersistentDisk: {
    pdName: diskName,
  },
}],
...
volumes: [{
  name: diskName,
  nfs: {
    server: nfsHost,
    path: nfsPath + diskName,
  },
}],

@jlewi My ksonnet fu is weak. Is it possible to import fragments like that? I have actually adapted my deployment to bind the Kubeflow NFS provisioner to a host NFS server. It seems to work once you get through the NFS hoops.

For #34, we would have to re-deploy the hub with any new storage components (e.g., nfs bound to hostPath, gcePersistentDisk, etc.).

@jlewi
Contributor

jlewi commented Mar 26, 2018

@pdmack We could refactor our ksonnet configs to make it easy for users to define additional PV/PVCs.

Our TF-Serving component is a good example:
https://github.com/kubeflow/kubeflow/blob/master/kubeflow/tf-serving/tf-serving.libsonnet

So we could structure our ksonnet config to have a top-level component which is a map of volumes and volumeMounts. We could then use late binding to allow users to easily extend/override that and have the value be used in our kube spawner.
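A minimal jsonnet sketch of that structure (the names here are hypothetical, not the actual Kubeflow config):

{
  // Hidden params object holds the volume map; late binding lets users override it.
  params:: {
    volumes: [],
    volumeMounts: [],
  },
  // The spawner spec reads whatever the final, mixed-in params contain.
  spawnerSpec: {
    volumes: $.params.volumes,
    volumeMounts: $.params.volumeMounts,
  },
}

A user could then extend it without touching the base config, e.g. base + { params+:: { volumes+: [{ name: "bdd2-nfs", nfs: { server: "10.8.26.76", path: "/testnfs" } }] } }.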

I think the solution I favor though is #34.

So I'm going to close this issue in favor of that issue.

Duplicate of #34

@jlewi jlewi closed this as completed Mar 26, 2018
@jlewi jlewi marked this as a duplicate of #34 Mar 26, 2018
@mro-aaskandani

I'm having a similar issue with a local auto-mount NFS server. I am trying to create a PV like this:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: hub-storage-u4-1
  namespace: kubeflow
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: "/kubeapp"
    server: "127.0.0.1"
    readOnly: false

I tried both "127.0.0.1" and "localhost", but I was not able to spawn the server.
Any idea how to use an auto-mount NFS server on a local 5-node cluster?
