
How can I work around "cannot create directory ‘/home/jovyan/work’: Permission denied" #2567

Closed
NKUCodingCat opened this issue Feb 27, 2019 · 4 comments

Comments

NKUCodingCat commented Feb 27, 2019

I have been trying this repo for several days, but whenever I try to spawn a new Jupyter notebook I get an error like #1241. I tried changing the PVC mount from /home/jovyan to /home/jovyan/work and setting 0777 permissions on the host path of the PV, but it still fails with permission denied.

Can anyone help me?

root@kube-worker:~/k8s_src/k8flow/ks_app# kubectl describe pod/jupyter-333 -n kubeflow
Name:               jupyter-333
Namespace:          kubeflow
Priority:           0
PriorityClassName:  <none>
Node:               kube-worker.rc5vvlsr/10.192.146.71
Start Time:         Wed, 27 Feb 2019 09:07:59 +0000
Labels:             app=jupyterhub
                    component=singleuser-server
                    heritage=jupyterhub
Annotations:        hub.jupyter.org/username: 333
Status:             Running
IP:                 10.44.0.32
Containers:
  notebook:
    Container ID:  docker://55c8601ef5fe8fbe2df276fc320a1a3a617d12cd79ca8bcc0967b129dfa39a84
    Image:         gcr.io/kubeflow-images-public/tensorflow-1.10.1-notebook-cpu:v0.4.0
    Image ID:      docker-pullable://gcr.io/kubeflow-images-public/tensorflow-1.10.1-notebook-cpu@sha256:3e547beeeb4eaafd47ccdfb1ee72e80ba76ed88a5a6db7af64d893dde0cb028e
    Port:          8888/TCP
    Host Port:     0/TCP
    Args:
      start-singleuser.sh
      --ip="0.0.0.0"
      --port=8888
      --allow-root
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Wed, 27 Feb 2019 09:13:35 +0000
      Finished:     Wed, 27 Feb 2019 09:13:35 +0000
    Ready:          False
    Restart Count:  6
    Requests:
      cpu:     1
      memory:  1Gi
    Environment:
      JUPYTERHUB_API_TOKEN:           93af1065a30e47beb9fe9c225d4d552b
      JPY_API_TOKEN:                  93af1065a30e47beb9fe9c225d4d552b
      JUPYTERHUB_CLIENT_ID:           jupyterhub-user-333
      JUPYTERHUB_HOST:                
      JUPYTERHUB_OAUTH_CALLBACK_URL:  /user/333/oauth_callback
      JUPYTERHUB_USER:                333
      JUPYTERHUB_API_URL:             http://jupyter-0:8081/hub/api
      JUPYTERHUB_BASE_URL:            /
      JUPYTERHUB_SERVICE_PREFIX:      /user/333/
      MEM_GUARANTEE:                  1.0Gi
      CPU_GUARANTEE:                  1.0
    Mounts:
      /home/jovyan from volume-0-333 (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from jupyter-notebook-token-5fm77 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  volume-0-333:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  333-workspace
    ReadOnly:   false
  jupyter-notebook-token-5fm77:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  jupyter-notebook-token-5fm77
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                    From                           Message
  ----     ------            ----                   ----                           -------
  Warning  FailedScheduling  11m (x16 over 12m)     default-scheduler              pod has unbound immediate PersistentVolumeClaims
  Warning  FailedScheduling  7m23s (x12 over 11m)   default-scheduler              0/2 nodes are available: 2 node(s) had taints that the pod didn't tolerate.
  Normal   Pulled            5m21s (x5 over 6m45s)  kubelet, kube-worker.rc5vvlsr  Container image "gcr.io/kubeflow-images-public/tensorflow-1.10.1-notebook-cpu:v0.4.0" already present on machine
  Normal   Created           5m21s (x5 over 6m45s)  kubelet, kube-worker.rc5vvlsr  Created container
  Normal   Started           5m20s (x5 over 6m45s)  kubelet, kube-worker.rc5vvlsr  Started container
  Warning  BackOff           97s (x25 over 6m43s)   kubelet, kube-worker.rc5vvlsr  Back-off restarting failed container

root@kube-worker:~/k8s_src/k8flow/ks_app# kubectl logs pod/jupyter-333 -n kubeflow
+ [[  --ip="0.0.0.0" --port=8888 --allow-root != *\-\-\i\p\=* ]]
+ '[' '!' -z '' ']'
+ '[' '!' -z '' ']'
+ '[' '!' -z '' ']'
+ '[' '!' -z '' ']'
+ '[' '!' -z '' ']'
+ '[' '!' -z '' ']'
+ '[' '!' -z '' ']'
+ . /usr/local/bin/pvc-check.sh
++ SRC_CONF=/tmp/jupyter_notebook_config.py
++ WORK_DIR=/home/jovyan/work
++ CONF_DIR=/home/jovyan/.jupyter
++ echo 'checking if /home/jovyan volume needs init...'
+++ ls --ignore=lost+found -A /home/jovyan
++ '[' '' ']'
checking if /home/jovyan volume needs init...
...creating /home/jovyan/work
++ echo '...creating /home/jovyan/work'
++ mkdir /home/jovyan/work
mkdir: cannot create directory ‘/home/jovyan/work’: Permission denied
root@kube-worker:~/k8s_src/k8flow/ks_app# kubectl get pvc --all-namespaces -o wide
NAMESPACE   NAME             STATUS   VOLUME                                           CAPACITY   ACCESS MODES   STORAGECLASS   AGE
kubeflow    333-workspace    Bound    k8f-pvgen-517c238b-51d1-4d47-8137-5310cc58c8ed   10Gi       RWO                           30m
kubeflow    7777-workspace   Bound    k8f-pvgen-8ed57d7a-e73b-4317-84bc-c894b6ff8aff   10Gi       RWO                           34m
kubeflow    katib-mysql      Bound    k8f-pvgen-5a30de38-371f-45f7-9db8-325429ee6498   10Gi       RWO                           136m
kubeflow    minio-pv-claim   Bound    k8f-pvgen-f870636f-e8be-4f7c-9c3d-1a3f86eb1e78   10Gi       RWO                           136m
kubeflow    mm-volume-1      Bound    k8f-pvgen-38f4f47a-dead-45b0-b5cf-c0a20bea588d   10Gi       RWX                           131m
kubeflow    mm-workspace     Bound    k8f-pvgen-01bed080-c653-4f47-8704-5a167d9f3e95   10Gi       RWO                           131m
kubeflow    mysql-pv-claim   Bound    k8f-pvgen-d446e8c4-3963-447f-8290-19c2be19a1a8   10Gi       RWO                           136m
root@kube-worker:~/k8s_src/k8flow/ks_app# kubectl get pv --all-namespaces -o wide
NAME                                             CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                     STORAGECLASS   REASON   AGE
k8f-pvgen-01bed080-c653-4f47-8704-5a167d9f3e95   10Gi       RWO            Retain           Bound    kubeflow/mm-workspace                             129m
k8f-pvgen-38f4f47a-dead-45b0-b5cf-c0a20bea588d   10Gi       RWX            Retain           Bound    kubeflow/mm-volume-1                              119m
k8f-pvgen-517c238b-51d1-4d47-8137-5310cc58c8ed   10Gi       RWO            Retain           Bound    kubeflow/333-workspace                            29m
k8f-pvgen-5a30de38-371f-45f7-9db8-325429ee6498   10Gi       RWO            Retain           Bound    kubeflow/katib-mysql                              136m
k8f-pvgen-8ed57d7a-e73b-4317-84bc-c894b6ff8aff   10Gi       RWO            Retain           Bound    kubeflow/7777-workspace                           129m
k8f-pvgen-d446e8c4-3963-447f-8290-19c2be19a1a8   10Gi       RWO            Retain           Bound    kubeflow/mysql-pv-claim                           136m
k8f-pvgen-f870636f-e8be-4f7c-9c3d-1a3f86eb1e78   10Gi       RWO            Retain           Bound    kubeflow/minio-pv-claim                           136m

I am an absolute newbie at k8s and Kubeflow... maybe I misunderstood something and messed things up?
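
What the log above shows: pvc-check.sh runs as the unprivileged jovyan user (UID 1000 in the standard Jupyter images) and tries to mkdir /home/jovyan/work inside the freshly mounted volume, whose top-level directory is typically created root-owned by the storage backend, so the mkdir is denied. A quick way to confirm, using the PV bound to this workspace:

# Inspect the PV to find its backing source and path:
kubectl get pv k8f-pvgen-517c238b-51d1-4d47-8137-5310cc58c8ed -o yaml
# Then, on the node hosting that path, check its ownership
# (the path below is a placeholder for whatever the PV spec points at):
ls -ld /path/from/the/pv/spec
# root:root with mode 0755 means jovyan (UID 1000) cannot create work/ there.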

pdmack (Member) commented Feb 27, 2019

What is generating the PV? How many nodes in your cluster? If it's a hostPath PV then the permissions need to be set on each node where the PV is being created.
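
For hostPath PVs, a minimal sketch of that fix, to be run on every node that can back the volume (the path below is hypothetical; substitute the hostPath from your PV spec). The notebook images run as jovyan, UID 1000 / GID 100, so that user needs write access:

# Hypothetical hostPath location; use the path from your PV spec.
sudo mkdir -p /mnt/kubeflow/333-workspace
# Preferred: give ownership to jovyan (UID 1000, GID 100 in the
# standard Jupyter images) rather than opening it world-writable.
sudo chown -R 1000:100 /mnt/kubeflow/333-workspace
# Blunter alternative:
# sudo chmod 0777 /mnt/kubeflow/333-workspace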

NKUCodingCat (Author)

@pdmack That seems to work, but then I ran into disk pressure (the node had run out of disk)... so now I am trying to work around it by setting up an NFS server (I am testing in VMs).

I think I have set up an NFS storage provisioner, but how can I get Kubeflow to use my NFS server?

root@kube-worker:~# kubectl get all
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   22h

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nfs-client-provisioner   0/1     0            0           14m

NAME                                                DESIRED   CURRENT   READY   AGE
replicaset.apps/nfs-client-provisioner-7bcc5c9bd9   1         0         0       14m
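
The workspace PVCs above have an empty STORAGECLASS, so one way to route new notebook PVCs to NFS is to mark the provisioner's StorageClass as the cluster default. A sketch, assuming the provisioner registered a StorageClass named nfs-client (check the real name first); the deployment above is also 0/1 ready and has to come up before any PVC can be provisioned:

# Find the StorageClass the NFS provisioner registered; "nfs-client"
# below is an assumption based on the common chart default.
kubectl get storageclass
# Mark it as the cluster default so PVCs created without a
# storageClassName (like the notebook workspace PVCs) land on NFS.
kubectl patch storageclass nfs-client -p \
  '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
# The provisioner deployment shows 0/1 ready, so also check why:
kubectl describe deployment nfs-client-provisioner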

NKUCodingCat (Author)

Let me start a new group of VMs... I think maybe I messed something up.


NKUCodingCat commented Feb 28, 2019

Alright, I got it working.

Just create a PV YAML as follows:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
  # note: PersistentVolumes are cluster-scoped, so metadata.namespace is ignored
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    # FIXME: use the right IP
    server: 10.130.44.20
    path: "/test/"
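
Applying it and checking that it binds (the file name is arbitrary):

kubectl apply -f nfs-pv.yaml
# Shows Available at first, then Bound once a workspace PVC claims it.
kubectl get pv nfs-pv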


But /test on the NFS server must be set to mode 0777.
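
On the NFS server side, a minimal sketch of the matching export (the export options are assumptions, not taken from this issue):

# Run on the NFS server (10.130.44.20 in the PV above).
sudo mkdir -p /test
sudo chmod 0777 /test   # jovyan (UID 1000) in the notebook pod must be able to write
# Export the directory; tighten the client range to your cluster network.
echo '/test *(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
sudo exportfs -ra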

Ref: https://blog.csdn.net/qianggezhishen/article/details/80762119
