
PVC not working with K3S #85

Closed
zoobab opened this issue Feb 28, 2019 · 24 comments
Labels: kind/enhancement An improvement to existing functionality

@zoobab

zoobab commented Feb 28, 2019

Describe the bug

Pods using PVCs are not starting.

To Reproduce

Do a kubectl apply -f busypvc.yaml where busypvc.yaml is:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: busyboxpv
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busyboxpv
spec:
  selector:
    matchLabels:
      app: busyboxpv
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: busyboxpv
    spec:
      containers:
      - image: busybox
        name: busyboxpv
        command: ["sleep", "60000"]
        volumeMounts:
        - name: busyboxpv
          mountPath: /mnt
      volumes:
      - name: busyboxpv
        persistentVolumeClaim:
          claimName: busyboxpv

On another cluster, it takes about 10 seconds to get a running busybox container, but here it stays in a Pending state forever:

/ # kubectl get pods
NAME                                                    READY   STATUS     RESTARTS   AGE
busyboxpv-77665c79f4-f2fhp                              0/1     Pending    0          3m44s

A describe on the PVC gives:

/ # kubectl describe pvc busyboxpv
Name:          busyboxpv
Namespace:     default
StorageClass:
Status:        Pending
Volume:
Labels:        <none>
Annotations:   kubectl.kubernetes.io/last-applied-configuration:
                 {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"busyboxpv","namespace":"default"},"spec":{"accessMo...
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Events:
  Type       Reason         Age                  From                         Message
  ----       ------         ----                 ----                         -------
  Normal     FailedBinding  9s (x19 over 4m20s)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set
Mounted By:  busyboxpv-77665c79f4-f2fhp

Expected behavior

The busybox pod should be running with the defined PVC mounted.

@curx
Contributor

curx commented Feb 28, 2019

Has a StorageClass been set up?
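
A quick way to check is to list the cluster's storage classes and see whether any is marked as default:

kubectl get storageclass
# No classes, or no "(default)" marker, means a PVC without an explicit storageClassName will stay Pending.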

@prompt-bot

You must add a storage class before creating a PVC. Does k3s support storage classes like Ceph or Heketi?

I don't know; it needs to be tested... I will do it.

@ibuildthecloud
Contributor

k3s doesn't come with a default storage class. We are looking at including https://github.com/rancher/local-path-provisioner by default which just uses local disk, such that PVCs will at least work by default. You can try that storage class or install another third party one. More info here
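
As a rough sketch (assuming the local-path-provisioner linked above is installed and registers a class named local-path), the PVC from the issue description can reference the class explicitly via storageClassName:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: busyboxpv
spec:
  storageClassName: local-path   # name assumed from the local-path-provisioner defaults
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi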

@zoobab
Author

zoobab commented Feb 28, 2019

@ibuildthecloud yeah, I tried that local-storage class this morning but could not get it to work...

@ibuildthecloud ibuildthecloud added this to Backlog in K3S Development Feb 28, 2019
@codrcodz

codrcodz commented Feb 28, 2019

@ibuildthecloud, I was able to confirm that it works using the local-path storageClass. After following the installation instructions in the README.md, I tested it by passing it the following, and it worked; I have a running SonarQube instance up now:

helm install --name mytest2 stable/sonarqube --set=postgresql.persistence.storageClass=local-path,persistence.storageClass=local-path

Both my helm and k3s were installed via the provided "curl-pipe-to-bash" scripts in the quick-start instructions for each tool.

I also followed Rancher instructions for setting up Tiller after installing Helm:
https://rancher.com/docs/rancher/v2.x/en/installation/ha/helm-init/

Additionally, I symlinked the k3s kube config install location over to the default location at: ~/.kube/config just in case. Helm seemed to have trouble finding it early on in my testing.

Here is my Helm version:

helm version
Client: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}

I am running PopOS (System76's Ubuntu 18.04 spin) with kernel version 4.18.0-15-generic.

Recommend updating the docs to prescribe the use of the custom storageClass for persistent volumes, and maybe creating some flags in the curl-pipe-to-bash installation script that afford the user the option to install Helm and the local-path storageClass separately to keep the binary tiny.

An Ansible playbook/role that installs both k3s and some common ancillary tools might not be a bad idea if you did not want to include this as flags for the curl-pipe-to-bash installer. Just so people aren't fighting with it to get a hello world app running.

@jose-sanchezm

I also can't use PVCs with StorageOS:
I've installed the StorageOS control plane following the guide at https://docs.storageos.com/docs/platforms/kubernetes/install/1.13 and NOT using helm. It seems that the pods are correctly running:

kubectl get pods --all-namespaces
NAMESPACE            NAME                                        READY   STATUS      RESTARTS   AGE
default              d1                                          0/1     Pending     0          24m
default              ds4m-0                                      1/1     Running     0          29m
kube-system          coredns-7748f7f6df-lxpxx                    1/1     Running     0          29m
kube-system          helm-install-traefik-gkjr7                  0/1     Completed   0          29m
kube-system          svclb-traefik-55f697fb45-dhbb2              2/2     Running     0          29m
kube-system          svclb-traefik-55f697fb45-jpm5j              2/2     Running     1          29m
kube-system          traefik-6876857645-z4glc                    1/1     Running     0          29m
storageos-operator   storageoscluster-operator-64cbc948b-4867v   1/1     Running     0          27m
storageos            storageos-daemonset-bz4gd                   1/1     Running     0          26m
storageos            storageos-daemonset-d52fc                   1/1     Running     0          26m

Then I created a Secret and a Cluster. To test it, I followed https://docs.storageos.com/docs/platforms/kubernetes/firstvolume/ to deploy my first volume and pod, but the persistent volume never gets created:

# kubectl describe pod d1
Name:               d1
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               <none>
Labels:             <none>
Annotations:        <none>
Status:             Pending
IP:                 
Containers:
  debian:
    Image:      debian:9-slim
    Port:       <none>
    Host Port:  <none>
    Command:
      /bin/sleep
    Args:
      3600
    Environment:  <none>
    Mounts:
      /mnt from v1 (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-j5n7q (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  v1:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pvc-1
    ReadOnly:   false
  default-token-j5n7q:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-j5n7q
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  106s (x32 over 27m)  default-scheduler  pod has unbound immediate PersistentVolumeClaims (repeated 2 times)

Looking at the Persistent Volume Claim, there is an error there:

kubectl describe pvc pvc-1
Name:          pvc-1
Namespace:     default
StorageClass:  fast
Status:        Pending
Volume:        
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-class: fast
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
Events:
  Type       Reason              Age                  From                         Message
  ----       ------              ----                 ----                         -------
  Warning    ProvisioningFailed  2m2s (x19 over 28m)  persistentvolume-controller  no volume plugin matched

The StorageClass fast seems to be created correctly:

kubectl describe storageclass fast
Name:                  fast
IsDefaultClass:        No
Annotations:           <none>
Provisioner:           kubernetes.io/storageos
Parameters:            adminSecretName=storageos-api,adminSecretNamespace=default,fsType=ext4,pool=default
AllowVolumeExpansion:  <unset>
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:                <none>

The StorageOS cluster seems to be running correctly too:

# kubectl describe storageoscluster example-storageos
Name:         example-storageos
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  storageos.com/v1alpha1
Kind:         StorageOSCluster
Metadata:
  Creation Timestamp:  2019-03-05T17:44:24Z
  Generation:          32
  Resource Version:    1426
  Self Link:           /apis/storageos.com/v1alpha1/namespaces/default/storageosclusters/example-storageos
  UID:                 50ae68d2-3f6e-11e9-ab2e-22738ce9a901
Spec:
  Csi:
    Device Dir:                       
    Driver Registeration Mode:        
    Driver Requires Attachment:       
    Enable:                           false
    Enable Controller Publish Creds:  false
    Enable Node Publish Creds:        false
    Enable Provision Creds:           false
    Endpoint:                         
    Kubelet Dir:                      
    Kubelet Registration Path:        
    Plugin Dir:                       
    Registrar Socket Dir:             
    Registration Dir:                 
    Version:                          
  Debug:                              false
  Disable Telemetry:                  false
  Images:
    Csi Cluster Driver Registrar Container:  
    Csi External Attacher Container:         
    Csi External Provisioner Container:      
    Csi Liveness Probe Container:            
    Csi Node Driver Registrar Container:     
    Init Container:                          storageos/init:0.1
    Node Container:                          storageos/node:1.1.3
  Ingress:
    Annotations:  <nil>
    Enable:       false
    Hostname:     
    Tls:          false
  Join:           <edited_ip>,<edited_ip>
  Kv Backend:
    Address:            
    Backend:            
  Namespace:            storageos
  Node Selector Terms:  <nil>
  Pause:                false
  Resources:
  Secret Ref Name:       storageos-api
  Secret Ref Namespace:  default
  Service:
    Annotations:    <nil>
    External Port:  5705
    Internal Port:  5705
    Name:           storageos
    Type:           ClusterIP
  Shared Dir:       
  Tolerations:      <nil>
Status:
  Node Health Status:
    <edited_ip>:
      Directfs Initiator:  alive
      Director:            alive
      Kv:                  alive
      Kv Write:            alive
      Nats:                alive
      Presentation:        alive
      Rdb:                 alive
    <edited_ip>:
      Directfs Initiator:  alive
      Director:            alive
      Kv:                  alive
      Kv Write:            alive
      Nats:                alive
      Presentation:        alive
      Rdb:                 alive
  Nodes:
    <edited_ip>
    <edited_ip>
  Phase:  Running
  Ready:  2/2
Events:
  Type     Reason         Age   From                       Message
  ----     ------         ----  ----                       -------
  Warning  ChangedStatus  31m   storageoscluster-operator  0/2 StorageOS nodes are functional
  Warning  ChangedStatus  30m   storageoscluster-operator  1/2 StorageOS nodes are functional
  Normal   ChangedStatus  30m   storageoscluster-operator  2/2 StorageOS nodes are functional. Cluster healthy

I've edited my IPs but they are correctly set to my master and slave.
Is there something extra that needs to be done in order to enable Persistent Volumes in k3s? I tried the same example in a Kubernetes cluster installed through kubeadm and it works, but since that cluster contains many extra components, I don't know which one is missing here.

@flxs

flxs commented Mar 10, 2019

I can confirm PVCs work with the local-path provisioner as described above; following the instructions in the corresponding README works just fine. For very small deployments, this may be all you ever need (it is for me).
As long as this or another solution isn't included and enabled by default, could it be mentioned in the k3s README, at least in brief? I expect lots of people will hit this particular obstacle.

@erikwilson erikwilson added kind/documentation Improvements or additions to documentation help wanted labels Mar 25, 2019
@cristianojmiranda

Hey guys, I was able to create dynamic PVs/PVCs in my k3s cluster using external-storage (https://github.com/kubernetes-incubator/external-storage/tree/master/nfs). I ran the tests with helm install --name my-release stable/consul, but I hit an issue when I tried to restart my k3s cluster (e.g. with docker-compose down): my nodes got stuck in a terminating state, so I had to restart the docker service. Has anybody made any progress on this?

@optimuspaul

local-path-provisioner doesn't work for me because it isn't built for ARM

@saturnism
Contributor

I was able to get the local provisioner working. It'd be great if it were enabled by default.

  1. Create directory that's needed by local provisioner: sudo mkdir /opt/local-path-provisioner
  2. Install local provisioner: kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
  3. Set the local provisioner as default: kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
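
A rough way to verify these steps worked (names assume the upstream manifest's defaults) is to check that the class exists and is marked default, then re-apply the busypvc.yaml from the issue description and confirm the claim binds:

kubectl get storageclass
# local-path should be listed and marked "(default)"
kubectl apply -f busypvc.yaml
kubectl get pvc busyboxpv
# STATUS should move from Pending to Bound once the provisioner creates the volume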

@crsantini

Just adding my 2 cents. I was able to get my PVCs working using NFS (required for my specific case). I followed https://github.com/kubernetes-incubator/external-storage/tree/master/nfs and even after doing all the steps I was still getting errors when binding. The solution was to install nfs-common on my Ubuntu 18.04 nodes: sudo apt-get install nfs-common.

@zoobab
Author

zoobab commented May 28, 2019

@saturnism I can confirm it works for busypvc.yaml, but step 1 was not necessary for me...

@tamsky
Contributor

tamsky commented May 28, 2019

@optimuspaul writes

local-path-provisioner doesn't work for me because it isn't built for ARM


I have an updated branch of local-path-provisioner that builds binaries for multiple architectures and a multi-arch docker image in rancher/local-path-provisioner#24

I have a pv+pvc working on armv7l using their helm chart to deploy the StorageClass, with a custom image name:

-        image: rancher/local-path-provisioner:v0.0.9
+        image: tamsky/local-path-provisioner:562d008-dirty

at: https://github.com/rancher/local-path-provisioner/blob/master/deploy/local-path-storage.yaml#L63

Feedback solicited.

@erikwilson erikwilson added kind/enhancement An improvement to existing functionality and removed help wanted kind/documentation Improvements or additions to documentation labels Jun 18, 2019
@davidnuzik davidnuzik added this to the v1.0 - Backlog milestone Jul 15, 2019
@fuero

fuero commented Jul 17, 2019

Any recommendations on what to use if "ReadWriteMany" is required? This doesn't seem to support that access mode.

@davidnuzik davidnuzik modified the milestones: v1.0 - Backlog, v0.9.0 Aug 1, 2019
@jbhannah

I got Rook working rather well on k3s, but it was too RAM-hungry for my RPi3B cluster 😬

@jose-sanchezm

@jbhannah How did you manage to get it working? Did you use the standard installation method?

@jbhannah

@jose-sanchezm Yeah, it was largely a standard install via helm template. I did have to wait until the OSDs were created before creating the blockpool, otherwise the blockpool wouldn't initialize, but that may have been more a matter of horsepower than anything to do with k3s.

@zoobab
Author

zoobab commented Oct 14, 2019

Still the same in release v0.9.1, let's hope the next release will fix it...

@billimek

FWIW, I am able to successfully run rook-ceph in a k3s-provisioned cluster (0.9.1). The key may be setting the proper CSI path and installing rook using CSI instead of flexvolume (CSI is now the default in rook v0.9.x+):

csi.kubeletDirPath: /var/lib/rancher/k3s/agent/kubelet

See https://github.com/billimek/k8s-gitops/blob/master/rook-ceph/chart/rook-ceph-chart.yaml#L15-L17 for context.
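
For reference, the relevant part of that chart configuration is roughly the following (the key name is taken from the line above; the nested layout is an assumption about the chart's values structure):

csi:
  kubeletDirPath: /var/lib/rancher/k3s/agent/kubelet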

@davidnuzik
Contributor

Related: #840

@ShylajaDevadiga
Contributor

With k3s v0.10.0-rc1, we have PVCs working using local-path:

# k get pods -A|grep busybox
default       busyboxpv-69f6d7dd5d-wlw28                1/1     Running     0          50m
# k get storageclass
NAME                   PROVISIONER             AGE
local-path (default)   rancher.io/local-path   83m
# k get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
pvc-e62f84c4-75f4-45b4-8472-3044acc66e30   10Mi       RWO            Delete           Bound    default/busyboxpv   local-path              50m
# k get pvc
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
busyboxpv   Bound    pvc-e62f84c4-75f4-45b4-8472-3044acc66e30   10Mi       RWO            local-path     51m

@zube zube bot closed this as completed Oct 17, 2019
@zube zube bot removed the [zube]: To Test label Oct 17, 2019
K3S Development automation moved this from Backlog to Done Oct 17, 2019
@zoobab
Author

zoobab commented Oct 30, 2019

Fixed with 1.10, tested it with k3d v1.3.4!

$ kubectl get pvc
NAME        STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
busyboxpv   Bound     pvc-ca47be7d-78ef-4693-b568-6022dda3cf4f   10Mi       RWO            local-path     2m50s

The PVC gets created under a path like:

/var/lib/rancher/k3s/storage/pvc-ca47be7d-78ef-4693-b568-6022dda3cf4f 

If you create a file in /mnt inside the container, you will see it appear under that path on the host.
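
For example (substituting your actual pod name; the PVC directory name will differ per claim):

# create a file through the pod's mount
kubectl exec <busyboxpv-pod-name> -- touch /mnt/hello
# the same file is visible on the node under the PVC's storage directory
ls /var/lib/rancher/k3s/storage/pvc-ca47be7d-78ef-4693-b568-6022dda3cf4f/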

@nags28

nags28 commented Dec 27, 2021

Does K3s support ReadWriteMany? If so, how do I enable it?
Is there any way to do it?

@brandond
Contributor

It does if the volume driver does... there's nothing special about K3s's storage support.
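
For example, with an RWX-capable provisioner installed (such as an NFS-based one), a claim only needs to request the mode; the storageClassName below is just a placeholder for whatever class that provisioner registers:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: shared-data
spec:
  storageClassName: nfs-client   # placeholder; use the class your RWX-capable provisioner provides
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi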

@k3s-io k3s-io locked as resolved and limited conversation to collaborators Dec 27, 2021