PVC not working with K3S #85
Has a StorageClass been set up?
You must add a storage class before creating a PVC. Does k3s support storage classes like Ceph or Heketi? I don't know; it needs to be tested... I will do it.
k3s doesn't come with a default storage class. We are looking at including https://github.com/rancher/local-path-provisioner by default, which just uses local disk, so that PVCs will at least work out of the box. You can try that storage class or install another third-party one. More info here
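For reference, a minimal sketch of exercising the local-path provisioner once its deploy manifest is applied (the class name `local-path` comes from the provisioner's README; the claim name here is hypothetical):

```yaml
# Hypothetical PVC against the "local-path" StorageClass installed by
# rancher/local-path-provisioner.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim        # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce       # local-path volumes are node-local
  storageClassName: local-path
  resources:
    requests:
      storage: 1Gi
```

If the provisioner is healthy, the claim should move from Pending to Bound once a consuming pod is scheduled.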
@ibuildthecloud yeah, I tried that local-storage class this morning; could not get it to work...
@ibuildthecloud, I was able to confirm that it works using the local-path storageClass. After following the installation instructions in the README.md, I tested it by passing it this and it worked; I have a running SonarQube instance up now:
I also followed Rancher's instructions for setting up Tiller after installing Helm. Additionally, I symlinked the k3s kubeconfig from its install location to the default location at ~/.kube/config, just in case; Helm seemed to have trouble finding it early on in my testing. Here is my Helm version:
I am running Pop!_OS (System76's Ubuntu 18.04 spin) with kernel version 4.18.0-15-generic. I recommend updating the docs to prescribe the use of the custom storageClass for persistent volumes, and perhaps adding flags to the curl-pipe-to-bash installation script that let the user install Helm and the local-path storageClass separately, keeping the binary tiny. An Ansible playbook/role that performs both installations would also help.
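One standard way to make such a class the cluster default, so that PVCs omitting storageClassName still bind, is the default-class annotation; a sketch of the local-path class with it set (field values as documented for local-path-provisioner):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
  annotations:
    # Marks this class as the cluster default; PVCs that omit
    # storageClassName will use it.
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
```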
I also can't use PVCs with StorageOS:
Then I've created a Secret and a Cluster. Now to test it, I went to https://docs.storageos.com/docs/platforms/kubernetes/firstvolume/ to deploy my first volume and pod but it can't get the persistent volume created:
Looking at the Persistent Volume Claim, there is an error there:
The StorageClass fast seems to be created correctly:
The StorageOS cluster seems to be running correctly too:
I've redacted my IPs, but they are correctly set to my master and slave.
I can confirm PVCs work with the local-path provisioner as described above; following the instructions in the corresponding README works just fine. For very small deployments, this may be all you ever need (it is for me).
Hey guys, I was able to create dynamic PVs/PVCs in my k3s cluster using external-storage (https://github.com/kubernetes-incubator/external-storage/tree/master/nfs). I ran the tests with
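For context, that NFS provisioner is exercised with a PVC against whatever class it was registered under; a sketch, where both the claim name and the class name are hypothetical and must match how external-storage/nfs was actually deployed:

```yaml
# Hypothetical PVC for a dynamic NFS provisioner.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test-claim          # hypothetical name
spec:
  accessModes:
    - ReadWriteMany             # NFS supports shared read-write access
  storageClassName: example-nfs # hypothetical class name
  resources:
    requests:
      storage: 1Mi
```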
local-path-provisioner doesn't work for me because it isn't built for ARM |
I was able to get the local provisioner working. It'd be great if it were enabled by default.
Just adding my 2 cents: I was able to get my PVCs working using NFS (required for my specific case). I followed https://github.com/kubernetes-incubator/external-storage/tree/master/nfs, and even after completing all the steps I was still getting errors when binding. The solution was to install nfs-common on my Ubuntu 18.04 nodes: sudo apt-get install nfs-common.
@saturnism I can confirm it works for busypvc.yaml, but step 1 was not necessary for me...
@optimuspaul writes
I have an updated branch of local-path-provisioner. I have a one-line image change at https://github.com/rancher/local-path-provisioner/blob/master/deploy/local-path-storage.yaml#L63:
- image: rancher/local-path-provisioner:v0.0.9
+ image: tamsky/local-path-provisioner:562d008-dirty
Feedback solicited.
Any recommendations on what to use if "ReadWriteMany" is required? This doesn't seem to support that access mode.
I got Rook working rather well on k3s, but it was too RAM-hungry for my RPi3B cluster 😬
@jbhannah How did you manage to get it working? Did you use the standard installation method? |
@jose-sanchezm Yeah, it was largely a standard install via
Still the same in release v0.9.1, let's hope the next release will fix it... |
FWIW, I am able to successfully run rook-ceph in a k3s-provisioned cluster (0.9.1). It may be related to setting the proper CSI path and installing rook using CSI instead of flexvolume (CSI is now the default in rook v0.9.x +):
See https://github.com/billimek/k8s-gitops/blob/master/rook-ceph/chart/rook-ceph-chart.yaml#L15-L17 for context.
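For context, the idea is to point the CSI driver at the kubelet directory k3s actually uses. A sketch of such Helm values, assuming the rook-ceph chart exposes a `csi.kubeletDirPath` value and that early k3s releases kept the kubelet root under /var/lib/rancher/k3s/agent/kubelet (both assumptions; check the linked file and your k3s version for the real settings):

```yaml
# Hypothetical Helm values for the rook-ceph chart on k3s.
csi:
  enableRbdDriver: true
  enableCephfsDriver: true
  # k3s may not use the stock /var/lib/kubelet path (assumed location):
  kubeletDirPath: /var/lib/rancher/k3s/agent/kubelet
```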
Related: #840
With k3s v0.10.0-rc1, we have PVCs working using local-path.
Fixed with 1.10, tested it with k3d v1.3.4!
The PVC gets created under a path like:
If you create a file in /mnt, you will see it there.
Does k3s support ReadWriteMany? If so, how do I enable it?
It does if the volume driver does... there's nothing special about K3s's storage support. |
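In other words, RWX comes from the volume driver, not from k3s itself; requesting it in a PVC is ordinary, e.g. against an RWX-capable (say, NFS-backed) class. A sketch with hypothetical names:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-claim            # hypothetical name
spec:
  accessModes:
    - ReadWriteMany             # honored only if the backing driver supports it
  storageClassName: nfs-client  # hypothetical RWX-capable class
  resources:
    requests:
      storage: 1Gi
```

With a driver that only supports ReadWriteOnce (like local-path), such a claim stays Pending.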
Describe the bug
Pods using PVCs are not starting.
To Reproduce
Do a kubectl apply -f busypvc.yaml, where busypvc.yaml is:
With another cluster, it takes 10s to have a running busybox container, but here it is in a pending state forever:
A describe on the PVC gives:
Expected behavior
That the busybox pod is running with the defined pvc mounted.
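The original busypvc.yaml content was not captured in this thread; a typical minimal reproduction of this shape (all names hypothetical, and mounting at /mnt as mentioned above) would be:

```yaml
# Hypothetical reconstruction: a PVC plus a busybox pod that mounts it.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: busypvc             # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
    - name: busybox
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /mnt
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: busypvc
```

Without a default StorageClass, the PVC (and therefore the pod) stays Pending, which matches the reported behavior.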