Document ceph-csi drivers implementation #2234
Force-pushed from bab15b1 to 728cac8
Just some minor comments. Can't wait until this is fully integrated with Rook. Lots of steps for now! @rootfs what if we mark this doc as experimental until the full integration is completed?
Documentation/ceph-csi-drivers.md
Outdated
1. A Kubernetes v1.12+ cluster with at least one node
2. `--allow-privileged` flag set to true in kubelet and your API server
3. An up and running Rook instance (see [Rook - Ceph quickstart guide](https://github.com/rook/rook/blob/master/Documentation/ceph-quickstart.md))
4. Make sure the required feature gates as stated in the [kubernetes-csi setup guide](https://kubernetes-csi.github.io/docs/Setup.html) are enabled in kubelet.
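On a fresh minikube cluster, the prerequisites above can be combined into a single start command. This is a sketch, not part of the doc itself: the gate names come from the kubernetes-csi setup guide, and the `--extra-config` keys assume minikube's kubeadm bootstrapper.

```shell
# Hypothetical minikube invocation: start a v1.12 cluster with the CSI
# feature gates enabled and privileged containers allowed.
# KubeletPluginsWatcher is already on by default in 1.12.
minikube start --kubernetes-version=v1.12.0 \
  --feature-gates="CSINodeInfo=true,CSIDriverRegistry=true,VolumeSnapshotDataSource=true" \
  --extra-config=apiserver.allow-privileged=true \
  --extra-config=kubelet.allow-privileged=true
```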
do we only need these feature gates if it was pre k8s 1.12?
Depends on the feature gate. `KubeletPluginsWatcher` is enabled by default in 1.12, but `CSINodeInfo`, `CSIDriverRegistry`, and `VolumeSnapshotDataSource` aren't. Should I be more precise on this step?
Let's align with 1.12 requirement
How strict is the v1.12 requirement? Could I theoretically get this working on v1.10 if I find the right feature gates?
Force-pushed from 728cac8 to 11b5f53
It would be great to have someone independently run these steps to verify, since there is no test automation until after the full integration is done. I'll try to do it by tomorrow.
@mickymiek @rootfs When I ran these steps to start the CSI driver, I was not able to get it working. As soon as I deploy either csi-cephfsplugin.yaml or csi-rbdplugin.yaml, the docker daemon dies and the cluster completely stops working. Did I miss enabling some feature gate? Since I'm running k8s
if I connect to the minikube, I can't use the docker daemon anymore. Even
@mickymiek I see the related minikube issue. Looks like there is a workaround in one of the issues you referenced.
Thanks @mickymiek for putting together all the instructions! I left some comments; let me know when you are ready.
Force-pushed from 18e8fa6 to 31a4572
Sorry for the delay! I resolved most of your comments, but I still have a question concerning snapshots. For the RBAC details I am not sure; I took them as-is from the external-snapshotter repo and they worked fine for me. I'll run tests to confirm!
Force-pushed from bb22811 to ee0d135
@rootfs I modified the doc to match the new csi-snapshotter yamls in the ceph-csi repo.
kubectl create -f https://raw.githubusercontent.com/ceph/ceph-csi/master/deploy/rbd/kubernetes/csi-rbdplugin-provisioner.yaml
```
### Deploy the CSI driver
This deploys a DaemonSet with two containers: the CSI driver-registrar and the driver itself.
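For reference, the driver DaemonSet can be applied and checked the same way as the provisioner above. This is a sketch: the manifest URL and the pod label are assumptions based on the ceph-csi repo layout, not taken verbatim from this doc.

```shell
# Assumed manifest path in the ceph-csi repo:
kubectl create -f https://raw.githubusercontent.com/ceph/ceph-csi/master/deploy/rbd/kubernetes/csi-rbdplugin.yaml
# The driver pod should appear on every node; label name assumed:
kubectl get pods -l app=csi-rbdplugin -o wide
```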
The CSI driver pods are failing to start for both CephFS and RBD. Am I missing something in the setup before this? I'm running in minikube and I believe the feature gates are enabled.
I see there are a number of host paths in the yaml files. Perhaps I need to modify those for my environment? I haven't looked closely at which host paths, but hopefully we can cut back on them. Do we really need them all? This is really a question for ceph-csi rather than this doc.
Here is the error in the pod description for the CephFS driver:
Warning Failed 8m8s kubelet, minikube Error: failed to start container "csi-cephfsplugin": Error response from daemon: OCI runtime create failed: open /var/run/docker/runtime-runc/moby/csi-cephfsplugin/state.json: no such file or directory: unknown
Here is the RBD driver pod error:
Warning FailedCreatePodSandBox 3m4s (x26 over 8m26s) kubelet, minikube Failed create pod sandbox: open /etc/resolv.conf: no such file or directory
The CSI driver pods are failing to start for both CephFS and RBD. Am I missing something in the setup before this? I'm running in minikube and I believe the feature gates are enabled.
I'm not sure it is related but could you try running in kubeadm to confirm if minikube is the source of your errors or not? If it is we should do something about it
I see there are a number of host paths in the yaml files. Perhaps I need to modify those for my environment?
You shouldn't have to modify it; as you can see here and here, /var/lib/kubelet/plugins/* are created if not present. I don't think this is related to your issue. As for the hostPaths not related to the kubelet plugin dirs, @rootfs surely knows better than me.
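One quick way to check this claim is to look inside the minikube VM directly. A sketch, assuming the default kubelet root directory:

```shell
# Confirm the registrar created the plugin socket directories
# (path assumed from the default kubelet root, /var/lib/kubelet):
minikube ssh -- ls -la /var/lib/kubelet/plugins/
```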
@travisn might be a docker issue, can you restart docker? see moby/moby#30984
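The workaround discussed in moby/moby#30984 amounts to restarting the docker daemon inside the VM. A hedged sketch, assuming the minikube VM manages docker with systemd:

```shell
# Restart docker inside the minikube VM (assumes systemd-managed docker):
minikube ssh -- sudo systemctl restart docker
# Watch the CSI pods recover after the restart:
kubectl get pods -w
```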
Force-pushed from 4861272 to 8bdd2fa
Thanks @mickymiek
After the weight is updated, I would just suggest that the commits be squashed and we can go ahead and merge it. I haven't had time to investigate why it was failing for me, but since it's working for multiple other contributors, let's go ahead and merge so we can get broader feedback.
Documentation/ceph-csi-drivers.md
Outdated
@@ -0,0 +1,393 @@
---
title: Ceph CSI
weight: 28
Other topics have been moved around. Could you change this to weight 32? That will put it after the Ceph CRD help topics and before the upgrade guide. Sound ok?
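The requested change is a one-line edit to the doc's front matter:

```yaml
---
title: Ceph CSI
weight: 32
---
```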
Something we did in the Crossplane project was to make the weights much bigger values, like in the hundreds, so there was more room to move things around without having conflicts. Take for example this doc with a weight of 350.
It might be nearly time to do that in Rook too :)
Yeah that's great if you want to increase the numbers. What I'm really hoping for someday is drag and drop to order the topics. Can you help with that? ;)
Signed-off-by: mickymiek <meunie_m@etna-alternance.net>
Force-pushed from 8bdd2fa to 5b8edd2
Description of your changes:
Documented how to implement ceph-csi drivers in a k8s 1.12+ cluster with Rook.
Which issue is resolved by this Pull Request:
Resolves #2233
Checklist:
- [ ] Code generation (`make codegen`) has been run to update object specifications, if necessary.
[skip ci]