Enable the ceph-csi driver by default #3562
Conversation
Force-pushed from 6e0bc33 to 1f3b04d.
In order for the osd service account to reference the cephcluster CR in the owner references, OpenShift requires access to the CephCluster CRD. This adds the minimal changes needed to appease OpenShift. Signed-off-by: travisn <tnielsen@redhat.com>
Force-pushed from 1f3b04d to afc338f.
Force-pushed from afc338f to c1d0909.
Force-pushed from efd34b8 to 6823847.
> cluster.
> There are two CSI drivers integrated with Rook that will enable different scenarios:
> - RBD: This driver is optimized for RWO pod access where only one pod may access the storage
> - CephFS: This driver allows for RWX with one or more pods accessing the same storage
do we need to mention that users can also use RWX with rbd and RWO with cephfs?
Good question on the messaging. If you use RWX with rbd you're at risk of data corruption unless the application layer manages the locks. Are there CSI docs we could link to that show all the use cases?
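The access-mode distinction discussed above can be illustrated with a minimal PVC sketch. The storage class names `rook-ceph-block` and `csi-cephfs` are assumptions for illustration; substitute whatever storage classes your cluster defines:

```yaml
# Sketch: RBD-backed PVC for single-writer (RWO) workloads.
# "rook-ceph-block" is an assumed storage class name.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce        # RBD: one pod at a time; RWX on RBD risks data corruption
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-ceph-block
---
# Sketch: CephFS-backed PVC for shared (RWX) access across pods.
# "csi-cephfs" is an assumed storage class name.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc
spec:
  accessModes:
    - ReadWriteMany        # CephFS: multiple pods may mount the same volume
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-cephfs
```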
> @@ -303,7 +72,7 @@ NAME AGE
> rbd-pvc-snapshot 6s
> ```
> -In one of your Ceph pod, run `rbd snap ls [name-of-your-pvc]`.
> +In the toolbox pod, run `rbd snap ls [name-of-your-pvc]`.
just to mention, we are not creating the rbd image with the same name as the PVC, so we need to fix this. This can be done as a separate PR.
Documentation/ceph-filesystem.md
Outdated
>   name: cephfs-pvc
> spec:
>   accessModes:
>     - ReadWriteOnce
Question, as this is ReadWriteOnce, would it be better to use RBD for this use case?

in kube-registry.yaml it looks like it is ReadWriteMany

Good catch, it should be ReadWriteMany. I had changed it already in the example yaml, but missed updating this doc.
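The fix agreed on above amounts to a one-line change in the quoted claim. A sketch of the corrected PVC (the storage class name `csi-cephfs` is an assumption for illustration):

```yaml
# Corrected claim for the kube-registry example: ReadWriteMany, since the
# registry runs multiple pods sharing one CephFS volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-cephfs   # assumed name; use your CephFS storage class
```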
cmd/rook/ceph/operator.go
Outdated
> @@ -43,6 +43,9 @@ func init() {
> 	operatorCmd.Flags().DurationVar(&mon.HealthCheckInterval, "mon-healthcheck-interval", mon.HealthCheckInterval, "mon health check interval (duration)")
> 	operatorCmd.Flags().DurationVar(&mon.MonOutTimeout, "mon-out-timeout", mon.MonOutTimeout, "mon out timeout (duration)")
> 	operatorCmd.Flags().BoolVar(&operator.EnableFlexDriver, "enable-flex-driver", true, "enable the rook flex driver")
> 	operatorCmd.Flags().BoolVar(&operator.EnableDiscoveryDaemon, "enable-discovery-daemon", true, "enable the rook discovery daemon")
> 	operatorCmd.Flags().BoolVar(&csi.EnableRBD, "csi-enable-rbd", false, "enable ceph-csi rbd support")
can we change the default value to true for both cephfs and rbd flags?

Right, I'll change these to true, good catch.
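With the defaults flipped to true, operators that want to opt out would do so in the operator deployment. A sketch of the relevant env entries; the variable names are assumptions to verify against the operator.yaml of the release in use:

```yaml
# Env entries in the rook-ceph-operator Deployment (sketch; verify names
# against operator.yaml for your Rook version).
- name: ROOK_CSI_ENABLE_RBD
  value: "true"
- name: ROOK_CSI_ENABLE_CEPHFS
  value: "true"
```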
> 		Name: "rook-ceph-csi",
> 		Namespace: namespace,
> 	},
> 	Data: csiSecrets,
can we use stringData instead of Data? we can skip the []byte conversion above

@netzzer has reported that to work on OpenShift she has seen that Data is required, while StringData fails. I don't recall the details though.

for confirmation:
@nehaberry are you using stringData or Data in the secret in OCP?

@Madhu-1 yes we are using "stringData" in the yaml. Once the resource is created, it automatically shows up as Data with the base64 encoded value.
Tested on both OCP 4.1 and OCP 4.2
$ oc get clusterversion
NAME VERSION AVAILABLE PROGRESSING SINCE STATUS
version 4.2.0-0.ci-2019-08-04-210315 True False 25h Error while reconciling 4.2.0-0.ci-2019-08-04-210315: the cluster operator monitoring is degraded
e.g.

1. Noted the key for admin:

       sh-4.2# ceph auth get-key client.admin
       AQD1wUdd8KNCOBAA+R58MhsIODdSKL8WakpXCA==

2. Created the secret yaml:

       $ cat /tmp/Secretti_vr_p_
       apiVersion: v1
       kind: Secret
       metadata:
         name: secret-test-rbd-0512275256
         namespace: openshift-storage
       stringData:
         userID: admin
         userKey: AQD1wUdd8KNCOBAA+R58MhsIODdSKL8WakpXCA==

3. Created the secret:

       oc -n openshift-storage --kubeconfig /home/nberry/aws-install/aug5-1/auth/kubeconfig create -f /tmp/Secretti_vr_p_ -o yaml

4. Verified the data in the created secret:

       $ oc get secret secret-test-rbd-0512275256 -n openshift-storage -o yaml
       apiVersion: v1
       data:
         userID: YWRtaW4=
         userKey: QVFEMXdVZGQ4S05DT0JBQStSNThNaHNJT0RkU0tMOFdha3BYQ0E9PQ==
       kind: Secret
       metadata:
         creationTimestamp: "2019-08-05T06:57:53Z"
         name: secret-test-rbd-0512275256
         namespace: openshift-storage
         resourceVersion: "41528"
         selfLink: /api/v1/namespaces/openshift-storage/secrets/secret-test-rbd-0512275256
         uid: 58ba319a-b74e-11e9-8637-06e8dff050ca
       type: Opaque

5. Able to create the pvc; the storage class used:

       $ oc get sc storageclass-test-rbd-0512280619 -o yaml
       apiVersion: storage.k8s.io/v1
       kind: StorageClass
       metadata:
         creationTimestamp: "2019-08-05T06:58:07Z"
         name: storageclass-test-rbd-0512280619
         resourceVersion: "41607"
         selfLink: /apis/storage.k8s.io/v1/storageclasses/storageclass-test-rbd-0512280619
         uid: 60f0ac13-b74e-11e9-b036-0284beed546e
       parameters:
         clusterID: openshift-storage
         csi.storage.k8s.io/node-stage-secret-name: secret-test-rbd-0512275256
         csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
         csi.storage.k8s.io/provisioner-secret-name: secret-test-rbd-0512275256
         csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
         imageFeatures: layering
         imageFormat: "2"
         pool: cbp-test-0512275940
       provisioner: rbd.csi.ceph.com
       reclaimPolicy: Delete
       volumeBindingMode: Immediate
When using client-go in rook, the api will do the base64 encoding automatically, so it should be the same either way for rook to use data or stringData. I'll stick with data since that's what I tested and is working.
pkg/operator/ceph/operator.go
Outdated
> if EnableDiscoveryDaemon {
> 	rookDiscover := discover.New(o.context.Clientset)
> 	if err := rookDiscover.Start(namespace, o.rookImage, o.securityAccount); err != nil {
> 		return fmt.Errorf("Error starting device discovery daemonset: %v", err)
need to change Error to error (Go error strings should not be capitalized)
> # Optional, default reclaimPolicy is "Delete". Other options are: "Retain", "Recycle" as documented in https://kubernetes.io/docs/concepts/storage/storage-classes/
> reclaimPolicy: Retain
> # clusterID is the namespace where the rook cluster is running
> clusterID: rook-ceph
Even though this can be any string, it should be a unique identifier of the cluster. At least I used to put the fs id in this field. Not sure, `rook-ceph` can cause confusion in that people think it's the namespace name :)

The storage class for the rook flex driver has used the namespace as the unique identifier for the correct rook instance to be found. It is the simplest concept, so why not use it? With one csi driver per operator (#3373), the driver will also correspond to the operator in that namespace and the clusterID should correspond to the name of the driver, correct?

The overall question is: what does the user need to set this to? I'm hoping they aren't required to go look up the fsid.
> Since this feature is still in [alpha stage](https://kubernetes.io/blog/2018/10/09/introducing-volume-snapshot-alpha-for-kubernetes/) (k8s 1.12+), make sure to enable the `VolumeSnapshotDataSource` feature gate in your Kubernetes cluster.
>
> -#### create RBD snapshot-class
> +### SnapshotClass
Do we need to mention that SnapshotClass support is in alpha state, since snapshots are alpha? @Madhu-1 what do you think?

Sounds good, as a separate PR.
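For context on the alpha snapshot API being discussed, a minimal SnapshotClass sketch. The class name is hypothetical, and the `clusterID`/secret names are assumptions matching the rook-ceph defaults elsewhere in this PR:

```yaml
# Alpha snapshot API (snapshot.storage.k8s.io/v1alpha1); requires the
# VolumeSnapshotDataSource feature gate. Names below are illustrative.
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshotClass
metadata:
  name: csi-rbdplugin-snapclass
snapshotter: rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph
  csi.storage.k8s.io/snapshotter-secret-name: rook-ceph-csi
  csi.storage.k8s.io/snapshotter-secret-namespace: rook-ceph
```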
> @@ -13,12 +13,12 @@ indent: true
>
> # Ceph Examples
>
> Configuration for Rook and Ceph can be configured in multiple ways to provide block devices, shared file system volumes or object storage in a kubernetes namespace. We have provided several examples to simplify storage setup, but remember there are many tunables and you will need to decide what settings work for your use case and environment.
The object storage is not coming through the CSI driver. Do we need to mention it here?

I don't think this page needs to talk about the CSI driver, it's just giving details about the example manifests.
Documentation/ceph-examples.md
Outdated
> @@ -56,9 +52,13 @@ Now we are ready to setup [block](https://ceph.com/ceph-storage/block-storage/),
>
> Ceph can provide raw block device volumes to pods. Each example below sets up a storage class which can then be used to provision a block device in kubernetes pods. The storage class is defined with [a pool](http://docs.ceph.com/docs/nautilus/rados/operations/pools/) which defines the level of data redundancy in ceph:
>
> - `storageclass.yaml`: This example illustrates replication of 3 for production scenarios and requires at least three nodes. Your data is replicated on three different kubernetes worker nodes and intermittent or long-lasting single node failures will not result in data unavailability or loss.
> - `storageclass-ec.yaml`: Configures erasure coding for data durability rather than replication. [Ceph's erasure coding](http://docs.ceph.com/docs/nautilus/rados/operations/erasure-code/) is more efficient than replication so you can get high reliability without the 3x replication cost of the preceding example (but at the cost of higher computational encoding and decoding costs on the worker nodes). Erasure coding requires at least three nodes. See the [Erasure coding](ceph-pool-crd.md#erasure-coded) documentation for more details.
I really doubt anyone has tested an EC storage class with the CSI driver. The option is available, but maybe it's good to have a warning that it's an experimental feature? :)

EC still only works with the flex driver. I'll add a comment about this.
Documentation/ceph-filesystem.md
Outdated
> reclaimPolicy: Delete
> mountOptions:
>   - noexec
@travisn I put that option in my patch and forgot to change here. Let's mention debug as the mount option here.

Is this what you're saying?

    mountOptions:
      - noexec
      # Uncomment the following line for debugging
      #- debug

instead of noexec put debug

Ok, I'll add debug and remove noexec. What is the behavior of the debug option? More logging?

    mountOptions:
      - debug
> 		Type: k8sutil.RookType,
> 	}
> 	k8sutil.SetOwnerRef(&csiSecret.ObjectMeta, ownerRef)
> 	if _, err = clientset.CoreV1().Secrets(namespace).Create(csiSecret); err != nil {
@travisn it's good to validate the err for IsAlreadyExists.

Yes, nice catch
pkg/operator/ceph/operator.go
Outdated
> @@ -56,6 +56,12 @@ var provisionerConfigs = map[string]string{
> 	provisionerNameLegacy: flexvolume.FlexvolumeVendorLegacy,
> }
>
> // Whether to enable the flex driver
> var (
> 	EnableFlexDriver = true
@travisn can we also mention what is meant by these vars?

@travisn most comments here are queries, as I do not have the expertise on where else changes need to be made in the Rook code etc. So please treat them as such.

If I understand the documentation changes' intent, the idea is that everything is set up by Rook (CSI pods, secrets, configuration etc.) and the user just deals with how to create a storage class to consume the same, would this be correct? As it skips a lot of the details about CSI itself (which is good).
> -DefaultAttacherImage = "quay.io/k8scsi/csi-attacher:v1.1.0"
> -DefaultSnapshotterImage = "quay.io/k8scsi/csi-snapshotter:v1.1.1"
> +DefaultAttacherImage = "quay.io/k8scsi/csi-attacher:v1.1.1"
> +DefaultSnapshotterImage = "quay.io/k8scsi/csi-snapshotter:v1.1.0"
(suggestion) Ceph-CSI master (soon to be the 2.0.0 version) uses csi-snapshotter version 1.1.0 for kube v1.13 and 1.2.0 for kube v1.14. Should we consider pushing it up to the same versions? Tagging @Madhu-1 for comments on the same. (similarly, the provisioner is at 1.3.0 at present)

Also, we need a mechanism to keep Rook up to date with changes to the CSI-related pod specs in the future (jotting it down here for now)

A couple thoughts:
- We should start the appropriate version automatically by default. If there is a different version depending on the K8s version, then we can detect that and make the right decision.
- The admin can specify the desired images in operator.yaml. If they specify the images in operator.yaml, they should override our defaults.

Would you open a separate issue on this?

csi-snapshotter 1.1.0 and 1.2.0 have breaking changes. v1.2.0 requires a minimum Kube version of 1.14+. I think we can pick the image version based on the Kube version deployed; this can be done as a separate PR. @travisn some image version updates require rbac changes also.

@Madhu-1 You're saying the newer snapshotter image requires rbac changes? Sounds fine to just add them to common.yaml with the other rbac.
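The admin override mentioned in the thread above would look roughly like the following in the operator deployment. The env variable names are assumptions to verify against operator.yaml for the release in use; the image tags shown are the ones quoted in this diff:

```yaml
# Sketch: overriding CSI sidecar images via operator env (names assumed).
# If unset, the defaults baked into Rook are used.
- name: ROOK_CSI_ATTACHER_IMAGE
  value: "quay.io/k8scsi/csi-attacher:v1.1.1"
- name: ROOK_CSI_SNAPSHOTTER_IMAGE
  value: "quay.io/k8scsi/csi-snapshotter:v1.1.0"
- name: ROOK_CSI_PROVISIONER_IMAGE
  value: "quay.io/k8scsi/csi-provisioner:v1.2.0"
```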
> @@ -58,16 +58,14 @@ const (
> 	DefaultCSIPluginImage = "quay.io/cephcsi/cephcsi:canary"
> 	DefaultRegistrarImage = "quay.io/k8scsi/csi-node-driver-registrar:v1.1.0"
> 	DefaultProvisionerImage = "quay.io/k8scsi/csi-provisioner:v1.2.0"
> -	DefaultAttacherImage = "quay.io/k8scsi/csi-attacher:v1.1.0"
> -	DefaultSnapshotterImage = "quay.io/k8scsi/csi-snapshotter:v1.1.1"
> +	DefaultAttacherImage = "quay.io/k8scsi/csi-attacher:v1.1.1"
(suggestion) Ceph-CSI master (soon to be the 2.0.0 version) uses csi-attacher version 1.2.0 (both for kube v1.13 and 1.14+). Change it to match the same?

How about if we update that in a separate PR? These image tags were wrong even to get it working for now; this is just the minimal fix.

I will send a PR to enable leader election as per ceph-csi PR ceph/ceph-csi#497; before that I need to make sure things are working fine in OCP with leader election.
> csiSecret := &v1.Secret{
> 	ObjectMeta: metav1.ObjectMeta{
> 		Name: "rook-ceph-csi",
> 		Namespace: namespace,
(query) The namespace here is the cluster namespace and not the operator namespace, right?

Asking as one of the intentions of multi-cluster support by the same operator was to retain the CSI pods and config map in the operator namespace, but secrets need not be in the same namespace (operator ns) and will (obviously, as the name is "rook-ceph-csi") conflict if done so. (traced the code and see that this uses the cluster-based namespace, but asking the query anyway)

#3373 will fix the issue of the operator keeping track of multi-cluster support in the same namespace as the operator. That will use the configmap rook-ceph-csi-config, while this new secret is called rook-ceph-csi, so they are different resources. This secret is created so a storage class can reference the secret for a cluster. Really this is to automate #3387

LGTM
/lgtm. Thanks @travisn
Documentation/ceph-block.md
Outdated
> -This is because the `replicated.size: 3` will require at least 3 OSDs and as [`failureDomain` setting](ceph-pool-crd.md#spec) to `host` (default), each OSD needs to be on a different nodes.
> +This is because the `replicated.size: 3` will require at least 3 OSDs and as [`failureDomain` setting](ceph-pool-crd.md#spec) to `host` (default), each OSD needs to be on a different node.
>
> **NOTE** This example uses the CSI driver, which is the preferred driver going forward. Examples are found in the [CSI RBD](https://github.com/rook/rook/tree/{{ branchName }}/cluster/examples/kubernetes/ceph/csi/rbd) directory. For an example of a storage class using the flex driver, see the [Flex Driver](#flex-driver) section below, which has examples in the [flex](https://github.com/rook/rook/tree/{{ branchName }}/cluster/examples/kubernetes/ceph/flex) directory.
How about adding an additional note that only Kubernetes >=1.13 supports CSI out of the box (GA status)?

@galexrt Comments added about 1.13. Now it reads:

**NOTE** This example uses the CSI driver, which is the preferred driver going forward for K8s 1.13 and newer. Examples are found in the CSI RBD directory. For an example of a storage class using the flex driver (required for K8s 1.12 or earlier), see the Flex Driver section below, which has examples in the flex directory.
CSI is now the preferred storage driver for Rook. By default both the CSI and flex drivers will be started by the operator. In a future release the flex driver will be deprecated, but for now is still supported. The drivers can be disabled with an environment variable in the operator deployment in operator.yaml. If users are only using one driver or the other there is no need to enable both drivers. Signed-off-by: travisn <tnielsen@redhat.com>
The officially supported version of the csi driver is the default baked into rook. If a different version is specified, it is officially unsupported. Therefore, we leave the csi driver version settings unspecified in operator.yaml so we can officially support the defaults. Signed-off-by: travisn <tnielsen@redhat.com>
The flex driver does not need to be started if only the CSI drivers are going to be used. Therefore, we allow the admin to stop launching the rook flex agent with the setting ROOK_ENABLE_FLEX_DRIVER in operator.yaml Signed-off-by: travisn <tnielsen@redhat.com>
The discovery daemon only needs to run when OSDs are being created on raw devices, to detect when new devices are added to the cluster. If OSDs do not need to be configured on devices, the discovery daemonset has no need to be started. Signed-off-by: travisn <tnielsen@redhat.com>
The CSI driver requires the ceph key for administrating the cluster. The key is stored in a secret and specified in the storage class. The operator will now automatically generate a secret named rook-ceph-csi that can be specified by default in the storage class instead of requiring the admin to create it. Signed-off-by: travisn <tnielsen@redhat.com>
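The opt-out settings named in the commit messages above would look roughly like this in the operator deployment. `ROOK_ENABLE_FLEX_DRIVER` is the variable named in the commit message; `ROOK_ENABLE_DISCOVERY_DAEMON` is an assumed analogous name to verify against operator.yaml:

```yaml
# Sketch: disabling the flex driver and discovery daemon when only the
# CSI drivers are in use (variable names assumed; check operator.yaml).
- name: ROOK_ENABLE_FLEX_DRIVER
  value: "false"
- name: ROOK_ENABLE_DISCOVERY_DAEMON
  value: "false"
```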
Force-pushed from e8ad3cb to da8332a.
> @@ -114,6 +120,44 @@ kubectl delete -n rook-ceph cephblockpools.ceph.rook.io replicapool
> kubectl delete storageclass rook-ceph-block
> ```
>
> +## Flex Driver
Do we still want to list the flexvolume driver here?

Users on 1.12 or earlier still need the flex driver. Does it need more explanation in this section?

Nah, that should be fine then. Let's hope the users will read the above note then :-)
> spec:
>   failureDomain: host
>   replicated:
>     size: 3
As we have size: 3 here, should there be a note that this is for production and people hopefully don't try it in Minikube?

There is a note higher on the page about the replication. Do you think we need to repeat it here?

Same here. Let's keep it as is and change it later if we see that people don't keep the higher note in mind.
Description of your changes:
A key feature of the v1.1 release will be that the CSI driver is the recommended driver to replace the flex driver. For now both drivers are supported, but it is anticipated that the flex driver will be deprecated soon both by Rook and K8s.
The changes here include:
@phlogistonjohn I'm sure we will have some conflicts to resolve with #3373
Checklist:
- Code generation (`make codegen`) has been run to update object specifications, if necessary.