Topology updates to Kubernetes CSI implementation #2034
Conversation
Force-pushed e22b28b to d7709a3
@@ -116,7 +114,7 @@ Provisioning and deletion operations are handled using the existing [external pr

In short, to dynamically provision a new CSI volume, a cluster admin would create a `StorageClass` with the provisioner corresponding to the name of the external provisioner handling provisioning requests on behalf of the CSI volume driver.

- To provision a new CSI volume, an end user would create a `PersistentVolumeClaim` object referencing this `StorageClass`. The external provisioner will react to the creation of the PVC and issue the `CreateVolume` call against the CSI volume driver to provision the volume. The `CreateVolume` name will be auto-generated as it is for other dynamically provisioned volumes. The `CreateVolume` capacity will be taken from the `PersistentVolumeClaim` object. The `CreateVolume` parameters will be passed through from the `StorageClass` parameters (opaque to Kubernetes). Once the operation completes successfully, the external provisioner creates a `PersistentVolume` object to represent the volume using the information returned in the `CreateVolume` response. The `PersistentVolume` object is bound to the `PersistentVolumeClaim` and available for use.
+ To provision a new CSI volume, an end user would create a `PersistentVolumeClaim` object referencing this `StorageClass`. The external provisioner will react to the creation of the PVC and issue the `CreateVolume` call against the CSI volume driver to provision the volume. The `CreateVolume` name will be auto-generated as it is for other dynamically provisioned volumes. The `CreateVolume` capacity will be taken from the `PersistentVolumeClaim` object. The `CreateVolume` parameters will be passed through from the `StorageClass` parameters (opaque to Kubernetes). If the `PersistentVolumeClaim` has the `selectedNode` annotation set (TODO verult update to actual annotation name) (only added if delayed volume binding is enabled in the `StorageClass`), the provisioner will get relevant topology labels from the corresponding `Node` and pass them to the `CreateVolume` call as preferred topology. `AllowedTopologies`from the `StorageClass` is passed through as permitted topology. Once the operation completes successfully, the external provisioner creates a `PersistentVolume` object to represent the volume using the information returned in the `CreateVolume` response. The topology of the returned volume is translated to the `PersistentVolume` `NodeAffinity` field. The `PersistentVolume` object is then bound to the `PersistentVolumeClaim` and available for use.
nit: Spacing around AllowedTopologies
Can you describe this in more detail: "The topology of the returned volume is translated to the `PersistentVolume` `NodeAffinity` field"?
The topology format is ultimately controlled by the user, so the same applies to the translation. I'll clarify the steps for the recommended provisioner, though.
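To make that translation concrete, here is a minimal Go sketch of what a provisioner in the spirit of the recommended one might do. The types are local stand-ins that mirror the shapes of `k8s.io/api/core/v1`, and `topologyToNodeAffinity` is a hypothetical helper, not an existing API:

```go
package main

import "fmt"

// Minimal stand-ins for the PV node-affinity types; they mirror the shapes in
// k8s.io/api/core/v1 but are kept local so the sketch is self-contained.
type NodeSelectorRequirement struct {
	Key      string
	Operator string
	Values   []string
}

type NodeSelectorTerm struct {
	MatchExpressions []NodeSelectorRequirement
}

// topologyToNodeAffinity translates the accessible topology segments returned
// in a CreateVolume response into PV NodeAffinity terms. Terms are ORed by
// the scheduler, so each segment becomes one term; the key/value pairs within
// a segment are ANDed, so each becomes one "In" requirement.
func topologyToNodeAffinity(segments []map[string]string) []NodeSelectorTerm {
	terms := make([]NodeSelectorTerm, 0, len(segments))
	for _, seg := range segments {
		var term NodeSelectorTerm
		for k, v := range seg {
			term.MatchExpressions = append(term.MatchExpressions,
				NodeSelectorRequirement{Key: k, Operator: "In", Values: []string{v}})
		}
		terms = append(terms, term)
	}
	return terms
}

func main() {
	// A volume accessible from two zones yields two ORed terms.
	fmt.Printf("%+v\n", topologyToNodeAffinity([]map[string]string{
		{"com.example.csi-driver/zone": "us-central1-a"},
		{"com.example.csi-driver/zone": "us-central1-b"},
	}))
}
```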
* The format of each key/value pair must match those in `PersistentVolume` and `StorageClass` objects. When a `StorageClass` has delayed volume binding enabled, the scheduler uses the topology information of a `Node` in the following ways:
  1. During dynamic provisioning, the scheduler selects a candidate node for the provisioner by comparing each `Node`'s topology with the `AllowedTopology` in the `StorageClass`. (TODO verult Link to volume scheduling design doc)
  1. During volume binding and pod scheduling, the scheduler selects a candidate node for the pod by comparing `Node` topology with `VolumeNodeAffinity` in `PersistentVolume`s. (TODO verult Link to volume scheduling design doc)
* Must avoid collision with topology specified from sources other than CSI.
Must avoid collision with labels specified from...?
Requirements:
* Must adhere to the [label format](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set).
* Must support different drivers on the same node.
* The format of each key/value pair must match those in `PersistentVolume` and `StorageClass` objects. When a `StorageClass` has delayed volume binding enabled, the scheduler uses the topology information of a `Node` in the following ways:
"The format of each key/value pair must match those in `PersistentVolume` and `StorageClass` objects." What does this mean?
Proposal: `"csi.kubernetes.io.csi-driver.example.com/rack": "rack1"`
dash for delimiter is allowed?
Yup, but not underscore
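For reference, a small Go sketch of the two character-set rules at play, using the regular expressions from Kubernetes's documented label syntax (DNS-1123 subdomain for the prefix, qualified name after the slash); `validTopologyLabelKey` is an illustrative helper, not an existing API:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// The prefix of a label key must be a DNS-1123 subdomain: lowercase
// alphanumeric segments that may contain '-', joined by '.'. Underscores are
// not allowed here, which is why "csi-driver" is fine but "csi_driver" is not.
var dns1123Subdomain = regexp.MustCompile(
	`^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$`)

// The name after '/' is more permissive: alphanumerics plus '-', '_' and '.'.
var labelName = regexp.MustCompile(`^[A-Za-z0-9]([-A-Za-z0-9_.]*[A-Za-z0-9])?$`)

func validTopologyLabelKey(key string) bool {
	prefix, name, found := strings.Cut(key, "/")
	if !found || len(prefix) > 253 || len(name) > 63 {
		return false
	}
	return dns1123Subdomain.MatchString(prefix) && labelName.MatchString(name)
}

func main() {
	fmt.Println(validTopologyLabelKey("csi.kubernetes.io.csi-driver.example.com/rack")) // true
	fmt.Println(validTopologyLabelKey("csi.kubernetes.io.csi_driver.example.com/rack")) // false
}
```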
Force-pushed c195c33 to 48cfac9, then 48cfac9 to 2290b32
Addressed comments in new commit /cc @msau42
/assign
Added more discussion around PV NodeAffinity
Force-pushed 0068e55 to 285fd99, then 285fd99 to 5470ce3
…validation and default permitted topology
Force-pushed 5470ce3 to 547f14b
Add CSINodeInfo object
Force-pushed 16b8cae to f31ea27, then f31ea27 to 74b121f
/cc @thockin
* If the `NodeGetInfo` call fails, kubelet must delete any previous NodeID for this driver.
* When the kubelet plugin unregistration mechanism is implemented, delete the NodeID and topology keys when a driver is unregistered.

This annotation is deprecated and will be removed according to deprecation policy (1 year after deprecation). TODO mark deprecation date.
Can you move this deprecation notice to the beginning of this bullet
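Returning to the quoted bullets, here is a sketch of the kubelet-side rule under assumed shapes: the `NodeInfo` struct mirrors the CSI `NodeGetInfoResponse`, and `driverStore` is a hypothetical stand-in for wherever kubelet records the NodeID and topology keys (the annotation today, `CSINodeInfo` going forward):

```go
package main

import (
	"context"
	"errors"
	"fmt"
)

// NodeInfo assumes the shape of the CSI NodeGetInfoResponse: a node ID plus
// the driver's accessible topology for this node.
type NodeInfo struct {
	NodeID             string
	AccessibleTopology map[string]string
}

// driverStore is a hypothetical stand-in for kubelet's per-driver bookkeeping.
type driverStore map[string]NodeInfo

// registerDriver applies the rule quoted above: record NodeID and topology
// keys on success, and delete any previously stored NodeID on failure.
func registerDriver(ctx context.Context, store driverStore, driver string,
	nodeGetInfo func(context.Context) (NodeInfo, error)) error {
	info, err := nodeGetInfo(ctx)
	if err != nil {
		delete(store, driver) // a stale NodeID must not survive a failed call
		return fmt.Errorf("NodeGetInfo for %q failed: %w", driver, err)
	}
	store[driver] = info
	return nil
}

func main() {
	store := driverStore{}
	ok := func(context.Context) (NodeInfo, error) {
		return NodeInfo{"node-1", map[string]string{"com.example/zone": "z1"}}, nil
	}
	fail := func(context.Context) (NodeInfo, error) {
		return NodeInfo{}, errors.New("driver unavailable")
	}
	_ = registerDriver(context.Background(), store, "csi-driver.example.com", ok)
	_ = registerDriver(context.Background(), store, "csi-driver.example.com", fail)
	fmt.Println(len(store)) // 0: the failed call removed the stale entry
}
```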
To provision a new CSI volume, an end user would create a `PersistentVolumeClaim` object referencing this `StorageClass`. The external provisioner will react to the creation of the PVC and issue the `CreateVolume` call against the CSI volume driver to provision the volume. The `CreateVolume` name will be auto-generated as it is for other dynamically provisioned volumes. The `CreateVolume` capacity will be taken from the `PersistentVolumeClaim` object. The `CreateVolume` parameters will be passed through from the `StorageClass` parameters (opaque to Kubernetes).

If the `PersistentVolumeClaim` has the `volume.alpha.kubernetes.io/selected-node` annotation set (only added if delayed volume binding is enabled in the `StorageClass`), the provisioner will get relevant topology keys from the corresponding `CSINodeInfo` instance and the topology values from `Node` labels and use them to generate preferred topology in the `CreateVolume()` request. If the annotation is unset, preferred topology will not be specified. `AllowedTopologies` from the `StorageClass` is passed through as requisite topology. If `AllowedTopologies` is unspecified, the provisioner will pass in a set of aggregated topology values across the whole cluster as requisite topology.
Preferred topology should always be set?
If the PVC has immediate binding and doesn't follow a StatefulSet naming scheme (or the external provisioner has StatefulSet spreading logic disabled), preferred topology will not be set
To perform this topology aggregation, the external provisioner will cache all existing Node objects. To prevent a compromised node from affecting the provisioning process, it will pick a single node as the source of truth for keys, instead of relying on keys stored in `CSINodeInfo` for each node object. For PVCs to be provisioned with late binding, the selected node is the source of truth; otherwise a random node is picked. The provisioner will then iterate through all cached nodes, aggregating labels using those keys. Note that users are strongly encouraged to use late binding for volumes that involve topology; if they choose not to, they must ensure topology keys are the same across the cluster, otherwise they may see undefined behavior in topology aggregation.
Can you clarify here that we only aggregate topologies for Nodes that have the same topology key "schema" as the selected or randomly selected node?
I don't think it's a requirement that all topology keys must be the same across all nodes.
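A sketch of the aggregation under that reading: only nodes carrying the source node's full key schema contribute segments. The `Node` struct and `aggregateRequisite` helper are illustrative stand-ins, not the external provisioner's actual code:

```go
package main

import "fmt"

// Node is a pared-down stand-in for a cached Node object: its labels plus
// the topology keys its CSINodeInfo reports for the driver.
type Node struct {
	Name         string
	Labels       map[string]string
	TopologyKeys []string
}

// aggregateRequisite collects topology segments across the cluster using the
// source-of-truth node's keys (the selected node for late binding, otherwise
// an arbitrary node). Per the review discussion, nodes whose labels do not
// carry the same key schema are skipped rather than treated as errors.
func aggregateRequisite(source Node, nodes []Node) []map[string]string {
	seen := map[string]bool{}
	var segments []map[string]string
	for _, n := range nodes {
		seg := map[string]string{}
		complete := true
		for _, k := range source.TopologyKeys {
			v, ok := n.Labels[k]
			if !ok {
				complete = false // different schema: skip this node
				break
			}
			seg[k] = v
		}
		// fmt prints maps with sorted keys (Go 1.12+), so this fingerprint
		// is stable and deduplicates identical segments.
		if fp := fmt.Sprint(seg); complete && !seen[fp] {
			seen[fp] = true
			segments = append(segments, seg)
		}
	}
	return segments
}

func main() {
	zone := "com.example.csi-driver/zone"
	a := Node{"n1", map[string]string{zone: "z1"}, []string{zone}}
	b := Node{"n2", map[string]string{zone: "z2"}, []string{zone}}
	c := Node{"n3", map[string]string{"other/key": "x"}, nil} // skipped
	fmt.Println(aggregateRequisite(a, []Node{a, b, c}))
}
```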
To generate preferred topology, the external provisioner will generate N segments for preferred topology in the `CreateVolume()` call, where N is the size of requisite topology. Multiple segments are included to support volumes that are available across multiple topological segments. The topology segment from the selected node will always be the first in preferred topology. All other segments are some reordering of the remaining requisite topologies such that, given a requisite topology (or any arbitrary reordering of it) and a selected node, the set of preferred topology is guaranteed to always be the same.

If immediate volume binding mode is set and the PVC follows the StatefulSet naming format, then the provisioner will choose, as the first segment in preferred topology, a segment from requisite topology based on the PVC name that ensures an even spread of topology across the StatefulSet's volumes. The logic will be similar to the name hashing logic inside the GCE Persistent Disk provisioner. Other segments in preferred topology are obtained the same way as described above. This feature will be flag-gated in the external provisioner provided as part of the recommended deployment method.
"are obtained" => "are ordered"
* Once NodeRestriction is moved to the newer model (see [here](https://github.com/kubernetes/community/pull/911) for context), for each new label prefix introduced in a new driver, the cluster admin has to configure NodeRestriction to allow the driver to update labels with that prefix. Cluster installations could include certain prefixes for pre-installed drivers by default. This is less convenient than the alternative, which can allow editing of all CSI drivers by default using the "csi.kubernetes.io" prefix, but cluster admins often have to whitelist those prefixes anyway (for example 'cloud.google.com').

Considerations:
* Upon driver deletion/upgrade/downgrade, stale labels will be left untouched. It's difficult for the driver to decide whether other sources rely on this label.
Especially in situations where the label is shared with components outside of CSI.
* During driver installation/upgrade/downgrade, controller deployment must be brought down before node deployment, and node deployment must be deployed before the controller deployment, because provisioning relies on up-to-date node information.
This was an issue when we were going for the "all nodes must have the same keys" design. But is it still an issue now that we can handle nodes with different keys?
I can't think of an issue immediately. Will update this line to discuss why this isn't an issue and what's the behavior if both deployments are changed in parallel. I have a comment in the Upgrades & Downgrades sections about recommending users to still upgrade those deployments independently, and I'll leave that there just to be safe.
One potential issue: if only topology values change while keys remain the same, and if AllowedTopologies is not specified, requisite topology will contain both old and new topology values, and the CSI driver may fail the `CreateVolume()` call. Given that a CSI driver should be backward compatible, this is more of an issue when a node rolling upgrade happens before the controller update.
I think considering that topology labels can be shared with other components, we cannot override existing topology values if the key already exists on the node.
I think users should be free to do so if they can guarantee other components can handle new values too. Maybe we could call this out as a caveat when performing driver upgrades.
I would prefer in that case to have the user delete the label manually first instead of us just blindly overriding it and potentially messing up other components.
* Topology keys inside `CSINodeInfo` must reflect the topology keys from drivers currently installed on the node. If no driver is installed, the collection must be empty.
It doesn't seem like the driver uninstall/crash case is being handled by this design. Plus there's always going to be a race condition in updating CSINodeInfo to remove uninstalled drivers, so I think the external-provisioner will need to handle this case.
A similar race problem arises as well with node rolling upgrades - we could end up with a requisite topology that doesn't contain the selected node.
Synced with @msau42 offline. Like she said, there is always going to be a chance the provisioner has out-of-date topology information on the node, but if the provisioner acts on that case we are OK - it's not any worse than how CSI works today, since volumes could be provisioned in a topology where they're not accessible.

There are a few things we could do to minimize this possibility. For example, if the driver info could be deleted from `CSINodeInfo` as soon as possible after a node driver deletion or crash, the provisioner could do the following:
- Never use that node as the source of truth for topology keys.
- Not include that node in the aggregated requisite topology.

In addition, the scheduler could check `CSINodeInfo` and invalidate nodes that don't have the appropriate driver information as potential scheduling candidates.

Will update the doc.
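The scheduler-side mitigation could look roughly like this; `CSINodeInfo` here is a pared-down map stand-in and `filterNodesWithDriver` is an illustrative helper, not scheduler code:

```go
package main

import "fmt"

// CSINodeInfo stand-in: driver name -> topology keys reported on that node.
type CSINodeInfo map[string][]string

// filterNodesWithDriver drops scheduling candidates whose CSINodeInfo lacks
// an entry for the driver, the mitigation suggested above for node drivers
// that were uninstalled or crashed.
func filterNodesWithDriver(infos map[string]CSINodeInfo, nodes []string, driver string) []string {
	var out []string
	for _, n := range nodes {
		if _, ok := infos[n][driver]; ok {
			out = append(out, n)
		}
	}
	return out
}

func main() {
	infos := map[string]CSINodeInfo{
		"node-1": {"csi-driver.example.com": {"com.example/zone"}},
		"node-2": {}, // driver was uninstalled or crashed here
	}
	fmt.Println(filterNodesWithDriver(infos, []string{"node-1", "node-2"}, "csi-driver.example.com"))
	// Output: [node-1]
}
```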
…rations; statefulset volume spreading
Force-pushed b297db9 to 7302f0a
Addressed comments
I don't want to block this, but I do want to point out that we've discussed moving CSI drivers into sandboxes. This would yield {driver x runtime} tuples, rather than {driver x node}. Again, too early to do anything about it, but maybe worth keeping in mind. See storage attack surfaces for more details.
Considerations:
* Upon driver deletion/upgrade/downgrade, stale labels will be left untouched. It's difficult for the driver to decide whether other components outside CSI rely on this label.
* During driver installation/upgrade/downgrade, controller deployment must be brought down before node deployment, and node deployment must be deployed before the controller deployment, because provisioning relies on up-to-date node information. One possible issue: if only topology values change while keys remain the same, and AllowedTopologies is not specified, requisite topology will contain both old and new topology values, and the CSI driver may fail the `CreateVolume()` call. Given that a CSI driver should be backward compatible, this is more of an issue when a node rolling upgrade happens before the controller update. It's not an issue if keys are changed as well, since requisite and preferred topology generation handles that appropriately.
Is this an issue if we don't override the label values? I would say changing the value of an existing label is an incompatible change. If a driver really needs to introduce a new value, then it should be through a new key.
ACK, will update this paragraph and also indicate that a driver will be rejected by the node if there is a topology value conflict
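A sketch of that rejection rule; `checkTopologyConflicts` is a hypothetical helper illustrating the agreed behavior, not kubelet's actual code:

```go
package main

import "fmt"

// checkTopologyConflicts sketches the rejection rule discussed above: if a
// topology label key already exists on the Node with a different value, the
// driver's registration is rejected instead of silently overwriting the
// label, since other components may depend on the existing value.
func checkTopologyConflicts(nodeLabels, driverTopology map[string]string) error {
	for k, v := range driverTopology {
		if existing, ok := nodeLabels[k]; ok && existing != v {
			return fmt.Errorf("label %q: node has %q, driver reports %q; remove the stale label before registering", k, existing, v)
		}
	}
	return nil
}

func main() {
	node := map[string]string{"com.example.csi-driver/zone": "z1"}
	err := checkTopologyConflicts(node, map[string]string{"com.example.csi-driver/zone": "z2"})
	fmt.Println(err) // registration rejected; the admin must clean up first
}
```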
Automatic merge from submit-queue (batch tested with PRs 64283, 67910, 67803, 68100). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.

**CSI Cluster Registry and Node Info CRDs**

**What this PR does / why we need it**: Introduces the new `CSIDriver` and `CSINodeInfo` API objects as proposed in kubernetes/community#2514 and kubernetes/community#2034.

**Which issue(s) this PR fixes**: Fixes kubernetes/enhancements#594

**Special notes for your reviewer**: Per the discussion in https://groups.google.com/d/msg/kubernetes-sig-storage-wg-csi/x5CchIP9qiI/D_TyOrn2CwAJ the API is being added to the staging directory of the `kubernetes/kubernetes` repo, because the consumers will be the attach/detach controller and possibly kubelet, but it will be installed as a CRD (because we want to move in the direction where the API server is Kubernetes agnostic, and all Kubernetes-specific types are installed).

**Release note**:
```release-note
Introduce CSI Cluster Registration mechanism to ease CSI plugin discovery and allow CSI drivers to customize Kubernetes' interaction with them.
```

CC @jsafrane

Kubernetes-commit: 85300f4f5dd7b0bd36d0538fb6c3255c06d5e6c2
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.

**CSI Node info registration in kubelet**

**Which issue(s) this PR fixes**: Fixes #67683

**Special notes for your reviewer**:
Feature issue: kubernetes/enhancements#557
Design doc: kubernetes/community#2034

Missing pieces:
* CSI client retry and exponential backoff logic
* CSINodeInfo object validation
* e2e test with all the CSI machinery

An RBAC rule is also added to support external-provisioner topology updates.

**Release note**:
```release-note
Registers volume topology information reported by a node-level Container Storage Interface (CSI) driver. This enables Kubernetes support of CSI topology mechanisms.
```

Kubernetes-commit: f26556cc14e2a01a1904805566e082484c1f33f9
This was implemented in kubernetes-csi/external-provisioner#141 /lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: saad-ali.
Topology updates to Kubernetes CSI implementation
This proposal depends on the following design changes:
- Topology-aware dynamic provisioning design: #1857
- CSI spec modifications related to topology: container-storage-interface/spec#188
/sig storage
/assign @saad-ali
/cc @vladimirvivien