
feat(topology): adding support for custom topology keys #94

Merged
merged 4 commits into openebs:master from pawanpraka1:topo
Apr 30, 2020

Conversation

pawanpraka1
Contributor

@pawanpraka1 pawanpraka1 commented Apr 27, 2020

Fixes: #84

Now users can label the nodes with the required topology, and the ZFSPV
driver will support all of those node labels as topology keys.

We should label the nodes first and then deploy the driver, so that the driver is
aware of all the labels a node has. If we want to add labels after the ZFS-LocalPV
driver has been deployed, a restart of all the node agents is required so that the
driver can pick up the labels and add them as supported topology keys.
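For illustration, nodes could be labeled with a custom topology key before the driver is deployed; the key `openebs.io/rack`, its values, and the node names below are made-up examples, not keys the driver requires:

```
# Apply a hypothetical custom topology label to each node
# (key, values, and node names are illustrative).
kubectl label node node-1 openebs.io/rack=rack1
kubectl label node node-2 openebs.io/rack=rack2
```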

Note that if a storageclass uses Immediate binding mode and no topology key is mentioned,
then all the nodes should be labeled using the same key; that is, the same key should be
present on all nodes, though nodes can have different values for that key. If nodes are
labeled with different keys, i.e. some nodes have keys that others lack, then ZFSPV's
default scheduler cannot effectively do volume-count-based scheduling. In that case the
CSI provisioner will pick keys from a random node and prepare the preferred topology list
using the nodes that have those keys defined, and the ZFSPV scheduler will schedule the
PV among those nodes only.
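As a sketch of how a storageclass could target such a custom key, assuming the ZFS-LocalPV provisioner name `zfs.csi.openebs.io` and a `poolname` parameter; the class name, pool name, key, and values are illustrative:

```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv-rack1          # illustrative name
provisioner: zfs.csi.openebs.io      # ZFS-LocalPV CSI driver
parameters:
  poolname: "zfspv-pool"             # illustrative ZFS pool name
volumeBindingMode: Immediate
allowedTopologies:
- matchLabelExpressions:
  - key: openebs.io/rack             # hypothetical custom topology key
    values:
    - rack1
```

With `allowedTopologies` set, provisioning is restricted to nodes carrying the listed key/values, so the uniform-key caveat above does not apply.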

Changed the csi-provisioner image to 1.6.0, as it contains the upstream fix kubernetes-csi/external-provisioner#421.

Signed-off-by: Pawan <pawan@mayadata.io>

@pawanpraka1 pawanpraka1 added the enhancement, pr/hold-merge, and pr/release-note labels Apr 27, 2020
@pawanpraka1 pawanpraka1 added this to the v0.7.0 milestone Apr 27, 2020
@pawanpraka1 pawanpraka1 requested a review from kmova April 27, 2020 06:22
@codecov-io

codecov-io commented Apr 27, 2020

Codecov Report

Merging #94 into master will decrease coverage by 0.67%.
The diff coverage is 0.00%.


@@            Coverage Diff             @@
##           master      #94      +/-   ##
==========================================
- Coverage   23.57%   22.90%   -0.68%     
==========================================
  Files          14       14              
  Lines         475      489      +14     
==========================================
  Hits          112      112              
- Misses        362      376      +14     
  Partials        1        1              
Impacted Files Coverage Δ
pkg/client/k8s/v1alpha1/node.go 0.00% <0.00%> (ø)

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@pawanpraka1 pawanpraka1 removed the pr/hold-merge label Apr 28, 2020
@pawanpraka1 pawanpraka1 added the Need community involvement label Apr 29, 2020
@kmova
Member

kmova commented Apr 29, 2020

Changes look good.

A couple of doc items: can the release notes be added as per the new guidelines, and can the readme/examples be updated to show how to use this feature?

docs/faq.md Outdated


Once we have labeled the nodes, we can install the zfs driver. The driver will pick up the node labels and add them as the supported topology keys. If the driver is already installed and you want to add new topology information, you can label the node with the topology information; a restart of the nodes daemonset is then required so that the driver can pick up the labels and add them as supported topology keys. We should restart the pods in the kube-system namespace with the name openebs-zfs-node-[xxxxx], which are the node agent pods for the ZFS-LocalPV driver.

nit: nodes daemonset -> zfs pv csi driver daemon set (openebs-zfs-node)
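A minimal sketch of that restart, assuming the node agent pods carry the name prefix described above (the pod name suffixes vary per cluster):

```
# List the ZFS-LocalPV node agent pods, then delete them; the daemonset
# recreates each pod, which re-reads the node labels on startup.
kubectl get pods -n kube-system | grep openebs-zfs-node
kubectl delete pod -n kube-system openebs-zfs-node-xxxxx   # substitute the real suffix
```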

@kmova kmova merged commit de9b302 into openebs:master Apr 30, 2020
@pawanpraka1 pawanpraka1 linked an issue Apr 30, 2020 that may be closed by this pull request
kmova pushed a commit that referenced this pull request Apr 30, 2020
There are setups where the nodename is different from the hostname.
The driver uses the nodename and tries to set the "kubernetes.io/hostname"
node label to the nodename, which will fail if the nodename is not the same
as the hostname. Here, we change the key to a unique name so that the driver
can set that key as a node label without modifying or touching the existing node labels.

From now on, the driver will use the "openebs.io/nodename" key to set the PV node
affinity. Old volumes will have "kubernetes.io/hostname" affinity, and they will also
work: after PR #94 the driver supports all the node labels as topology keys, and all
the nodes have the "kubernetes.io/hostname" label set, so old volumes will work without
any issue. For the same reason, old storage classes which use "kubernetes.io/hostname"
as the topology key will keep working, as that key is still supported.

This fixes the issue where the driver was trying to create the PV on the master node:
since the master node has the "kubernetes.io/hostname" label, it was also becoming a
valid candidate for provisioning the PV. After changing the key to a unique name, the
driver, which does not run on the master node, will not set the "openebs.io/nodename"
label on that node, so it will never become a valid candidate for provisioning the volume.

Signed-off-by: Pawan <pawan@mayadata.io>
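For context, a PV provisioned after this change would carry node affinity of the standard Kubernetes form sketched below; this is an excerpt of a PersistentVolume spec, and the node name value is illustrative:

```
# Excerpt of a PersistentVolume spec showing the affinity the driver
# now expresses via the new key (node name is illustrative).
nodeAffinity:
  required:
    nodeSelectorTerms:
    - matchExpressions:
      - key: openebs.io/nodename
        operator: In
        values:
        - node-1
```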
@pawanpraka1 pawanpraka1 deleted the topo branch April 30, 2020 09:54
Development

Successfully merging this pull request may close these issues.

add support to have custom topology key for ZFSPV
3 participants