
VSAN policy support for storage volume provisioning inside kubernetes #42974

Conversation

@BaluDontu (Contributor) commented Mar 13, 2017:

vSphere users will be able to specify custom Virtual SAN (VSAN) storage capabilities during dynamic volume provisioning. You can now define storage requirements, such as performance and availability, in the form of storage capabilities when a volume is provisioned dynamically. The storage capability requirements are converted into a Virtual SAN policy, which is then pushed down to the Virtual SAN layer when the storage volume (virtual disk) is created. The virtual disk is distributed across the Virtual SAN datastore to meet those requirements.

For example, a user creates a storage class with VSAN storage capabilities:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: slow
provisioner: kubernetes.io/vsphere-volume
parameters:
  hostFailuresToTolerate: "2"
  diskStripes: "1"
  cacheReservation: "20"
  datastore: VSANDatastore
```

The vSphere cloud provider then provisions a virtual disk (VMDK) on VSAN with the policy applied to the disk.

When you know the storage requirements of the application being deployed in a container, you can specify these storage capabilities when you create a storage class in Kubernetes.
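For illustration only, here is a minimal Go sketch of how such storage class parameters could be gathered before being turned into a VSAN policy. The `volumeOptions` struct, the capability set, and the simple name=value accumulation are assumptions made for this sketch; the actual provisioner code in the vSphere cloud provider differs in detail.

```go
package main

import (
	"fmt"
	"strings"
)

// volumeOptions is a simplified, hypothetical stand-in for the cloud
// provider's VolumeOptions struct; only the fields used here are shown.
type volumeOptions struct {
	Datastore          string
	StorageProfileData string
}

// buildVolumeOptions folds StorageClass parameters into volume options.
// Recognized VSAN capabilities are accumulated into StorageProfileData as
// plain name=value pairs; the real provider converts them into an actual
// VSAN policy when it talks to vCenter.
func buildVolumeOptions(params map[string]string) (*volumeOptions, error) {
	opts := &volumeOptions{}
	var caps []string
	for name, value := range params {
		switch strings.ToLower(name) {
		case "datastore":
			opts.Datastore = value
		case "hostfailurestotolerate", "diskstripes", "cachereservation":
			caps = append(caps, name+"="+value)
		default:
			return nil, fmt.Errorf("invalid parameter %q for vsphere-volume", name)
		}
	}
	opts.StorageProfileData = strings.Join(caps, " ")
	return opts, nil
}

func main() {
	opts, err := buildVolumeOptions(map[string]string{
		"hostFailuresToTolerate": "2",
		"diskStripes":            "1",
		"cacheReservation":       "20",
		"datastore":              "VSANDatastore",
	})
	if err != nil {
		panic(err)
	}
	fmt.Printf("datastore=%s capabilities=%q\n", opts.Datastore, opts.StorageProfileData)
}
```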

@pdhamdhere @tthole @abrarshivani @divyenpatel

Release note:

vSphere cloud provider: Allow specifying VSAN Storage Capabilities during dynamic volume provisioning.

@k8s-ci-robot:

Hi @BaluDontu. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with @k8s-bot ok to test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Mar 13, 2017
@k8s-reviewable:

This change is Reviewable

@k8s-github-robot k8s-github-robot added size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. release-note-label-needed labels Mar 13, 2017
@BaluDontu (author):

/approve

@BaluDontu BaluDontu force-pushed the VSANPolicyProvisioningForKubernetesOnKubernetesRepo branch from 42dd5c9 to f060456 Compare March 16, 2017 21:22
@jingxu97 jingxu97 requested a review from luomiao March 22, 2017 22:04
@jingxu97 jingxu97 assigned kerneltime and unassigned freehan Mar 22, 2017
@jingxu97 (Contributor):

@BaluDontu Does this PR require a release note? If it does, please add one

@kerneltime:

@k8s-bot ok to test


The policy settings can be one or more of the following:

* hostFailuresToTolerate: represents NumberOfFailuresToTolerate

Reviewer:

The rendering of this is not clear. Either quote it as fixed width or bold the text strings.

@BaluDontu (author):

I will make it italics.

[Download example](vsphere-volume-sc-vsancapabilities.yaml?raw=true)

Here a persistent volume will be created with the Virtual SAN capabilities - hostFailuresToTolerate to 2 and cachereservation is 20% read cache reserved for storage object. Also the persistent volume will be zeroedthick disk.

Reviewer:

zeroedthick vs zeroedthick

@BaluDontu (author):

I will make it italics.

@@ -66,6 +66,8 @@ const (
ZeroedThickDiskType = "zeroedThick"
VolDir = "kubevols"
RoundTripperDefaultCount = 3
DummyVMName = "kubernetes-helper-vm"

Reviewer:

Is a race condition possible if 2 volumes are being created for the same VM name? Should the name be dynamically generated?

@BaluDontu (author):

We keep this dummy VM around for all create-disk operations.
If two createDisk requests arrive at the same time, vCenter serializes the operations.

Reviewer:

Does the second create-disk call succeed or fail, given that its create-VM call fails?

@BaluDontu (author), Mar 23, 2017:

The first time, when the VM creation fails, the call returns an error and Kubernetes retries; on the retry it finds the existing dummy VM and creates the disk successfully.

Reviewer:

OK. Since the VM is never deleted, I guess this race only matters for the first and second volume creations.
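A rough Go sketch of the find-or-create behavior discussed in this thread, using made-up helper functions (findVM, createVM) rather than the actual govmomi calls: if the dummy VM already exists the create step is skipped, and a lost creation race surfaces as an error that Kubernetes retries, at which point the lookup succeeds.

```go
package vsphere

import (
	"context"
	"fmt"
)

// virtualMachine, findVM and createVM are hypothetical stand-ins for the
// govmomi object and lookups the cloud provider actually uses.
type virtualMachine struct{ path string }

func findVM(ctx context.Context, path string) (*virtualMachine, error)   { return nil, fmt.Errorf("not found") }
func createVM(ctx context.Context, path string) (*virtualMachine, error) { return &virtualMachine{path}, nil }

// findOrCreateDummyVM returns the shared helper VM used for create-disk
// operations. If creation loses a race to a concurrent request, the error
// is returned so Kubernetes retries and finds the existing VM next time.
func findOrCreateDummyVM(ctx context.Context, workingDir, name string) (*virtualMachine, error) {
	path := workingDir + name
	if vm, err := findVM(ctx, path); err == nil {
		return vm, nil // already created by an earlier or concurrent request
	}
	vm, err := createVM(ctx, path)
	if err != nil {
		return nil, fmt.Errorf("creating helper VM %s: %v (retryable)", path, err)
	}
	return vm, nil
}
```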

// 2. Add a new disk to the VM by performing VM reconfigure.
// 3. Detach the new disk from the dummy VM.
if volumeOptions.StorageProfileData != "" {
// Check if the datastore is VSAN if any capability requirements are specified.
Reviewer:

Moving the code into its own private function will keep the attach-disk function readable.

@BaluDontu (author):

I wanted to keep it inline so that all the steps are easy to read when storageProfileData is not empty.
The code is fairly minimal as well.

// Check if the DummyVM exists in kubernetes cluster folder.
// The kubernetes cluster folder - vs.cfg.Global.WorkingDir is where all the nodes in the kubernetes cluster are created.
vmRegex := vs.cfg.Global.WorkingDir + DummyVMName
dummyVM, err := f.VirtualMachine(ctx, vmRegex)
Reviewer:

Check the serialization logic within the controller manager. It might be possible for two requests to try to create the same VM. Using a VM name specific to a pod might work around the serialization problem, or we might have to check on error whether the VM was created.

@BaluDontu (author):

If two requests to create the same VM arrive at exactly the same time, the first succeeds and the second fails. When the second fails, Kubernetes retries, eventually finds the already-created VM, and creates the volume using it.

All PVC creations are retried by Kubernetes; it treats such failures as recoverable errors.

@BaluDontu (author), Mar 24, 2017:

This was clarified in our discussion, so it is no longer needed.

@kerneltime:

Partial review done, will complete review once tests pass.

volumeOptions.DiskFormat = value
case "datastore":
volumeOptions.Datastore = value
case Policy_HostFailuresToTolerate:

Reviewer:

The cases enumerated here can be a bit hard to read, can these be moved into independent methods?

@BaluDontu (author):

That would mean creating a separate method for each VSAN capability. This way it is easier to follow: each case validates the VSAN capability and concatenates it onto volumeOptions.StorageProfileData.

Reviewer:

On further inspection, this case structure and the one below can be combined. Instead of the second nested switch doing only validation, you can combine them into a single validate-and-update step.

Reviewer:

Same opinion as Ritesh on this one. The validation function just does another switch on what you have already switched on here; better to combine them.

@BaluDontu (author):

done!
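A condensed sketch (not the merged code itself) of what the combined validate-and-update switch looks like; the helper name and the profile-string format here are illustrative assumptions, and the bounds are the ones quoted in this review thread.

```go
package vsphere

import (
	"fmt"
	"strconv"
)

// applyVSANCapability is a hypothetical helper illustrating the merged
// structure: each case validates its value and appends it to the
// accumulated storage profile data in a single switch.
func applyVSANCapability(profile *string, name, value string) error {
	switch name {
	case "hostFailuresToTolerate":
		// Limits quoted in this review thread: 0..3, default 1.
		n, err := strconv.Atoi(value)
		if err != nil || n < 0 || n > 3 {
			return fmt.Errorf("invalid hostFailuresToTolerate %q: must be an integer between 0 and 3", value)
		}
	// ... other capabilities (diskStripes, cacheReservation, ...) handled analogously ...
	default:
		return fmt.Errorf("unknown VSAN capability %q", name)
	}
	*profile += fmt.Sprintf("%s=%s ", name, value)
	return nil
}
```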

case Policy_HostFailuresToTolerate:
if !validateVSANCapability(Policy_HostFailuresToTolerate, value) {
return "", 0, fmt.Errorf(`Invalid value for hostFailuresToTolerate in volume plugin %s.
The default value is 1, minimum value is 0 and maximum value is 3.`, v.plugin.GetPluginName())

Reviewer:

The limits here and below should be constants that get put into the log.

@BaluDontu (author), Mar 23, 2017:

You mean the strings like "The default value is 1, minimum value is 0 and maximum value is 3.", "The value can be either 0 or 1", "The minimum percentage is 0 and maximum percentage is 100.", and so on should be constants?

Reviewer:

The values for the numbers, i.e. the numeric limits.

@BaluDontu (author):

done!
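For illustration, a small sketch of the suggestion, with the numeric limits defined once as constants and interpolated into the error message. The constant and function names here are made up; only the limit values come from the thread above.

```go
package vsphere

import "fmt"

// Hypothetical constant names; the values are the limits quoted in this thread.
const (
	defaultHostFailuresToTolerate = 1
	minHostFailuresToTolerate     = 0
	maxHostFailuresToTolerate     = 3
)

// hostFailuresToTolerateError builds the error text from the constants, so
// the numeric limits live in exactly one place.
func hostFailuresToTolerateError(plugin string) error {
	return fmt.Errorf("invalid value for hostFailuresToTolerate in volume plugin %s: default %d, minimum %d, maximum %d",
		plugin, defaultHostFailuresToTolerate, minHostFailuresToTolerate, maxHostFailuresToTolerate)
}
```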

@kerneltime:

@k8s-bot non-cri node e2e test this

@k8s-github-robot k8s-github-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Mar 23, 2017
@BaluDontu BaluDontu force-pushed the VSANPolicyProvisioningForKubernetesOnKubernetesRepo branch 2 times, most recently from c97a300 to a0a4351 Compare March 23, 2017 22:48
@k8s-github-robot k8s-github-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Mar 23, 2017

// Validate the capability requirement for the user specified policy attributes.
func validateVSANCapability(capabilityName string, capabilityValue string) bool {
switch strings.ToLower(capabilityName) {

Reviewer:

This can be cleaned up. The lower-casing conversion can be done once, followed by the validation, instead of duplicating the validation in each case.

@BaluDontu (author):

Done!

@@ -1378,3 +1636,49 @@ func makeDirectoryInDatastore(c *govmomi.Client, dc *object.Datacenter, path str

return err
}

// Get the folder

Reviewer:

Update the comment or rename the method to make clear that it gets the folder for a given VM.

@BaluDontu (author):

done!

@@ -1210,18 +1189,6 @@ func (vs *VSphere) CreateVolume(volumeOptions *VolumeOptions) (volumePath string
datastore = volumeOptions.Datastore
}

// Default diskformat as 'thin'
if volumeOptions.DiskFormat == "" {

Reviewer:

Why not leave this at the top level, so the default isn't set in more than one place?

@BaluDontu (author):

done!

@BaluDontu BaluDontu force-pushed the VSANPolicyProvisioningForKubernetesOnKubernetesRepo branch from c372155 to dbe9483 Compare March 27, 2017 19:43
@luomiao commented Mar 27, 2017:

/lgtm

1 similar comment
@kerneltime:

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Mar 27, 2017
@k8s-github-robot k8s-github-robot added the do-not-merge DEPRECATED. Indicates that a PR should not merge. Label can only be manually applied/removed. label Mar 27, 2017
@BaluDontu (author):

@thockin: I have received an lgtm from @kerneltime and @luomiao. Can you please provide an lgtm if we are good to go? Thanks!

@k8s-github-robot k8s-github-robot added release-note-none Denotes a PR that doesn't merit a release note. and removed release-note-label-needed labels Mar 27, 2017
@thockin (Member) commented Mar 27, 2017:

/approve

@k8s-github-robot:

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: BaluDontu, kerneltime, luomiao, thockin

Needs approval from an approver in each of these OWNERS Files:

You can indicate your approval by writing /approve in a comment
You can cancel your approval by writing /approve cancel in a comment

@k8s-github-robot k8s-github-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Mar 27, 2017
@thockin thockin removed the do-not-merge DEPRECATED. Indicates that a PR should not merge. Label can only be manually applied/removed. label Mar 27, 2017
@k8s-github-robot:

Automatic merge from submit-queue (batch tested with PRs 42835, 42974)

@k8s-github-robot k8s-github-robot merged commit 3843108 into kubernetes:master Mar 28, 2017
@jingxu97 (Contributor):

/approve

@enisoc enisoc added release-note Denotes a PR that will be considered when it comes time to generate release notes. and removed release-note-none Denotes a PR that doesn't merit a release note. labels Apr 20, 2017
k8s-github-robot pushed a commit that referenced this pull request Apr 21, 2017
…42974-upstream-release-1.6

Automatic merge from submit-queue

Automated cherry pick of #41929 #42974 upstream release 1.6

Cherry pick of #41929 #42974 on release-1.6.

#41929: vSphere Cloud Provider: Fstype in storage class
#42974: VSAN policy support for storage volume provisioning inside kubernetes

@saad-ali
k8s-github-robot pushed a commit that referenced this pull request Apr 27, 2017
Automatic merge from submit-queue

e2e tests for VSAN policy support in Kubernetes for vSphere

The following e2e test cases have been implemented for VSAN policy support in Kubernetes for vSphere. These e2e tests are for PR #42974, which is out for review.

A total of 8 test cases covering the 5 scenarios mentioned below are implemented.

Test cases:

1. Validation of VSAN capabilities.
- hostFailuresToTolerate : Minimum 1 and Max 3. Should be integer.
- stripeWidth: Minimum is 1 and Maximum is 12. Should be integer.
- proportionalCapacity: Expressed in percentage. Should be between 0 and 100. Should be integer.
- iopsLimit: Should be greater than 0. Should be integer.

2. Use a VSAN testbed setup. Try valid VSAN capabilities which are supported by the VSAN testbed. Make sure the disk is created with the policy configured on it.
   - Ex: hostFailuresToTolerate=0 and cacheReservation=12
   - Ex: diskStripes=1 and objectSpaceReservation=30

3. Use a VSAN testbed setup. Try valid VSAN capabilities which are not supported by the VSAN testbed. Make sure that the disk creation fails and the PVC status stays pending.

4. Try using VSAN capabilities on a non-VSAN datastore. The PVC status will be pending and an error is returned to the user saying that VSAN capabilities are not supported on a non-VSAN datastore.

5. Repeat the above scenarios with a custom datastore specified by the user.

@jeffvance  @divyenpatel

**Release note**:

```release-note
None
```
k8s-github-robot pushed a commit that referenced this pull request May 23, 2017
Automatic merge from submit-queue

vSphere storage policy support for dynamic volume provisioning

Until now, the vSphere cloud provider has supported configuring a persistent volume with VSAN storage capabilities (#42974). Right now this only works with VSAN.

There are also other use cases:

- The user might need a way to configure a policy on other datastores such as VMFS, NFS, etc.
- Use Storage IO Control or VMCrypt policies for a persistent disk.

We can address both of these use cases by using existing storage policies that have already been created on vCenter using the Storage Policy Based Management (SPBM) service. The user specifies the SPBM policy as part of dynamic provisioning, and:

- The resulting persistent volume will have the policy configured on it.
- The persistent volume will be created on a compatible datastore that satisfies the storage policy requirements.
- If there are multiple compatible datastores, the datastore with the most free space is chosen by default.
- If the user specifies a datastore along with the storage policy, the volume will be created on that datastore if it is compatible. If the user-specified datastore is incompatible, the reasons for the incompatibility are returned to the user as an error.
- The user will also be able to see the association of the persistent volume object with the policy in vCenter once the volume is attached to a node.

For instance, in the example below the volume will be created on the compatible datastore with the most free space that satisfies the "Gold" storage policy requirements.

```
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: fast
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: zeroedthick
  storagepolicyName: Gold
```

For instance, in the example below the vSphere cloud provider checks whether "VSANDatastore" is compatible with the "Gold" storage policy requirements. If yes, the volume is provisioned on "VSANDatastore"; otherwise it errors out that "VSANDatastore" is not compatible, with the exact reason for the failure.

```
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: fast
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: zeroedthick
  storagepolicyName: Gold
  datastore: VSANDatastore
```
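
A rough Go sketch of the datastore selection rule described above, using made-up types rather than the real govmomi/pbm API: honour a user-specified datastore (erroring with the incompatibility reason if it does not satisfy the policy), otherwise pick the compatible datastore with the most free space.

```go
package vsphere

import (
	"fmt"
	"sort"
)

// datastoreInfo is a made-up summary type; the real implementation obtains
// compatibility results from vCenter via the SPBM (pbm) API.
type datastoreInfo struct {
	Name               string
	FreeSpace          int64
	Compatible         bool
	IncompatibleReason string
}

// pickDatastore applies the selection rule described above.
func pickDatastore(candidates []datastoreInfo, userDatastore string) (string, error) {
	if userDatastore != "" {
		for _, ds := range candidates {
			if ds.Name != userDatastore {
				continue
			}
			if !ds.Compatible {
				return "", fmt.Errorf("datastore %s is not compatible with the storage policy: %s", ds.Name, ds.IncompatibleReason)
			}
			return ds.Name, nil
		}
		return "", fmt.Errorf("datastore %s not found", userDatastore)
	}
	var compatible []datastoreInfo
	for _, ds := range candidates {
		if ds.Compatible {
			compatible = append(compatible, ds)
		}
	}
	if len(compatible) == 0 {
		return "", fmt.Errorf("no datastore is compatible with the storage policy")
	}
	// Prefer the compatible datastore with the most free space.
	sort.Slice(compatible, func(i, j int) bool { return compatible[i].FreeSpace > compatible[j].FreeSpace })
	return compatible[0].Name, nil
}
```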

As a part of this change, 4 commits have been added to this PR.

1. Vendor changes for vmware/govmomi
2. Changes to the VsphereVirtualDiskVolumeSource in the Kubernetes API. Added 2 additional fields StoragePolicyName, StoragePolicyID
3. Swagger and OpenAPI spec changes.
4. vSphere Cloud Provider changes to implement the storage policy support.

**Release note**:


```release-note
vSphere cloud provider: vSphere Storage policy Support for dynamic volume provisioning
```
@BaluDontu BaluDontu deleted the VSANPolicyProvisioningForKubernetesOnKubernetesRepo branch September 5, 2017 23:10