
Add block volume support to internal provisioners. #64447

Merged
merged 2 commits into kubernetes:master from jsafrane:block-provision on Jun 1, 2018

Conversation

@jsafrane (Member) commented May 29, 2018

What this PR does / why we need it:
Internal provisioners currently create filesystem PVs even when block PVs are requested. This leads to unbindable PVCs.

In this PR, volume plugins that support block volumes provision block PVs when block is requested. All the other provisioners return a clear error in kubectl describe pvc:

Events:
  Type     Reason              Age               From                         Message
  ----     ------              ----              ----                         -------
  Warning  ProvisioningFailed  7s (x2 over 18s)  persistentvolume-controller  Failed to provision volume with StorageClass "standard": kubernetes.io/cinder does not support block volume provisioning

AWS EBS, Azure Disk, GCE PD and Ceph RBD volume plugins support dynamic provisioning of raw block volumes.
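The per-plugin check described above can be sketched roughly as follows. This is an illustrative, self-contained Go sketch, not the merged Kubernetes code: the type and function names (`VolumeMode`, `Claim`, `checkBlockSupport`) are assumptions standing in for the real `PersistentVolumeClaim` API types.

```go
package main

import "fmt"

// VolumeMode mirrors the PersistentVolumeMode values from the Kubernetes API
// ("Filesystem" or "Block"); the names here are illustrative.
type VolumeMode string

const (
	Filesystem VolumeMode = "Filesystem"
	Block      VolumeMode = "Block"
)

// Claim is a stripped-down stand-in for a PersistentVolumeClaim spec.
// VolumeMode is a pointer because the field is optional in the real API.
type Claim struct {
	VolumeMode *VolumeMode
}

// checkBlockSupport returns an error when the claim asks for a raw block
// volume but the plugin cannot provision one, mirroring the event text
// shown in the PR description.
func checkBlockSupport(pluginName string, c Claim, supportsBlock bool) error {
	if c.VolumeMode != nil && *c.VolumeMode == Block && !supportsBlock {
		return fmt.Errorf("%s does not support block volume provisioning", pluginName)
	}
	return nil
}

func main() {
	mode := Block
	err := checkBlockSupport("kubernetes.io/cinder", Claim{VolumeMode: &mode}, false)
	fmt.Println(err)
}
```

A plugin that does support block (e.g. GCE PD in this PR) would pass `supportsBlock: true` and proceed to provision a PV with `volumeMode: Block` set.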

cc @kubernetes/vmware for vsphere changes
cc @andyzhangx for Azure changes

/assign @copejon @mtanino

@jsafrane (Member Author) commented May 29, 2018

e2e tests will follow

@jsafrane jsafrane force-pushed the jsafrane:block-provision branch from 728bd6f to 76e41dc May 29, 2018

@andyzhangx (Member) left a comment

/lgtm
on azuredisk & azurefile part

@jsafrane (Member Author) commented May 29, 2018

Thanks for the quick review, however,
/lgtm cancel

I need the other reviewers too :-)

@jsafrane (Member Author) commented May 29, 2018

/approve
stupid bot!

@k8s-ci-robot k8s-ci-robot added approved size/L and removed size/M labels May 29, 2018

@jsafrane (Member Author) commented May 29, 2018

Added e2e test. I ran it manually on GCE and AWS. It will be run in alpha test suite(s), since it depends on alpha feature.

@childsb (Member) commented May 29, 2018

This seems like a feature, but it's really a bug fix:

  1. We added block volume support for SOME of the plugins.
  2. The plugins that didn't explicitly get block volume support don't know that they lack it.
  3. The provisioners for the volume types that don't have block volume support still provision regular volumes when the block volume flag is set in the storage class.

Ideally these volume types wouldn't even provision volumes, but enforcing that may be more difficult than enabling them to provision block volumes.

I'd like this in as a fix for 1.11.

@jsafrane (Member Author) commented May 29, 2018

/kind bug
/priority critical-urgent

@humblec (Contributor) commented May 29, 2018

"lgtm" from the Gluster side, thanks Jan!

@erinboyd commented May 29, 2018

@mkimuram @screeley44 @msau42 please review

@jsafrane jsafrane force-pushed the jsafrane:block-provision branch from a64f6cd to 408f62c May 30, 2018

@jsafrane (Member Author) commented May 30, 2018

/retest

@mkimuram (Contributor) commented May 30, 2018

> I think that it's enough to have a check in the Provision() call in each volume plugin. They're responsible for provisioning something that corresponds to the given PVC, incl. block volume support.

I agree with you that this is enough for checking the current internal provisioners.

> Traditionally, we leave this to users / cluster admins. It's their responsibility to run their clusters with compatible components, i.e. with provisioners that support block. And that's what alpha/beta feature gates are good for. I know it's not ideal, however, there is no direct call-like interface between Kubernetes and external provisioners.

However, IIUC, we are on the way to migrating the provisioning logic for all plugins to CSI, and the CSI provisioner will not have a similar place to check whether each CSI driver supports block volumes, because they all share the same Provision() and call CreateVolume() for each driver inside it (*1)(*2).

If we don't check it, we won't be able to make file-based backends, like NFS and the file mode of Gluster/Ceph, co-exist in a blockVolume-enabled environment.
(Assuming that file-based backends can't support blockVolume, and a blockVolume-enabled environment only allows blockVolume-supporting drivers without this kind of check.)

So, I think that we also need to find a way to check it in the external provisioner.
(It doesn't need to be the same logic as the internal provisioner, so we can separate this issue from this PR, though.)

(*1) provisionClaimOperation() for external provisioner.
https://github.com/kubernetes-incubator/external-storage/blob/master/lib/controller/controller.go#L887
(*2) Provision() call for CSI
https://github.com/kubernetes-csi/external-provisioner/blob/master/pkg/controller/controller.go#L280
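The kind of capability check being discussed for the shared external-provisioner path could look roughly like this. This is a hypothetical Go sketch, not the external-storage library's actual API: the `Provisioner` and `BlockProvisioner` interface names, `SupportsBlock()`, and `provisionClaim` are all assumptions for illustration.

```go
package main

import "fmt"

// Provisioner is a shared provisioning entry point; both interface names
// here are illustrative, not the real external-storage API.
type Provisioner interface {
	Provision() (string, error)
}

// BlockProvisioner is an optional capability interface a driver may
// implement to declare raw block volume support.
type BlockProvisioner interface {
	Provisioner
	SupportsBlock() bool
}

// provisionClaim is what a shared controller loop might do before calling
// into the driver: reject block claims unless the driver opts in.
func provisionClaim(p Provisioner, wantBlock bool) (string, error) {
	if wantBlock {
		bp, ok := p.(BlockProvisioner)
		if !ok || !bp.SupportsBlock() {
			return "", fmt.Errorf("provisioner does not support block volume provisioning")
		}
	}
	return p.Provision()
}

// nfsLike is a file-only driver: it implements Provisioner but not
// BlockProvisioner, so block claims are rejected before Provision runs.
type nfsLike struct{}

func (nfsLike) Provision() (string, error) { return "pv-nfs", nil }

func main() {
	if _, err := provisionClaim(nfsLike{}, true); err != nil {
		fmt.Println("rejected:", err)
	}
}
```

The point of the optional-interface pattern is that file-only drivers need no changes: the shared controller rejects block claims for any driver that doesn't explicitly opt in.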

@jsafrane (Member Author) commented May 30, 2018

For CSI we have an external provisioner that will get updated with a particular Kubernetes release and will check what the PVC wants and whether the CSI driver supports it.

@msau42 (Member) commented May 30, 2018

> For CSI we have an external provisioner that will get updated with a particular Kubernetes release

Users could run older versions of the CSI driver, though, so we still need to handle the scenario where the external provisioner ignores any new provisioning parameters.

@screeley44 (Contributor) commented May 30, 2018

aws and e2e LGTM

@mkimuram (Contributor) commented May 31, 2018

To discuss a similar check for the external provisioner, I made a PR for it.

As described there, an issue similar to the CSI one also occurs in the Flex volume plugin.

@rootfs (Member) commented May 31, 2018

/lgtm
/approve

@k8s-ci-robot (Contributor) commented May 31, 2018

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: andyzhangx, jsafrane, rootfs

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@jsafrane (Member Author) commented May 31, 2018

I got reviews for all volume plugins (@rootfs for Cinder and RBD).

/sig storage

@jsafrane (Member Author) commented May 31, 2018

> Action required: This pull request must have the status/approved-for-milestone label applied by a SIG maintainer.

@childsb @saad-ali, can you please make sure this is merged in 1.11?

@dims (Member) commented Jun 1, 2018

/status approved-for-milestone

@k8s-github-robot (Contributor) commented Jun 1, 2018

[MILESTONENOTIFIER] Milestone Pull Request: Up-to-date for process

@andyzhangx @copejon @jsafrane @mtanino @rootfs

Pull Request Labels
  • sig/storage: Pull Request will be escalated to these SIGs if needed.
  • priority/critical-urgent: Never automatically move pull request out of a release milestone; continually escalate to contributor and SIG through all available channels.
  • kind/bug: Fixes a bug discovered during the current release.
@k8s-github-robot (Contributor) commented Jun 1, 2018

/test all [submit-queue is verifying that this PR is safe to merge]

@k8s-ci-robot (Contributor) commented Jun 1, 2018

@jsafrane: The following tests failed, say /retest to rerun them all:

Test name                    | Commit  | Rerun command
pull-kubernetes-integration  | 408f62c | /test pull-kubernetes-integration
pull-kubernetes-e2e-gce      | 408f62c | /test pull-kubernetes-e2e-gce

Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@k8s-github-robot (Contributor) commented Jun 1, 2018

Automatic merge from submit-queue (batch tested with PRs 63348, 63839, 63143, 64447, 64567). If you want to cherry-pick this change to another branch, please follow the instructions here.

@k8s-github-robot k8s-github-robot merged commit 38d2dc7 into kubernetes:master Jun 1, 2018

15 of 18 checks passed

pull-kubernetes-e2e-gce: Job failed.
pull-kubernetes-integration: Job failed.
Submit Queue: Required Github CI test is not green: pull-kubernetes-e2e-gce
cla/linuxfoundation: jsafrane authorized
pull-kubernetes-bazel-build: Job succeeded.
pull-kubernetes-bazel-test: Job succeeded.
pull-kubernetes-cross: Skipped
pull-kubernetes-e2e-gce-100-performance: Job succeeded.
pull-kubernetes-e2e-gce-device-plugin-gpu: Job succeeded.
pull-kubernetes-e2e-gke: Skipped
pull-kubernetes-e2e-kops-aws: Job succeeded.
pull-kubernetes-kubemark-e2e-gce: Job succeeded.
pull-kubernetes-kubemark-e2e-gce-big: Job succeeded.
pull-kubernetes-local-e2e: Skipped
pull-kubernetes-local-e2e-containerized: Skipped
pull-kubernetes-node-e2e: Job succeeded.
pull-kubernetes-typecheck: Job succeeded.
pull-kubernetes-verify: Job succeeded.