Add keyring parameter in Ceph RBD provisioner #58287

Merged
merged 1 commit into kubernetes:master on Feb 7, 2018

Conversation

@madddi
Contributor

madddi commented Jan 15, 2018

What this PR does / why we need it:
userSecretName is currently a mandatory parameter for the kubernetes.io/rbd storage provisioner. Ceph RBD persistent volumes, however, don't need the secretRef: if no secret is given, a local keyring file (/etc/ceph/keyring) on the node is used. The storage provisioner should support the same behaviour. If no userSecretName parameter is given, no secretRef should be created in the resulting persistent volume.
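
To make the intended usage concrete, here is a sketch of a kubernetes.io/rbd StorageClass parameter set once the keyring parameter exists. It is written as a Go map purely for illustration; the monitor, pool, and admin values are placeholders, and only the userSecretName/keyring semantics come from this PR:

```go
package main

import "fmt"

func main() {
	// Placeholder parameters for a kubernetes.io/rbd StorageClass.
	// userSecretName is deliberately omitted: per this PR, the provisioner
	// then emits a PV without a secretRef, and the node-local keyring file
	// is used for authentication instead.
	params := map[string]string{
		"monitors":        "10.0.0.1:6789",     // placeholder Ceph monitor
		"pool":            "kube",              // placeholder pool name
		"adminId":         "admin",             // placeholder admin user
		"adminSecretName": "ceph-admin-secret", // placeholder secret name
		"keyring":         "/etc/ceph/keyring", // new parameter from this PR
	}
	fmt.Println(params)
}
```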

Special notes for your reviewer:

Release note:

Add "keyring" parameter for Ceph RBD provisioner
@jsafrane

Member

jsafrane commented Jan 15, 2018

/ok-to-test

@rootfs

Member

rootfs commented Jan 17, 2018

The code works, but I am not convinced this is the right way to do it. Deploying keyrings is very platform- and admin-dependent.

@madddi

Contributor

madddi commented Jan 17, 2018

@rootfs The usage of keyring files is already supported with Ceph RBD PersistentVolumes. All this change does is make the configuration options of the StorageClass consistent with the PersistentVolumes. Deploying a keyring file is optional, and the old way will continue to work.

Deploying keyring files on the nodes allows platform operators to use RBD volumes without making the Ceph secret visible to platform users. It also makes it possible to restrict the usage of RBD volumes to specific nodes by deploying the keys to just those hosts.

@rootfs

Member

rootfs commented Jan 18, 2018

@madddi For that use case, the provisioner should be able to accept a parameter for the keyring. Make that a new param and add a log message stating that the keyring is used for the PV.

@madddi madddi changed the title from "make Ceph user secret optional in RBD provisioner" to "Add keyring parameter in Ceph RBD provisioner" on Jan 18, 2018

@madddi

Contributor

madddi commented Jan 18, 2018

@rootfs Sounds good, I updated my commit.

@madddi

Contributor

madddi commented Jan 18, 2018

/retest

@madddi

Contributor

madddi commented Jan 18, 2018

The test seems to fail due to #58387

@@ -609,6 +610,8 @@ func (r *rbdVolumeProvisioner) Provision() (*v1.PersistentVolume, error) {
		secretName = v
	case "usersecretnamespace":
		secretNamespace = v
	case "keyring":
		keyring = v

@rootfs

rootfs Jan 19, 2018

Member

validate this is an absolute file path

@madddi

madddi Jan 21, 2018

Contributor

I'm having trouble finding a clean solution for this. Path formats are OS-dependent, and I can't find a package in the standard library providing the necessary functionality. Are there any examples in the Kubernetes code?

@rootfs

rootfs Jan 22, 2018

Member

The keyring path is on Linux, so a Linux file path regex will do.

This validation should go in the RBD volume and persistent volume source validation functions.

Admittedly, this is not the fault of this PR; it is something we should have done earlier.

@madddi

madddi Jan 22, 2018

Contributor

OK, thanks. I'm going to take a look at the validation package and try to add the necessary check.

@madddi

madddi Jan 22, 2018

Contributor

I've added a simple regex validation and a corresponding test.

@k8s-ci-robot k8s-ci-robot added size/M and removed size/S labels Jan 22, 2018

@madddi

Contributor

madddi commented Jan 22, 2018

Test failures seem to be related to #58578

@@ -75,6 +75,8 @@ var iscsiInitiatorIqnRegex = regexp.MustCompile(`iqn\.\d{4}-\d{2}\.([[:alnum:]-.
var iscsiInitiatorEuiRegex = regexp.MustCompile(`^eui.[[:alnum:]]{16}$`)
var iscsiInitiatorNaaRegex = regexp.MustCompile(`^naa.[[:alnum:]]{32}$`)
var filePathRegex = regexp.MustCompile(`^(?:/[^/]+)+$`)

@rootfs

rootfs Jan 22, 2018

Member

Exclude some special characters: try "^(?:/[^/!;` ]+)+$" and add a test case, e.g. /etc/ceph/keyring;/etc/foo/bar
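
For reference, a minimal test sketch (names and file layout are illustrative, not the PR's actual test) exercising the tightened regex and the suggested negative case:

```go
package validation

import (
	"regexp"
	"testing"
)

// The tightened expression suggested above: path segments must not
// contain '/', '!', ';', '`' or spaces.
var filePathRegex = regexp.MustCompile("^(?:/[^/!;` ]+)+$")

func TestKeyringFilePathRegex(t *testing.T) {
	cases := map[string]bool{
		"/etc/ceph/keyring":              true,  // plain absolute path
		"/etc/ceph/keyring;/etc/foo/bar": false, // the suggested negative case
		"etc/ceph/keyring":               false, // relative paths are rejected
	}
	for path, want := range cases {
		if got := filePathRegex.MatchString(path); got != want {
			t.Errorf("MatchString(%q) = %v, want %v", path, got, want)
		}
	}
}
```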

@rootfs

Member

rootfs commented Jan 22, 2018

thanks, one more change request.

@rootfs

Member

rootfs commented Jan 22, 2018

/approve no-issue

@madddi

Contributor

madddi commented Jan 23, 2018

@rootfs I've updated the validation regex and added the test.

/assign @smarterclayton

@rootfs

Member

rootfs commented Jan 23, 2018

/lgtm
@liggitt can you approve? thanks

@k8s-ci-robot k8s-ci-robot added the lgtm label Jan 23, 2018

@@ -1201,6 +1207,9 @@ func validateRBDPersistentVolumeSource(rbd *core.RBDPersistentVolumeSource, fldP
	if len(rbd.RBDImage) == 0 {
		allErrs = append(allErrs, field.Required(fldPath.Child("image"), ""))
	}
	if rbd.Keyring != "" && !filePathRegex.MatchString(rbd.Keyring) {

@liggitt

liggitt Jan 26, 2018

Member

this is an existing field... can't tighten validation on it now

@madddi

madddi Jan 26, 2018

Contributor

@rootfs asked for validation of the API field because there should have been validation in the first place. Is this something you guys want to discuss? I'm fine either way and can move the validation to the provisioner.

FWIW: I think API field validation is the better way as it is clearer for the user. Storage provisioner errors only show up in the master logs.

@rootfs

rootfs Jan 29, 2018

Member

@liggitt Validating the keyring is something that should have been done but was not addressed (the fault is mine).

@liggitt

liggitt Jan 29, 2018

Member

> there should have been validation in the first place. Is this something you guys want to discuss? I'm fine either way and can move the validation to the provisioner.
>
> FWIW: I think API field validation is the better way as it is clearer for the user.

I don't disagree, but tightening validation on existing data can break upgrade/migration and should only be done when absolutely required.

> @liggitt Validating the keyring is something that should have been done but was not addressed.

Understood, but on a shipped field it is generally too late.

@madddi

madddi Jan 30, 2018

Contributor

@liggitt I can fully understand why you wouldn't want to change the behaviour of an existing API field. The validation, however, is not very strict: any existing setup that would violate the field validation wouldn't have worked in the first place, so introducing the validation should not have any effect on clusters with a working configuration. On top of this, the validation is much easier to test when done in the API.

If I understand you correctly though, changing stable API fields is simply not up for discussion for policy reasons. If this is the case and @rootfs agrees, I'll move the validation to the provisioner.

@liggitt

liggitt Jan 31, 2018

Member

> If I understand you correctly though, changing stable API fields is simply not up for discussion for policy reasons

In general, that's the policy. Tightening validation takes a local problem (a particular volume wouldn't work) and makes it a global problem on an upgrade (migration fails, and the offending object cannot be deleted). There are exceptions to tightening existing validation, but the bar is generally higher than "we want to fail earlier for usability".

@madddi

madddi Jan 31, 2018

Contributor

One could argue that using an unvalidated keyring path with exec.Run() is potentially insecure. PVs should only be managed by cluster admins, but still.

@rootfs If this is fine for you, too, I'll move the validation to the provisioner.

@rootfs

rootfs Feb 1, 2018

Member

@madddi ok, let's defer validation to rbd volume. Thanks

@@ -1190,6 +1193,9 @@ func validateRBDVolumeSource(rbd *core.RBDVolumeSource, fldPath *field.Path) fie
	if len(rbd.RBDImage) == 0 {
		allErrs = append(allErrs, field.Required(fldPath.Child("image"), ""))
	}
	if rbd.Keyring != "" && !filePathRegex.MatchString(rbd.Keyring) {

@liggitt

liggitt Jan 26, 2018

Member

Same here: tightening validation causes problems in the API. It can be validated at the point of use.

@k8s-ci-robot k8s-ci-robot added size/S and removed size/M labels Feb 2, 2018

@k8s-ci-robot

Contributor

k8s-ci-robot commented Feb 2, 2018

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: madddi, rootfs

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these OWNERS Files:

You can indicate your approval by writing /approve in a comment
You can cancel your approval by writing /approve cancel in a comment

@k8s-merge-robot k8s-merge-robot removed the lgtm label Feb 2, 2018

@madddi

Contributor

madddi commented Feb 2, 2018

@rootfs @liggitt I've moved the validation to the rbd package.

/unassign @liggitt @smarterclayton

	rbd.SecretRef.Name = secretName
	rbd.SecretRef.Namespace = secretNamespace
} else {
	rbd.Keyring = keyring

@rootfs

rootfs Feb 5, 2018

Member

Please move the keyring validation here so we can still use the secret if it is given.

@rootfs

Member

rootfs commented Feb 5, 2018

thanks @madddi
one last comment and it is ready to go.

@madddi

Contributor

madddi commented Feb 7, 2018

@rootfs Sorry, secrets should still work, of course :-) Fixed.
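
To summarize the outcome, here is a hedged sketch of the agreed behaviour; the helper name, error text, and function layout are hypothetical and not the merged code. A given secret always wins, and the keyring path is validated only on the fallback branch:

```go
package rbd

import (
	"fmt"
	"regexp"

	v1 "k8s.io/api/core/v1"
)

// filePathRegex mirrors the tightened expression discussed above.
var filePathRegex = regexp.MustCompile("^(?:/[^/!;` ]+)+$")

// applyAuth is a hypothetical helper: when a user secret is given it is
// always used, and the keyring path is validated only when it is actually
// going to be written into the PV.
func applyAuth(rbd *v1.RBDPersistentVolumeSource, secretName, secretNamespace, keyring string) error {
	if secretName != "" {
		rbd.SecretRef = &v1.SecretReference{Name: secretName, Namespace: secretNamespace}
		return nil
	}
	if keyring != "" && !filePathRegex.MatchString(keyring) {
		return fmt.Errorf("keyring must be an absolute Linux file path: %q", keyring)
	}
	rbd.Keyring = keyring
	return nil
}
```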

@rootfs

Member

rootfs commented Feb 7, 2018

/lgtm
/approve

@k8s-ci-robot k8s-ci-robot added the lgtm label Feb 7, 2018

@k8s-merge-robot

Contributor

k8s-merge-robot commented Feb 7, 2018

/test all [submit-queue is verifying that this PR is safe to merge]

@k8s-merge-robot

Contributor

k8s-merge-robot commented Feb 7, 2018

Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here.

@k8s-merge-robot k8s-merge-robot merged commit 117780b into kubernetes:master Feb 7, 2018

12 of 13 checks passed

Submit Queue: Required GitHub CI test is not green: pull-kubernetes-e2e-gce
cla/linuxfoundation: madddi authorized
pull-kubernetes-bazel-build: Job succeeded.
pull-kubernetes-bazel-test: Job succeeded.
pull-kubernetes-cross: Skipped
pull-kubernetes-e2e-gce: Job succeeded.
pull-kubernetes-e2e-gce-device-plugin-gpu: Job succeeded.
pull-kubernetes-e2e-gke: Skipped
pull-kubernetes-e2e-kops-aws: Job succeeded.
pull-kubernetes-kubemark-e2e-gce: Job succeeded.
pull-kubernetes-node-e2e: Job succeeded.
pull-kubernetes-unit: Job succeeded.
pull-kubernetes-verify: Job succeeded.