
BUILD-381: require CSI Volumes using the Shared Resources driver to be specified as readOnly == true #84

Merged
merged 5 commits into from Jan 18, 2022

Conversation

gabemontero
Contributor

Also fixed a doc update that was missed when I changed the controller watch to listen only to the namespaces referenced by shares, instead of listening to all namespaces and filtering out certain openshift system namespaces.

/assign @coreydaley
There is a lot of deleted code here. I would also suggest appending ?w=1 to the end of the GitHub URL in your browser during code review to filter out whitespace-only differences

for PR review

@akram FYI - yeah, a fair amount of e2e reduction was needed, since we can no longer support shares mounted off of other shares now that we require readOnly to be true

@gabemontero
Contributor Author

@gabemontero
Contributor Author

/assign @rolfedh

for docs approve label
I previously updated https://docs.google.com/document/d/1F3VFkiLUJbsUkFJcjnA2oyROYTI5RRlTJYl4ZWQoNls/edit#heading=h.zdumv4gow9cw
to account for these changes

@gabemontero
Contributor Author

/assign @prietyc123

for qe approved label

@gabemontero
Contributor Author

/label px-approved

@openshift-ci openshift-ci bot added the px-approved Signifies that Product Support has signed off on this PR label Jan 14, 2022
@gabemontero
Contributor Author

by code review URL I mean https://github.com/openshift/csi-driver-shared-resource/pull/84/files?w=1 @coreydaley

or something similar for each commit if you want to break it up that way @coreydaley

@openshift-ci
Contributor

openshift-ci bot commented Jan 14, 2022

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: gabemontero

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jan 14, 2022
@rolfedh

rolfedh commented Jan 14, 2022

/label docs-approved

@openshift-ci openshift-ci bot added the docs-approved Signifies that Docs has signed off on this PR label Jan 14, 2022
@gabemontero
Contributor Author

more aws pain

/retest

@prietyc123
Contributor

/label qe-approved

@openshift-ci openshift-ci bot added the qe-approved Signifies that QE has signed off on this PR label Jan 15, 2022
in the system will have their content corresponding to those `Secrets` and `ConfigMaps`.

So while the driver requires all volume references in the various `Pods` are read only, so that
Contributor

Can we rephrase this one to be more understandable?

Contributor Author

yep part of next push

docs/csi.md Outdated
- Also, mounting of one `SharedConfigMap` OR `SharedSecret` off of a subdirectory of another `SharedConfigMap` OR `SharedSecret` is only supported with read-write `Volumes`.
- the `ReadOnly` field is required to be set to 'true'. This follows conventions introduced in upstream Kubernetes CSI Drivers to facilitate proper SELinux labelling. What occurs is that
this driver will return a read-write linux file system to the kubelet, so that CRI-O can apply the correct SELinux labels on the file system (CRI-O would not be able to update the SELinux labels on a read only file system
after it is created), but the kubelet still makes sure that the file system later exposed to the consuming (which sits on top of the file system returned by this repository's driver) is read only.
Contributor

s/consuming /consuming pod/

@@ -95,318 +95,282 @@ func TestCreateHostPathVolumeBadAccessType(t *testing.T) {
}

func TestCreateDeleteConfigMap(t *testing.T) {
readOnly := []bool{true, false}
Contributor

In my attempt to change these tests, I was wondering whether it wouldn't be interesting to keep this for loop with two attempts, so we are sure that a creation, then a deletion, then a re-creation all work well.
That would change the for loop to use an i++ counter instead, but it would keep the test valid and valuable; wdyt?

Contributor Author

Sorry, no, I don't think such a permutation has value for any of these tests. The volume won't even be provisioned when the kubelet asks for it if readOnly is not set to true with these changes. That is the requirement we had to establish, following the upstream pattern, in order to get the SELinux labels correct.

Contributor Author

Note I have a unit test which makes sure we fail the provision if readOnly is not set to true.
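The behavior described in this thread can be sketched roughly as follows. This is an illustrative assumption, not the driver's actual API: the function and error names here are hypothetical stand-ins modeling the requirement that provisioning fails unless the volume attributes set readOnly to true.

```go
package main

import (
	"errors"
	"fmt"
)

// Hypothetical sketch: models the check discussed in this review, where a
// publish/provision request is rejected outright unless readOnly is true.
var errReadOnlyRequired = errors.New("the Shared Resource CSI driver requires a read-only volume; set readOnly to true")

// validateReadOnly rejects any request whose volumeAttributes do not
// explicitly set readOnly to "true"; the volume is never provisioned.
func validateReadOnly(volumeAttributes map[string]string) error {
	if volumeAttributes["readOnly"] != "true" {
		return errReadOnlyRequired
	}
	return nil
}

func main() {
	fmt.Println(validateReadOnly(map[string]string{"readOnly": "true"}))
	fmt.Println(validateReadOnly(map[string]string{"readOnly": "false"}))
}
```

Under this sketch, both a missing attribute and an explicit "false" fail the same way, which matches the symptom QE is asked to verify (a failed-to-mount error on the pod).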

@@ -50,6 +50,8 @@ type hostPath struct {
ephemeral bool
maxVolumesPerNode int64

testSoNoMounter bool
Contributor

Shouldn't we mock/stub this interface instead of adding this parameter, especially to a public method?
Also, from reading the test comments, it seems we would need sudo privileges to run the tests, which is a bit complex.

Contributor Author

Yes, I took a shortcut by not creating a mock mounter for the unit tests.

I'll switch to creating a mock mounter and remove this.

Contributor Author

fyi k8s over at https://github.com/kubernetes/utils/tree/master/mount does not have a "fake" mounter

Contributor Author

and yes, with all this we are "avoiding" the need for the sudo privileges that are required to call the "real" mounter

Contributor

@akram Jan 18, 2022

I am glad to see that now we have something more elegant than k8s folks 😄
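A hand-rolled test double along the lines agreed to above might look like this sketch. The interface and type names are illustrative assumptions (the real driver uses mount.Interface from k8s.io/utils/mount, which has more methods); the point is that recording mounts in memory sidesteps the sudo privileges a real bind mount would need in unit tests.

```go
package main

import "fmt"

// simpleMounter is a hypothetical, trimmed-down stand-in for the mount
// interface the driver depends on; only the calls the pruner path needs.
type simpleMounter interface {
	Mount(source, target, fstype string, options []string) error
	Unmount(target string) error
}

// fakeMounter records mounts in memory instead of touching the host,
// so unit tests can run without elevated privileges.
type fakeMounter struct {
	mounted map[string]string // target -> source
}

func newFakeMounter() *fakeMounter {
	return &fakeMounter{mounted: map[string]string{}}
}

func (f *fakeMounter) Mount(source, target, fstype string, options []string) error {
	f.mounted[target] = source
	return nil
}

func (f *fakeMounter) Unmount(target string) error {
	if _, ok := f.mounted[target]; !ok {
		return fmt.Errorf("fake mounter: %s is not mounted", target)
	}
	delete(f.mounted, target)
	return nil
}

func main() {
	var m simpleMounter = newFakeMounter()
	fmt.Println(m.Mount("/run/csi/src", "/run/csi/anchor", "", nil))
	fmt.Println(m.Unmount("/run/csi/anchor"))
	fmt.Println(m.Unmount("/run/csi/anchor")) // second unmount fails
}
```

The production code then takes the interface rather than a concrete mounter, and the test injects the fake; no testSoNoMounter flag is needed on the public type.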

klog.V(2).Infof("pruner: issue unmounting for volume %s mount id %s: %s", hpv.GetVolID(), hpv.GetVolPathAnchorDir(), err.Error())
} else {
klog.V(2).Infof("pruner: successfully unmounted volume %s mount id %s", hpv.GetVolID(), hpv.GetVolPathAnchorDir())
if mounter != nil {
Contributor

So this is only needed when we are in unit tests? The mounter here is only instantiated in test mode, and so is the clean-up. I would suggest adding this logic only in the unit test methods.

err = mounter.Unmount(hpv.GetVolPathAnchorDir())

We only need a reference to the hpv, and we could create the mounter in the unit test itself. wdyt?

Contributor Author

see my comment above ... I'll create a mock/test version of the mount interface

@gabemontero
Contributor Author

ok thanks for the review @akram - I've either responded in comments or made the changes in a separate commit for this round of your review ... PTAL ... I'll squash commits when we reach a steady state with your review

@akram
Contributor

akram commented Jan 18, 2022

That looks good now @gabemontero ! You can squash then, so I can lgtm that.

@gabemontero
Contributor Author

That looks good now @gabemontero ! You can squash then, so I can lgtm that.

thanks @akram

the akram-1 commit has been squashed into the code/doc/unit/e2e commits appropriately

@gabemontero
Contributor Author

bump @prietyc123

for qe approved label

the scenario here is to confirm that if the pod trying to mount the shared resource volume does not set the volumeAttribute readOnly to true, it now gets a failed-to-mount error, since we require that attribute to be set

@akram
Contributor

akram commented Jan 18, 2022

/lgtm

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Jan 18, 2022
`SharedConfigMaps`. Then, when the actual `Secrets` or `ConfigMaps` referenced by those
`SharedSecrets` and `SharedConfigMaps` change, each volume across all the active `Pods`
in the system will have their content corresponding to those `Secrets` and `ConfigMaps`.
Member

Suggested change
in the system will have their content corresponding to those `Secrets` and `ConfigMaps`.
in the system will have their content corresponding to those `Secrets` and `ConfigMaps` updated.

with the belief that if they want resource refreshing disabled, it should be disabled for everyone.

For disabling at the volume specific level, the `volumeAttributes` field in should have an entry with the
Member

Suggested change
For disabling at the volume specific level, the `volumeAttributes` field in should have an entry with the
For disabling at the volume specific level, the `volumeAttributes` field should have an entry with the

docs/csi.md Outdated
this driver will return a read-write linux file system to the kubelet, so that CRI-O can apply the correct SELinux labels on the file system (CRI-O would not be able to update the SELinux labels on a read only file system
after it is created), but the kubelet still makes sure that the file system later exposed to the consuming pod (which sits on top of the file system returned by this repository's driver) is read only.
If this driver allowed both read-only and read-write, there is in fact no way to provide differing support that still allows for correct SELinux labelling for each).
- Also, mounting of one `SharedConfigMap` OR `SharedSecret` off of a subdirectory of another `SharedConfigMap` OR `SharedSecret` is *NOT* supported the driver only supports read-only `Volumes`.
Member

Suggested change
- Also, mounting of one `SharedConfigMap` OR `SharedSecret` off of a subdirectory of another `SharedConfigMap` OR `SharedSecret` is *NOT* supported the driver only supports read-only `Volumes`.
- Also, mounting of one `SharedConfigMap` OR `SharedSecret` off of a subdirectory of another `SharedConfigMap` OR `SharedSecret` is *NOT* supported. The driver only supports read-only `Volumes`.

if hp.mounter != nil {
err = hp.mounter.Unmount(hpv.GetVolPathAnchorDir())
if err != nil {
klog.V(2).Infof("pruner: issue unmounting for volume %s mount id %s: %s", hpv.GetVolID(), hpv.GetVolPathAnchorDir(), err.Error())
Member

I think that this should be a warning, not an info.

@openshift-ci openshift-ci bot removed the lgtm Indicates that a PR is ready to be merged. label Jan 18, 2022
@akram
Contributor

akram commented Jan 18, 2022

/hold

@openshift-ci openshift-ci bot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Jan 18, 2022
@gabemontero
Contributor Author

updates made @coreydaley - went ahead and squashed into the existing code/doc commits given the degree of change

/hold cancel

@openshift-ci openshift-ci bot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Jan 18, 2022
@coreydaley
Member

/lgtm

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Jan 18, 2022
@gabemontero
Contributor Author

install pain no refresh e2e

/retest

@openshift-ci
Contributor

openshift-ci bot commented Jan 18, 2022

@gabemontero: all tests passed!

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@openshift-merge-robot openshift-merge-robot merged commit a19732f into openshift:master Jan 18, 2022
@gabemontero gabemontero deleted the require-read-only branch January 18, 2022 17:54