
Skip unused volumes in VolumeManager #81163

Merged
merged 3 commits into from Aug 17, 2019

Conversation

@jsafrane (Member) commented Aug 8, 2019

DesiredStateOfWorldPopulator should skip a volume that is not used in any pod. "Used" means either mounted (via volumeMounts) or used as raw block device (via volumeDevices) in a container in the pod.

Especially when block feature is disabled, a block volume must not get into DesiredStateOfWorld, because it would be formatted and mounted there, potentially overwriting any existing raw block data.

/kind bug
Fixes #76044

Does this PR introduce a user-facing change?:

Volumes specified in a pod but not used in it are no longer unnecessarily formatted, mounted and reported in `node.status.volumesInUse`.

Change of Kubernetes behavior
Previously, all volumes in pod.spec.volumes were reported in node.status.volumesInUse and were mounted (or mapped as raw block devices) by kubelet. With this PR, only the volumes that are actually used are reported there and mounted/mapped. In most cases this won't make a difference (why would anyone add a volume to a pod and then not use it in any container?).

cc: @wongma7 @msau42 @bertinatto

@wongma7 (Contributor) commented Aug 8, 2019

My old attempt to fix this looks extremely convoluted... What happens in kubelet-side WaitForAttachAndMount since it also reads spec.volumes?

for _, podVolume := range pod.Spec.Volumes {

@mattjmcnaughton (Contributor) commented Aug 8, 2019

I believe that the pull-kubernetes-bazel-test failure is legit:

 --- FAIL: TestGetMountedVolumesForPodAndGetVolumesInUse (123.00s)
    volume_manager_test.go:80: Expected success: unmounted volumes=[vol1], unattached volumes=[vol1]: timed out waiting for the condition 

My guess is that the other failures are flakes... retesting them now to verify.

/test pull-kubernetes-integration
/test pull-kubernetes-kubemark-e2e-gce-big
/test pull-kubernetes-kubemark-e2e-gce-csi-serial

@wongma7 (Contributor) commented Aug 8, 2019

Yes, the test failure is legit. @jsafrane, volumemanager needs some kind of fix as well, since it gets the list of volumes to wait for from spec.volumes instead of from the desired state. I'm sure you can fix it in a nicer way than I did.

@msau42 (Member) commented Aug 9, 2019

/assign @jingxu97

@jsafrane jsafrane force-pushed the jsafrane:skip-unused-volumes branch from 7506c53 to fb4c781 Aug 9, 2019

@k8s-ci-robot k8s-ci-robot removed the approved label Aug 9, 2019

@jsafrane (Member, Author) commented Aug 9, 2019

> volumemanager needs some kind of fix as well since it gets list of volumes to wait for from spec.volumes instead of desired state. I'm sure you can fix it in a nicer way than I did.

You're right; I reworked getExpectedVolumes a bit. I used the simplest solution I could think of, though I'm not sure it's nicer.

In addition, I moved most of the code to processPodVolumes(). The code got a bit cleaner, but then I needed to pass a fake BlockVolumePathHandler to NewVolumeManager for unit tests, so the PR grew a lot...

/hold

TODO:

  • Add e2e test with unused volume
  • Squash the commits

@jsafrane jsafrane force-pushed the jsafrane:skip-unused-volumes branch from fb4c781 to 46589a0 Aug 9, 2019

@jsafrane (Member, Author) commented Aug 12, 2019

@wongma7, can you please take a look at the current status? I'll squash the commits if this is the way to go.

I could not find any elegant way to e2e-test this. From a storage test I can't easily get the global mount directory or the name to check in node.status.volumesInUse. I could compare volumesInUse before and after a test pod was created, but that 1) would imply [Serial], 2) is quite ugly, and 3) might not work for all plugins. Any ideas?

@wongma7 (Contributor) commented Aug 12, 2019

IMO it doesn't need to work for all plugins. Maybe just gcepd, e.g.

func waitForPDInVolumesInUse(

Initially I was thinking of sshing into the node and parsing the mounts. Then you could e.g. make sure that the test pod UID has 0 mounts (other than secret/token etc.).


for _, podVolume := range pod.Spec.Volumes {
expectedVolumes = append(expectedVolumes, podVolume.Name)
for _, container := range pod.Spec.Containers {

@wongma7 (Contributor) commented Aug 12, 2019

What about initContainers?

@jsafrane (Member, Author) commented Aug 13, 2019

Good catch!

I copied the code from VolumeManager and it observes only .containers, not .initContainers. Filed #81343.

I think the best would be to fix makeVolumeMap in #81343 first (to have a small patch), backport it everywhere and then move it to pkg/volume/utils that will be used also here.

@jsafrane (Member, Author) commented Aug 13, 2019

I wrote too early; the issue is reproducible only in this PR.

I extracted GetPodVolumeNames and used it consistently in VolumeManager and DSWP.

pvMode: v1.PersistentVolumeBlock,
podMode: v1.PersistentVolumeFilesystem,
expectMount: false,
expectError: true,

@wongma7 (Contributor) commented Aug 12, 2019

Where does the error come from? Is it a timeout, or won't expectedVolumes have len 0?

@jsafrane (Member, Author) commented Aug 13, 2019

It's not a timeout; DSWP stored the error into DSW, and WaitForAttachAndMount picked it up asynchronously within ~podAttachAndMountRetryInterval (300 ms).

@jsafrane jsafrane force-pushed the jsafrane:skip-unused-volumes branch from 46589a0 to a572ff0 Aug 13, 2019

@k8s-ci-robot k8s-ci-robot added size/XL and removed size/L labels Aug 13, 2019

@jsafrane (Member, Author) commented Aug 13, 2019

> Initially I was thinking to ssh into the node and parse the mounts. Then you could e.g. make sure that the test pod UID has 0 (non secret/token etc.)* mounts.

Ha, we mount unused volumes even into the pod directory. I can test that.

jsafrane added 3 commits Aug 15, 2019
Skip unused volumes in VolumeManager
DesiredStateOfWorldPopulator should skip a volume that is not used in any
pod. "Used" means either mounted (via volumeMounts) or used as raw block
device (via volumeDevices).

Especially when block feature is disabled, a block volume must not get into
DesiredStateOfWorld, because it would be formatted and mounted there.
Refactor makeMountsMap into GetPodVolumeNames
The function will be handy in subsequent patches. Also change custom maps
into sets.String.

@jsafrane jsafrane force-pushed the jsafrane:skip-unused-volumes branch from 7a9d142 to 2c79ffe Aug 15, 2019

@jsafrane (Member, Author) commented Aug 15, 2019

Added e2e test. Now it should be complete.

@jsafrane (Member, Author) commented Aug 15, 2019

e2e test added, commits squashed a bit
/hold cancel

@jsafrane (Member, Author) commented Aug 15, 2019

/retest

mounts = sets.NewString()
devices = sets.NewString()

addContainerVolumes(pod.Spec.Containers, mounts, devices)

@msau42 (Member) commented Aug 15, 2019

This may need to be updated again after #59484

@wongma7 (Contributor) commented Aug 15, 2019

/lgtm
beautiful!

@jsafrane (Member, Author) commented Aug 16, 2019

/assign @tallclair @derekwaynecarr
for approval from node point of view

Note that there is a small behavior change: before this PR, kubelet mounted (or mapped as raw block) everything in pod.spec.volumes; now it mounts/maps only what is listed in pod.spec.containers[*].volumeMounts / volumeDevices.

@derekwaynecarr (Member) commented Aug 16, 2019

kubelet changes lgtm.

/lgtm
/approve

@k8s-ci-robot (Contributor) commented Aug 16, 2019

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: derekwaynecarr, jsafrane

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@fejta-bot commented Aug 16, 2019

/retest
This bot automatically retries jobs that failed/flaked on approved PRs (send feedback to fejta).

Review the full test history for this PR.

Silence the bot with an /lgtm cancel or /hold comment for consistent failures.

@k8s-ci-robot k8s-ci-robot merged commit e319abf into kubernetes:master Aug 17, 2019

23 checks passed

cla/linuxfoundation: jsafrane authorized
pull-kubernetes-bazel-build: Job succeeded.
pull-kubernetes-bazel-test: Job succeeded.
pull-kubernetes-conformance-image-test: Skipped.
pull-kubernetes-cross: Skipped.
pull-kubernetes-dependencies: Job succeeded.
pull-kubernetes-e2e-gce: Job succeeded.
pull-kubernetes-e2e-gce-100-performance: Job succeeded.
pull-kubernetes-e2e-gce-csi-serial: Job succeeded.
pull-kubernetes-e2e-gce-device-plugin-gpu: Job succeeded.
pull-kubernetes-e2e-gce-iscsi: Skipped.
pull-kubernetes-e2e-gce-iscsi-serial: Skipped.
pull-kubernetes-e2e-gce-storage-slow: Job succeeded.
pull-kubernetes-godeps: Skipped.
pull-kubernetes-integration: Job succeeded.
pull-kubernetes-kubemark-e2e-gce-big: Job succeeded.
pull-kubernetes-local-e2e: Skipped.
pull-kubernetes-node-e2e: Job succeeded.
pull-kubernetes-node-e2e-containerd: Job succeeded.
pull-kubernetes-typecheck: Job succeeded.
pull-kubernetes-verify: Job succeeded.
pull-publishing-bot-validate: Skipped.
tide: In merge pool.

@k8s-ci-robot k8s-ci-robot added this to the v1.16 milestone Aug 17, 2019
