StorageOS configurable device directory and mount options #58816

Merged
merged 2 commits into kubernetes:master on Feb 23, 2018

@croomes
Contributor

croomes commented Jan 25, 2018

What this PR does / why we need it:
This allows StorageOS volumes to be mounted when the kubelet runs in a container and cannot use the default device location (/var/lib/storageos/volumes). With this PR, the node's device location is requested via the StorageOS API, falling back to the current behaviour if it is not configured. The node's device location can be supplied as an environment variable (DEVICE_DIR) to the StorageOS container. This is backwards-compatible, and no changes are needed to existing deployments.
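
A minimal sketch of the fallback described above, assuming only that the API's Controller object now carries a DeviceDir field (per this PR); the type and function names below are illustrative, not the plugin's actual code:

```go
package main

import "fmt"

const defaultDeviceDir = "/var/lib/storageos/volumes"

// Controller mirrors the relevant part of the StorageOS API object;
// the real type lives in the storageos go-api types package.
type Controller struct {
	// DeviceDir is populated on the node from the DEVICE_DIR environment
	// variable passed to the StorageOS container; empty means "not set".
	DeviceDir string
}

// deviceDir returns the node's device location, falling back to the
// current default when the API does not supply one.
func deviceDir(c *Controller) string {
	if c != nil && c.DeviceDir != "" {
		return c.DeviceDir
	}
	return defaultDeviceDir
}

func main() {
	fmt.Println(deviceDir(&Controller{DeviceDir: "/mnt/storageos/volumes"}))
	fmt.Println(deviceDir(nil)) // no API config: default location
}
```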

The PR also allows mount options to be set for StorageOS volumes, in the same way they're enabled for other volume plugins.
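
As a sketch of what that enables (not code from this PR), a PersistentVolume built with the k8s.io/api core/v1 types can now carry spec.mountOptions for a StorageOS volume; the volume names below are illustrative:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pv := &v1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "storageos-pv"},
		Spec: v1.PersistentVolumeSpec{
			Capacity: v1.ResourceList{
				v1.ResourceStorage: resource.MustParse("5Gi"),
			},
			AccessModes: []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
			// With this PR the StorageOS plugin honours mount options,
			// as other volume plugins already do.
			MountOptions: []string{"noatime"},
			PersistentVolumeSource: v1.PersistentVolumeSource{
				StorageOS: &v1.StorageOSPersistentVolumeSource{
					VolumeName:      "test-vol", // illustrative
					VolumeNamespace: "default",
					FSType:          "ext4",
				},
			},
		},
	}
	fmt.Println(pv.Spec.MountOptions)
}
```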

The StorageOS API dependency was updated to the latest version, with no functional changes besides adding the DeviceDir property to the Controller object.

There is also a small refactor of the loopback device handling code in storageos_utils.go to capture stderr output.
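
The shape of that refactor, as a standalone sketch (note the merged code ultimately routes through the mount.Exec interface again, whose Run already returns combined output; flag usage mirrors the diffs further down):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// attachLoopDevice runs losetup directly via os/exec so that stderr is
// captured and folded into the returned error, making failures
// diagnosable from the kubelet log.
func attachLoopDevice(path string) (string, error) {
	cmd := exec.Command("losetup", "-f", "-P", "--show", path)
	out, err := cmd.CombinedOutput() // stdout and stderr together
	if err != nil {
		return "", fmt.Errorf("losetup %s failed: %v: %s", path, err, string(out))
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	dev, err := attachLoopDevice("/var/lib/storageos/volumes/vol1")
	fmt.Println(dev, err)
}
```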

Release note:

StorageOS volume plugin updated to support mount options and environments where the kubelet runs in a container and the device location should be specified.

Not sure why godep changed the comments of unrelated packages in Godeps.json...

/sig storage

```diff
@@ -323,27 +346,30 @@ func getLoopDevice(path string, exec mount.Exec) (string, error) {
 	}
 	args := []string{"-j", path}
-	out, err := exec.Run(losetupPath, args...)
+	cmd := exec.Command(losetupPath, args...)
```

@jsafrane

jsafrane Jan 29, 2018

Member

Please add a comment that you intentionally run this where the kubelet is (e.g. in the kubelet's container).

@croomes

croomes Jan 29, 2018

Contributor

@jsafrane thanks for the review, but a quick question: I noticed that I reverted the VolumeHost.GetExec() change you made without fully understanding it.

While we expect the mount utilities (incl. losetup) to run where the kubelet runs, we don't require it. We just need the device and mount to be available in the correct place, where the kubelet can see them.

Unless you can think of any gotchas or I'm misunderstanding, I'll revert this bit.

Also FYI: the Block Volume support loop device code seems to have used this as a base, and doesn't use VolumeHost.GetExec(). Not sure if this was intentional, and if it was, do I need the same here? https://github.com/kubernetes/kubernetes/blob/master/pkg/volume/util/util_linux.go. Maybe @mtanino can advise?

@jsafrane

jsafrane Jan 30, 2018

Member

In 1.9 there is a possibility to run mount utilities (mount.nfs, iscsiadm, mkfs.btrfs, ..., losetup) in a dedicated container instead of on the host (which can be a minimal one like CoreOS) or in the container where the kubelet runs (which can hold just the kubelet). The setup is quite elaborate and is intended only for testing in our e2e tests, which run a quite minimal GCI image (though it does have losetup). VolumeHost.GetExec() is used to execute commands in these containers with mount utilities instead of on the host. You should probably use this interface for losetup too, in case someone runs a really minimal OS without it.
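
A sketch of what that looks like from a plugin's side; the Run signature matches the mount.Exec interface visible in this diff, while the output parsing is simplified for illustration:

```go
package volume

import (
	"fmt"
	"strings"
)

// Exec abstracts where commands run: on the host, in the kubelet's
// container, or in a dedicated mount-utility container. Plugins obtain
// an implementation via VolumeHost.GetExec() rather than calling
// os/exec directly.
type Exec interface {
	Run(cmd string, args ...string) ([]byte, error)
}

// getLoopDevice asks losetup which loop device backs path, letting the
// injected Exec decide where losetup actually executes.
func getLoopDevice(path string, exec Exec) (string, error) {
	out, err := exec.Run("losetup", "-j", path)
	if err != nil {
		return "", fmt.Errorf("losetup -j %s failed: %v: %s", path, err, string(out))
	}
	// losetup -j prints "/dev/loopN: ..." for each match; take the first.
	fields := strings.SplitN(strings.TrimSpace(string(out)), ":", 2)
	if fields[0] == "" {
		return "", fmt.Errorf("no loop device found for %s", path)
	}
	return fields[0], nil
}
```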

In addition, it would be great if you could prepare container(s) with the server parts of StorageOS, plus e2e tests that deploy them and test the plugin against them. This way, we could test the plugin and make sure there are no regressions as it evolves.

@croomes

croomes Jan 30, 2018

Contributor

Thanks, I'll revert this bit. We're working on the e2e tests separately. I'm hoping we can get a PR for them in the next few weeks.

@jsafrane

jsafrane Feb 2, 2018

Member

I still see you using exec.Command directly; did you plan to revert back to the mount.Exec interface?

```diff
 func makeLoopDevice(path string) (string, error) {
 	args := []string{"-f", "-P", "--show", path}
-	out, err := exec.Run(losetupPath, args...)
+	cmd := exec.Command(losetupPath, args...)
```

@jsafrane

jsafrane Jan 29, 2018

Member

Please add a comment that you intentionally run this where the kubelet is (e.g. in the kubelet's container).

```diff
 	args := []string{"-d", device}
-	out, err := exec.Run(losetupPath, args...)
+	cmd := exec.Command(losetupPath, args...)
```

@jsafrane

jsafrane Jan 29, 2018

Member

Please add a comment that you intentionally run this where the kubelet is (e.g. in the kubelet's container).

@jsafrane

Member

jsafrane commented Jan 29, 2018

/ok-to-test

@jsafrane

Member

jsafrane commented Jan 31, 2018

/retest

@jsafrane

Member

jsafrane commented Jan 31, 2018

flake: #58975

@cblecker

Member

cblecker commented Jan 31, 2018

/retest

@croomes

Contributor

croomes commented Feb 6, 2018

@jsafrane reverted the exec changes, but just noticed the Godeps conflict, will resolve tomorrow.

croomes added some commits Feb 6, 2018

StorageOS support for containerized kubelet and mount options
This change queries the StorageOS API to retrieve the volume
device dir when a volume is attached, rather than relying on
the default /var/lib/storageos/volumes directory.  This allows
all or some nodes to have kubelet running in a container with
the StorageOS volume path mounted in, which is required for
Azure ACS support.

Volume mount options are also supported.

@croomes

Contributor

croomes commented Feb 7, 2018

@jsafrane I think this is good to go now, PTAL - thanks

@jsafrane

Member

jsafrane commented Feb 9, 2018

/lgtm

@jsafrane

Member

jsafrane commented Feb 9, 2018

/assign @thockin
for vendor approval

@jsafrane

Member

jsafrane commented Feb 12, 2018

/approve no-issue
/retest

@croomes

Contributor

croomes commented Feb 12, 2018

/test pull-kubernetes-e2e-kubeadm-gce-canary

@croomes

Contributor

croomes commented Feb 13, 2018

/test pull-kubernetes-e2e-kubeadm-gce-canary

@croomes

Contributor

croomes commented Feb 15, 2018

/test pull-kubernetes-e2e-kubeadm-gce-canary

@k8s-ci-robot

Contributor

k8s-ci-robot commented Feb 15, 2018

@croomes: The following test failed, say /retest to rerun them all:

| Test name | Commit | Details | Rerun command |
| --- | --- | --- | --- |
| pull-kubernetes-e2e-kubeadm-gce-canary | 1a91403 | link | /test pull-kubernetes-e2e-kubeadm-gce-canary |

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@croomes

Contributor

croomes commented Feb 19, 2018

/test pull-kubernetes-e2e-kubeadm-gce-canary

@thockin

Member

thockin commented Feb 23, 2018

In the future, please file bugs so we can xref them in PRs.

/lgtm
/approve no-issue

@k8s-ci-robot

Contributor

k8s-ci-robot commented Feb 23, 2018

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: croomes, jsafrane, thockin

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-merge-robot

Contributor

k8s-merge-robot commented Feb 23, 2018

/test all [submit-queue is verifying that this PR is safe to merge]

@k8s-merge-robot

Contributor

k8s-merge-robot commented Feb 23, 2018

Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here.

k8s-merge-robot merged commit 9a75b4d into kubernetes:master on Feb 23, 2018

12 of 14 checks passed

pull-kubernetes-e2e-kubeadm-gce-canary: Job failed.
Submit Queue: Required Github CI test is not green: pull-kubernetes-verify
cla/linuxfoundation: croomes authorized
pull-kubernetes-bazel-build: Job succeeded.
pull-kubernetes-bazel-test: Job succeeded.
pull-kubernetes-cross: Job succeeded.
pull-kubernetes-e2e-gce: Job succeeded.
pull-kubernetes-e2e-gce-device-plugin-gpu: Job succeeded.
pull-kubernetes-e2e-gke: Skipped
pull-kubernetes-e2e-kops-aws: Job succeeded.
pull-kubernetes-kubemark-e2e-gce: Job succeeded.
pull-kubernetes-node-e2e: Job succeeded.
pull-kubernetes-unit: Job succeeded.
pull-kubernetes-verify: Job succeeded.
@croomes

Contributor

croomes commented Feb 27, 2018

/status approved-for-milestone

@k8s-ci-robot

Contributor

k8s-ci-robot commented Feb 27, 2018

You must be a member of the kubernetes/kubernetes-milestone-maintainers github team to add status labels.

@croomes

Contributor

croomes commented Feb 27, 2018

@saad-ali are you able to help get this into the 1.10 release? I was hoping it would have gone in automatically...

@jsafrane

Member

jsafrane commented Feb 27, 2018

Don't worry, it's merged into master, so it will be part of 1.10. The release branch splits from master in the upcoming week(s).

@croomes

Contributor

croomes commented Feb 27, 2018

Fantastic, thanks for letting me know!
