
flexvolume: rework for the new volume controller design #26926

Closed
wants to merge 1 commit

Conversation

@mcluseau (Contributor) commented Jun 7, 2016

* This commit redesigns the flexvolume plugin and exposes changes to the internal storage API including the Attacher interface.
* WARNING: The flexvolume plugin API has changed with this release. Existing Flex drivers will have to be modified to work with the new interface (see Flex volume documentation for details).
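For orientation, a Flex driver is an out-of-tree executable that the kubelet invokes with a sub-command and JSON-encoded options, and that replies with a JSON status on stdout. The sketch below is a hypothetical stand-in for such a driver written in Go; the command names and status fields follow the general Flex convention, but the exact contract of the reworked interface is defined by the Flex volume documentation, not by this sketch.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON status a Flex driver reports on stdout.
// Field names are the conventional ones; check the Flex docs for the
// authoritative schema of the new interface.
type driverStatus struct {
	Status  string `json:"status"`            // "Success" or "Failure"
	Message string `json:"message,omitempty"` // human-readable detail
}

// handle dispatches one driver invocation. rawOpts is the JSON blob of
// options the kubelet passes (this is where secret keys/values would
// arrive for the secretRef feature discussed below).
func handle(cmd, rawOpts string) driverStatus {
	var opts map[string]string
	if rawOpts != "" {
		if err := json.Unmarshal([]byte(rawOpts), &opts); err != nil {
			return driverStatus{Status: "Failure", Message: err.Error()}
		}
	}
	switch cmd {
	case "init", "attach", "detach", "mount", "unmount":
		// A real driver would do the actual work here.
		return driverStatus{Status: "Success"}
	default:
		return driverStatus{Status: "Failure", Message: "unsupported command: " + cmd}
	}
}

func main() {
	cmd, rawOpts := "init", ""
	if len(os.Args) > 1 {
		cmd = os.Args[1]
	}
	if len(os.Args) > 2 {
		rawOpts = os.Args[2]
	}
	out, _ := json.Marshal(handle(cmd, rawOpts))
	fmt.Println(string(out))
}
```

The kubelet would invoke such a binary as, e.g., `driver mount <json-options>` and parse the single JSON object it prints.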


@googlebot googlebot added the cla: yes label Jun 7, 2016
@k8s-bot commented Jun 7, 2016

Can one of the admins verify that this patch is reasonable to test? If so, please reply "ok to test".
(Note: "add to whitelist" is no longer supported. Please update configurations in kubernetes/test-infra/jenkins/job-configs/kubernetes-jenkins-pull instead.)

This message may repeat a few times in short succession due to jenkinsci/ghprb-plugin#292. Sorry.

Otherwise, if this message is too spammy, please complain to ixdy.

2 similar comments

@pmorie (Member) commented Jun 7, 2016

@k8s-bot ok to test

@saad-ali saad-ali self-assigned this Jun 7, 2016
@saad-ali (Member) commented Jun 7, 2016

Thanks for jumping on this @MikaelCluseau
I'll try to take a look at this tmr.

@mcluseau mcluseau force-pushed the mcluseau:wip-issue-20262 branch from e7d5004 to 8d1a332 Jun 7, 2016
@mcluseau (Contributor, author) commented Jun 7, 2016

A few notes:

  • I'm splitting commits into small change steps (in the hope it helps review);
  • we need the pod namespace to get the secrets from the secretRef; maybe the namespace should be in volume.Spec?
  • I don't know what to do with the hostname for now.
@k8s-github-robot k8s-github-robot added size/L and removed size/M labels Jun 7, 2016
@pmorie (Member) commented Jun 7, 2016

we need the pod namespace to get the secrets from the secretRef, maybe the namespace should be in volume.Spec?

That's a bit of a nasty tangle. You'll need to know the namespace this is being attached for.

@pmorie (Member) commented Jun 7, 2016

Let me elaborate on my last comment:

  1. We currently do not allow cross-namespace references to secret
  2. PersistentVolumes are cluster-scoped (ie, non-namespaced)
  3. I would like to avoid exposing namespace into volume spec if at all possible
@mcluseau (Contributor, author) commented Jun 7, 2016

On 06/08/2016 03:33 AM, Paul Morie wrote:

Let me elaborate on my last comment:

  1. We currently do not allow cross-namespace references to secret
  2. PersistentVolumes are cluster-scoped (ie, non-namespaced)
  3. I would like to avoid exposing namespace into volume spec if at
    all possible

I understand, but FlexVolume currently has this feature of putting a
secret's keys/values in the options sent to its plugin, and you cannot
get a secret without its namespace. I see the following possibilities:

  1. add the namespace to volume.Spec (not what you want);
  2. add the namespace to the parameters of the Attacher interface
    (probably not what you want too);
  3. add the namespace to the SecretRef of the FlexVolume before calling
    the Attach or Mount (probably meaning specific code in a generic
    part + a hidden field in an API object);
  4. discard this feature (breaking the API, but it's tagged alpha anyway).

History has shown that I don't always have all the alternatives in mind,
but for now it looks like a "choose your evil".

@chakri-nelluri (Contributor) commented Jun 8, 2016

Thanks for jumping in @MikaelCluseau. I was on a short vacation and am just catching up on GitHub. Since you already started this exercise, I can help you review and verify it.

@saad-ali (Member) commented Jun 8, 2016

Plugins inferring namespace from pod to fetch secrets is problematic.

Between the options @MikaelCluseau listed:

1 add the namespace to volume.Spec (not what you want)

Agreed that sticking the namespace in volume.Spec is not ideal, but volume.Spec is entirely internal so it doesn't require any API changes which is nice.

2 add the namespace to the parameters of the Attacher interface (probably not what you want too);

This is equally bad/good as 1.

3 add the namespace to the SecretRef of the FlexVolume before calling the Attach or Mount (probably meaning specific code in a generic part + a hidden field in an API object);

This would work but it is super hacky. I say no.

4 discard this feature (breaking the API, but it's tagged alpha anyway).

Discarding the feature is an option for Flex for the reason you stated. But other plugins like rbd have the same issue, so we need to solve it.

There is a 5th option:

Modify the API to have SecretRef be a *ObjectReference type instead of *LocalObjectReference. *ObjectReference contains the namespace as well as the name. This seems like the "correct" solution--when you reference a secret you must provide the full reference (including the namespace). But it requires backwards-compatibility considerations. Let's discuss this in the 9 AM PST sig-storage meeting. I'll add it to the agenda.
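To make the trade-off concrete, the two reference shapes under discussion can be sketched as plain Go structs. These are local stand-ins whose fields mirror the upstream API types, not imports of the real Kubernetes packages:

```go
package main

import "fmt"

// LocalObjectReference can only name a secret in the *same* namespace
// as the enclosing object; the namespace is implied by context.
type LocalObjectReference struct {
	Name string
}

// ObjectReference carries the namespace explicitly, so a cluster-scoped
// PersistentVolume could point at a secret anywhere (the "5th option").
type ObjectReference struct {
	Namespace string
	Name      string
}

// resolve shows why the pod namespace must somehow reach the volume
// plugin (options 1 and 2 in the thread): a local reference is only
// meaningful once combined with the pod's namespace.
func resolve(podNamespace string, ref LocalObjectReference) ObjectReference {
	return ObjectReference{Namespace: podNamespace, Name: ref.Name}
}

func main() {
	full := resolve("default", LocalObjectReference{Name: "ceph-secret"})
	fmt.Printf("%s/%s\n", full.Namespace, full.Name)
}
```

With LocalObjectReference, the resolution step above has to happen somewhere with access to the pod; with ObjectReference, the user supplies the namespace but the API must then validate it against the pod's namespace.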

@mcluseau (Contributor, author) commented Jun 8, 2016

On 06/09/2016 08:40 AM, Chakravarthy Nelluri wrote:

I can help you review and verify it.

Thanks, that's good for me because I'm an occasional contributor and
the last thing I want is to introduce bad code.

@mcluseau (Contributor, author) commented Jun 8, 2016

On 06/09/2016 08:48 AM, Saad Ali wrote:

Between the options @MikaelCluseau listed:

 1. add the namespace to volume.Spec (not what you want)

Agreed that sticking the namespace in volume.Spec is not ideal, but
volume.Spec is entirely internal so it doesn't require any API changes,
which is nice.

[...]

There is a 5th option:

Modify the API to have SecretRef be a *ObjectReference type
instead of *LocalObjectReference. *ObjectReference contains the
namespace as well as the name. This seems like the "correct"
solution--when you reference a secret you must provide the full
reference (including the namespace). But it requires backwards
compatibility considerations. Let's discuss this in the 9 AM PST
sig-storage meeting. I'll add it to the agenda.

I agree it looks like the correct way. The only drawback I can see here
is that users will have to specify the namespace, and that namespace
will be constrained to the pod's namespace. That said, the API server
can do the grunt work here (fill it with the pod's namespace by default
and verify the constraint). A practical path could be to add the
namespace to volume.Spec as a first step, and move to the 5th option
afterwards.

@saad-ali (Member) commented Jun 8, 2016

I spoke with @thockin offline. He made a good point: because these secret references are inside a volume definition that is ultimately referenced by a Pod, LocalObjectReference is the correct way to reference the secret. The user should not have to respecify the namespace; it should be the same namespace as the pod. Also, the user should not be able to reference a secret from a different namespace than the pod's. I agree with these points.

Therefore, I say let's go with #2 for now. @pmorie, speak up if you disagree.

@saad-ali (Member) commented Jun 8, 2016

I agree it looks like the correct way. The only drawback I can see here
is that users will have to specify the namespace, and that namespace
will be constrained to the pod's namespace. That said, the API server
can do the grunt work here (fill it with the pod's namespace by default
and verify the constraint). A practical path could be to add the
namespace to volume.Spec as a first step, and move to the 5th option
afterwards.

Just saw your comment @MikaelCluseau. Yes, we are in agreement. Either 1 or 2 are fine with me. If @pmorie is ok with it, could you implement one of these approaches?

@chakri-nelluri (Contributor) commented Jun 8, 2016

Thanks @saad-ali, I like it too. The final consumer is the Pod, and it works for our flex volume use case as well.

@mcluseau (Contributor, author) commented Jun 8, 2016

On 06/09/2016 10:04 AM, Saad Ali wrote:

Just saw your comment @MikaelCluseau. Yes, we are in agreement.
Either 1 or 2 are fine with me. If @pmorie is ok with it, could you
implement one of these approaches?

Sure, I'll do whatever is needed to get at least FlexVolume to this
point. I prefer the 1st approach though, because it specifies
information available to plugins in one central place.

@saad-ali (Member) commented Jun 8, 2016

Sure, I'll do whatever is needed to get at least FlexVolume to this point. I prefer the 1st approach though, because it specifies information available to plugins in one central place.

Awesome, 1 is fine with me. Let's go with that. If @pmorie has beef, we'll deal with it 👊 😆

@mcluseau (Contributor, author) commented Jun 9, 2016

On 06/09/2016 10:34 AM, Saad Ali wrote:

Sure, I'll do whatever is needed to get at least FlexVolume to
this point. I prefer the 1st approach though, because it
specifies information available to plugins in one central place.

Awesome, 1 is fine with me. Let's go with that. If @pmorie
has beef, we'll deal with it 👊 😆

okay ;-) I'm adding PodNamespace and PodName, as the pod's name was used
in error logging in FlexVolume and there's a strong link with the Pod
anyway (may be useful for other plugins too).

@mcluseau (Contributor, author) commented Jun 9, 2016

On 06/09/2016 02:00 PM, Mikaël Cluseau wrote:

okay ;-) I'm adding PodNamespace and PodName, as the pod's name was
used in error logging in FlexVolume and there's a strong link with the
Pod anyway (may be useful for other plugins too).

and the PodUID... because of

func (f *flexVolumeMounter) GetDeviceMountPath(spec *volume.Spec) string {
	name := f.driverName
	return f.plugin.host.GetPodVolumeDir(f.podUID, utilstrings.EscapeQualifiedNameForDisk(name), spec.Name())
}
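The snippet above is why the pod UID has to travel with the other pod fields: the mount path is keyed by it. As a rough, hypothetical stand-in for what `host.GetPodVolumeDir` computes (the exact layout is kubelet-internal; this only mirrors the conventional `/var/lib/kubelet/pods/<uid>/volumes/<plugin>/<volume>` shape):

```go
package main

import (
	"fmt"
	"path/filepath"
)

// podVolumeDir is an illustrative stand-in for host.GetPodVolumeDir.
// It shows that the per-pod volume directory is keyed by the pod UID,
// so a plugin cannot build its mount path without it. Not the
// authoritative kubelet implementation.
func podVolumeDir(rootDir, podUID, pluginName, volumeName string) string {
	return filepath.Join(rootDir, "pods", podUID, "volumes", pluginName, volumeName)
}

func main() {
	fmt.Println(podVolumeDir("/var/lib/kubelet", "1234-abcd", "example.com~flex", "myvol"))
}
```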

@mcluseau mcluseau force-pushed the mcluseau:wip-issue-20262 branch 2 times, most recently from 4a9b0cd to ee932f4 Jun 9, 2016
@mcluseau mcluseau force-pushed the mcluseau:wip-issue-20262 branch from 94b4935 to a4a7bae Nov 29, 2016
@k8s-ci-robot (Contributor) commented Nov 29, 2016

Thanks for your pull request. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

📝 Please follow instructions at https://github.com/kubernetes/kubernetes/wiki/CLA-FAQ to sign the CLA.

Once you've signed, please reply here (e.g. "I signed it!") and we'll verify. Thanks.


If you have questions or suggestions related to this bot's behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.
@k8s-ci-robot (Contributor) commented Nov 29, 2016

Jenkins GCE e2e failed for commit a4a7bae. Full PR test history.

The magic incantation to run this job again is @k8s-bot cvm gce e2e test this. Please help us cut down flakes by linking to an open flake issue when you hit one in your PR.

@k8s-ci-robot (Contributor) commented Nov 29, 2016

Jenkins GCI GKE smoke e2e failed for commit a4a7bae. Full PR test history.

The magic incantation to run this job again is @k8s-bot gci gke e2e test this. Please help us cut down flakes by linking to an open flake issue when you hit one in your PR.

@k8s-ci-robot (Contributor) commented Nov 29, 2016

Jenkins kops AWS e2e failed for commit a4a7bae. Full PR test history.

The magic incantation to run this job again is @k8s-bot kops aws e2e test this. Please help us cut down flakes by linking to an open flake issue when you hit one in your PR.

@k8s-ci-robot (Contributor) commented Nov 29, 2016

Jenkins unit/integration failed for commit a4a7bae. Full PR test history.

The magic incantation to run this job again is @k8s-bot unit test this. Please help us cut down flakes by linking to an open flake issue when you hit one in your PR.

@k8s-ci-robot (Contributor) commented Nov 29, 2016

Jenkins GCI GCE e2e failed for commit a4a7bae. Full PR test history.

The magic incantation to run this job again is @k8s-bot gci gce e2e test this. Please help us cut down flakes by linking to an open flake issue when you hit one in your PR.

@k8s-ci-robot (Contributor) commented Nov 29, 2016

Jenkins Kubemark GCE e2e failed for commit a4a7bae. Full PR test history.

The magic incantation to run this job again is @k8s-bot kubemark e2e test this. Please help us cut down flakes by linking to an open flake issue when you hit one in your PR.

@k8s-ci-robot (Contributor) commented Nov 29, 2016

Jenkins GCE etcd3 e2e failed for commit a4a7bae. Full PR test history.

The magic incantation to run this job again is @k8s-bot gce etcd3 e2e test this. Please help us cut down flakes by linking to an open flake issue when you hit one in your PR.

@k8s-ci-robot (Contributor) commented Nov 29, 2016

Jenkins CRI GCE Node e2e failed for commit a4a7bae. Full PR test history.

The magic incantation to run this job again is @k8s-bot cri node e2e test this. Please help us cut down flakes by linking to an open flake issue when you hit one in your PR.

@k8s-ci-robot (Contributor) commented Nov 29, 2016

Jenkins GKE smoke e2e failed for commit a4a7bae. Full PR test history.

The magic incantation to run this job again is @k8s-bot cvm gke e2e test this. Please help us cut down flakes by linking to an open flake issue when you hit one in your PR.

@k8s-ci-robot (Contributor) commented Nov 29, 2016

Jenkins GCE Node e2e failed for commit a4a7bae. Full PR test history.

The magic incantation to run this job again is @k8s-bot node e2e test this. Please help us cut down flakes by linking to an open flake issue when you hit one in your PR.

@k8s-ci-robot (Contributor) commented Nov 29, 2016

Jenkins verification failed for commit a4a7bae. Full PR test history.

The magic incantation to run this job again is @k8s-bot verify test this. Please help us cut down flakes by linking to an open flake issue when you hit one in your PR.

@k8s-github-robot (Contributor) commented Dec 9, 2016

@MikaelCluseau PR needs rebase

@k8s-github-robot (Contributor) commented Jan 23, 2017

[APPROVALNOTIFIER] Needs approval from an approver in each of these OWNERS Files:

We suggest the following people:
cc @smarterclayton
You can indicate your approval by writing /approve in a comment
You can cancel your approval by writing /approve cancel in a comment

@mcluseau (Contributor, author) commented Feb 23, 2017

Will be merged as part of #41804.
