flexvolume: rework for the new volume controller design #26926

Closed

Conversation

@mcluseau
Contributor

mcluseau commented Jun 7, 2016

* This commit redesigns the flexvolume plugin and exposes changes to the internal storage API including the Attacher interface.
* WARNING: The flexvolume plugin API has changed with this release. Existing Flex drivers will have to be modified to work with the new interface (see Flex volume documentation for details).


@googlebot googlebot added the cla: yes label Jun 7, 2016

@k8s-bot

k8s-bot commented Jun 7, 2016

Can one of the admins verify that this patch is reasonable to test? If so, please reply "ok to test".
(Note: "add to whitelist" is no longer supported. Please update configurations in kubernetes/test-infra/jenkins/job-configs/kubernetes-jenkins-pull instead.)

This message may repeat a few times in short succession due to jenkinsci/ghprb-plugin#292. Sorry.

Otherwise, if this message is too spammy, please complain to ixdy.

@pmorie
Member

pmorie commented Jun 7, 2016

@k8s-bot ok to test

@saad-ali saad-ali self-assigned this Jun 7, 2016

@saad-ali
Member

saad-ali commented Jun 7, 2016

Thanks for jumping on this @MikaelCluseau
I'll try to take a look at this tomorrow.

@mcluseau
Contributor

mcluseau commented Jun 7, 2016

A few notes:

  • I'm splitting commits into small change steps (hoping it helps review);
  • we need the pod namespace to get the secrets from the secretRef; maybe the namespace should be in volume.Spec?
  • I don't know what to do about the hostname for now.

@k8s-merge-robot k8s-merge-robot added size/L and removed size/M labels Jun 7, 2016

@pmorie
Member

pmorie commented Jun 7, 2016

we need the pod namespace to get the secrets from the secretRef, maybe the namespace should be in volume.Spec?

That's a bit of a nasty tangle. You'll need to know the namespace this is being attached for.

@pmorie
Member

pmorie commented Jun 7, 2016

Let me elaborate on my last comment:

  1. We currently do not allow cross-namespace references to secrets.
  2. PersistentVolumes are cluster-scoped (i.e., non-namespaced).
  3. I would like to avoid exposing namespace into volume spec if at all possible.

@mcluseau
Contributor

mcluseau commented Jun 7, 2016

On 06/08/2016 03:33 AM, Paul Morie wrote:

Let me elaborate on my last comment:

  1. We currently do not allow cross-namespace references to secrets
  2. PersistentVolumes are cluster-scoped (i.e., non-namespaced)
  3. I would like to avoid exposing namespace into volume spec if at all possible

I understand, but FlexVolume currently has this feature of putting a secret's keys/values in the options sent to its plugin, and you cannot get a secret without its namespace. I see the following possibilities:

  1. add the namespace to volume.Spec (not what you want);
  2. add the namespace to the parameters of the Attacher interface (probably not what you want either);
  3. add the namespace to the SecretRef of the FlexVolume before calling Attach or Mount (probably meaning specific code in a generic part + a hidden field in an API object);
  4. discard this feature (breaking the API, but it's tagged alpha anyway).

History has shown that I don't always have all the alternatives in mind, but for now it looks like a "choose your evil".

@chakri-nelluri
Contributor

chakri-nelluri commented Jun 8, 2016

Thanks for jumping in @MikaelCluseau. I was on a short vacation and am just catching up on GitHub. Since you already started this exercise, I can help you review and verify it.

@saad-ali
Member

saad-ali commented Jun 8, 2016

Plugins inferring namespace from pod to fetch secrets is problematic.

Between the options @MikaelCluseau listed:

1. add the namespace to volume.Spec (not what you want)

Agreed that sticking the namespace in volume.Spec is not ideal, but volume.Spec is entirely internal, so it doesn't require any API changes, which is nice.

2. add the namespace to the parameters of the Attacher interface (probably not what you want either)

This is equally bad/good as 1.

3. add the namespace to the SecretRef of the FlexVolume before calling Attach or Mount (probably meaning specific code in a generic part + a hidden field in an API object)

This would work, but it is super hacky. I say no.

4. discard this feature (breaking the API, but it's tagged alpha anyway)

Discarding the feature is an option for Flex for the reason you stated. But other plugins like rbd have the same issue, so we need to solve it.

There is a 5th option:

Modify the API to have SecretRef be an *ObjectReference type instead of *LocalObjectReference. *ObjectReference contains the namespace as well as the name. This seems like the "correct" solution--when you reference a secret you must provide the full reference (including the namespace). But it requires backwards-compatibility considerations. Let's discuss this in the 9 AM PST sig-storage meeting. I'll add it to the agenda.
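
For readers following along, here is a minimal sketch of the type change behind this 5th option (field lists trimmed to what matters here; not the actual Kubernetes API source):

package api

// Today: FlexVolumeSource.SecretRef is a *LocalObjectReference, i.e. a bare
// name whose namespace is implied by the pod using the volume.
type LocalObjectReference struct {
	Name string
}

// Option 5: switch to a full reference that also carries the namespace, so a
// plugin can fetch the secret without knowing the pod. (The real
// ObjectReference has more fields; only the two relevant ones are shown.)
type ObjectReference struct {
	Namespace string
	Name      string
}

type FlexVolumeSource struct {
	Driver    string
	FSType    string
	ReadOnly  bool
	Options   map[string]string
	SecretRef *ObjectReference // was *LocalObjectReference
}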

@mcluseau
Contributor

mcluseau commented Jun 8, 2016

On 06/09/2016 08:40 AM, Chakravarthy Nelluri wrote:

I can help you review and verify it.

Thanks, that's good for me because I'm an occasional contributor and the last thing I want is to introduce bad code.

@mcluseau
Contributor

mcluseau commented Jun 8, 2016

On 06/09/2016 08:48 AM, Saad Ali wrote:

Between the options @MikaelCluseau listed:

 1. add the namespace to volume.Spec (not what you want)

Agreed that sticking the namespace in volume.Spec is not ideal, but volume.Spec is entirely internal so it doesn't require any API changes which is nice.

[...]

There is a 5th option:

Modify the API to have SecretRef be an *ObjectReference type instead of *LocalObjectReference. *ObjectReference contains the namespace as well as the name. This seems like the "correct" solution--when you reference a secret you must provide the full reference (including the namespace). But it requires backwards compatibility considerations. Let's discuss this in the 9 AM PST sig-storage meeting. I'll add it to the agenda.

I agree it looks like the correct way. The only drawback I can see here is that users will have to specify the namespace, and that namespace will be constrained to the pod's namespace. That said, the API server can do the grunt work here (fill it with the pod's namespace by default and verify the constraint). A practical path could be to add the namespace to volume.Spec as a first step, and move to the 5th option afterwards.

@saad-ali
Member

saad-ali commented Jun 8, 2016

I spoke with @thockin offline. He had a good point: because these secret references are inside a volume definition that is ultimately referenced by a Pod, LocalObjectReference is the correct way to reference the secret. The user should not have to respecify the namespace; it should be the same namespace as the pod. Also, the user should not be able to reference a secret from a different namespace than the pod namespace. I agree with these points.

Therefore, I say let's go with #2 for now. @pmorie, speak up if you disagree.

@saad-ali
Member

saad-ali commented Jun 8, 2016

I agree it looks like the correct way. The only drawback I can see here is that users will have to specify the namespace, and that namespace will be constrained to the pod's namespace. That said, the API server can do the grunt work here (fill it with the pod's namespace by default and verify the constraint). A practical path could be to add the namespace to volume.Spec as a first step, and move to the 5th option afterwards.

Just saw your comment @MikaelCluseau. Yes, we are in agreement. Either 1 or 2 is fine with me. If @pmorie is ok with it, could you implement one of these approaches?

@chakri-nelluri
Contributor

chakri-nelluri commented Jun 8, 2016

Thanks @saad-ali, I like it too. The final consumer is the Pod, and it works for our flex volume use case too.

@mcluseau
Contributor

mcluseau commented Jun 8, 2016

On 06/09/2016 10:04 AM, Saad Ali wrote:

Just saw your comment @MikaelCluseau. Yes, we are in agreement. Either 1 or 2 is fine with me. If @pmorie is ok with it, could you implement one of these approaches?

Sure, I'll do whatever is needed to get at least FlexVolume to this point. I prefer the 1st approach though, because it specifies information available to plugins in one central place.

@saad-ali
Member

saad-ali commented Jun 8, 2016

Sure, I'll do whatever is needed to get at least FlexVolume to this point. I prefer the 1st approach though, because it specifies information available to plugins in one central place.

Awesome, 1 is fine with me. Let's go with that. If @pmorie has beef, we'll deal with it 👊 😆

@mcluseau
Contributor

mcluseau commented Jun 9, 2016

On 06/09/2016 10:34 AM, Saad Ali wrote:

Awesome, 1 is fine with me. Let's go with that. If @pmorie has beef, we'll deal with it 👊 😆

Okay ;-) I'm adding PodNamespace and PodName as well, since the pod's name was used in error logging in FlexVolume and there's a strong link with the Pod anyway (may be useful for other plugins too).

@mcluseau
Contributor

mcluseau commented Jun 9, 2016

On 06/09/2016 02:00 PM, Mikaël Cluseau wrote:

okay ;-) I'm adding PodNamespace and PodName as the pod's name was used in error logging in FlexVolume and there's a strong link with the Pod anyway (may be useful for other plugins too).

...and the PodUID, because of:

func (f *flexVolumeMounter) GetDeviceMountPath(spec *volume.Spec) string {
	name := f.driverName
	return f.plugin.host.GetPodVolumeDir(f.podUID, utilstrings.EscapeQualifiedNameForDisk(name), spec.Name())
}

@thockin
Member

thockin commented Jun 9, 2016

Does this become all of ObjectMeta?

@mcluseau
Contributor

mcluseau commented Jun 9, 2016

On 06/09/2016 04:23 PM, Tim Hockin wrote:

Does this become all of ObjectMeta?

I was wondering exactly that, like "maybe some plugin will want the annotations". But in that case, I think it could query the kubelet for extra info. So for now, this is the smallest amount of data required.

@mcluseau
Contributor

mcluseau commented Jun 9, 2016

On 06/09/2016 04:23 PM, Tim Hockin wrote:

Does this become all of ObjectMeta?

BTW, I tried to anticipate that by using this interface:

func (spec *Spec) SetPodInfo(pod *api.Pod)

(see last commit)
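
To make the shape of option 1 concrete, here is a rough sketch of volume.Spec with the pod fields proposed in this thread (field and import names are assumptions based on the 1.3-era tree, not the merged code):

package volume

import (
	"k8s.io/kubernetes/pkg/api"
	"k8s.io/kubernetes/pkg/types"
)

// Spec with the pod coordinates discussed above, so plugins such as Flex can
// resolve a secretRef and build per-pod paths without extra plumbing.
type Spec struct {
	Volume           *api.Volume
	PersistentVolume *api.PersistentVolume
	ReadOnly         bool

	PodNamespace string    // needed to fetch the secret behind secretRef
	PodName      string    // used in FlexVolume error logging
	PodUID       types.UID // needed by GetDeviceMountPath
}

// SetPodInfo copies the relevant pod metadata into the spec.
func (spec *Spec) SetPodInfo(pod *api.Pod) {
	spec.PodNamespace = pod.Namespace
	spec.PodName = pod.Name
	spec.PodUID = pod.UID
}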

@sgotti
Contributor

sgotti commented Jun 9, 2016

@MikaelCluseau @saad-ali
From #20262 (comment), implementations of the attach/detach interfaces may be executed by any host.

Moving the flexvolume plugin's attach/detach/mount/unmount methods to follow the attacher/detacher/mounter/unmounter interfaces and logic will make the flexvolume interface incompatible with the current one. For example, the current flexvolume doc and the lvm example in https://github.com/kubernetes/kubernetes/tree/master/examples/flexvolume won't work, since attach can be executed by any host; the lvm example's attach/detach parts would have to move inside the mount/unmount functions.

Since this backward-incompatible change can break current flexvolume plugins, perhaps a new kind of flexvolume plugin (with another name) should be added instead?

Edit: noticed that flex volume is marked as alpha, so there's probably no need for backward compatibility.

I don't know what to do about the hostname for now.

The hostname has to be passed to the flexvolume plugin attach/detach methods, since the flex volume plugin attach method needs to know the hostname of the target node.

@mcluseau
Contributor

mcluseau commented Jun 9, 2016

On 06/09/2016 11:18 PM, Simone Gotti wrote:

The hostname has to be passed to the flexvolume plugin attach/detach methods, since the flex volume plugin attach method needs to know the hostname of the target node.

Your concerns about breaking are the actual reason I don't know what to do about the hostname for now :-) Let me explain more.

If the attach logic makes sense only with the actual host being up, then this logic can be run directly on the target host via something (a call to the kubelet? ssh exec? calling attach in WaitForAttach instead?) that will run the flexvolume plugin on the host. In this case, the LVM example is not broken by this change. The detach logic is not the subject here, but it's a noop in the LVM example so it won't be broken, and I'd say that's because detach doesn't make sense in this case.

But this "host up" assumption may also be wrong (i.e. we want flexvolume to support cold or externally managed attachment). In that case, there's much more work to be done (like how to pass the device created by attach) and your compatibility concerns become real (but flexvolume is still alpha).

Sooo yeah, I'm still not sure what to do about this hostname, esp. with e2e tests currently green :-)

pkg/volume/flexvolume/flexvolume.go
} else if spec.PersistentVolume != nil && spec.PersistentVolume.Spec.FlexVolume != nil {
source = spec.PersistentVolume.Spec.FlexVolume
switch {
case spec.Volume != nil && spec.Volume.FlexVolume != nil:

@chakri-nelluri

chakri-nelluri Jun 9, 2016

Contributor

Use getVolumeSource() you have in the code below and remove this one.
Nit - I prefer the old one.. more readable. It might be just me :)

@mcluseau

mcluseau Jun 9, 2016

Contributor

I changed it this way because I've seen this: https://github.com/kubernetes/kubernetes/blob/master/pkg/volume/plugins.go#L213
Oh and yes, I don't know why I put this one oO

@chakri-nelluri

chakri-nelluri Jun 9, 2016

Contributor

Ack..

@chakri-nelluri
Contributor

chakri-nelluri commented Jun 9, 2016

@MikaelCluseau when you get a chance, can you please squash all the changes into one diff?

pkg/volume/flexvolume/flexvolume.go

@@ -241,76 +245,142 @@ func (f flexVolumeMounter) GetAttributes() volume.Attributes {
 // flexVolumeManager is the abstract interface to flex volume ops.
 type flexVolumeManager interface {
 	// Attaches the disk to the kubelet's host machine.
-	attach(mounter *flexVolumeMounter) (string, error)
+	attach(mounter *flexVolumeMounter, volOptions map[string]string) (string, error)

@chakri-nelluri

chakri-nelluri Jun 9, 2016

Contributor

We have to add hostname to attach call

pkg/volume/flexvolume/flexvolume.go

@@ -241,76 +245,142 @@ func (f flexVolumeMounter) GetAttributes() volume.Attributes {
 // flexVolumeManager is the abstract interface to flex volume ops.
 type flexVolumeManager interface {
 	// Attaches the disk to the kubelet's host machine.
-	attach(mounter *flexVolumeMounter) (string, error)
+	attach(mounter *flexVolumeMounter, volOptions map[string]string) (string, error)
 	// Detaches the disk from the kubelet's host machine.
 	detach(unmounter *flexVolumeUnmounter, dir string) error

@chakri-nelluri

chakri-nelluri Jun 9, 2016

Contributor

Same here.. we need to add hostname to the detach call. If not, the plugin will not be able to figure out which node to talk to.

@mcluseau

mcluseau Jun 9, 2016

Contributor

Do you mind if I do that when I implement the detacher interface? I'm trying to focus on attach here.

@chakri-nelluri

chakri-nelluri Jun 9, 2016

Contributor

Sure..SGTM.

pkg/volume/flexvolume/flexvolume.go
notmnt, err := f.blockDeviceMounter.IsLikelyNotMountPoint(dir)
if err != nil && !os.IsNotExist(err) {
glog.Errorf("Cannot validate mountpoint: %s", dir)
err := f.Attach(f.spec, "")

@chakri-nelluri

chakri-nelluri Jun 9, 2016

Contributor

Pass hostname.

@mcluseau

mcluseau Jun 9, 2016

Contributor

I don't know where to get it from here; f.plugin.host doesn't expose it. I can put localhost, though. SetUpAt is supposed to disappear after the move to the plugin, right?

@saad-ali

saad-ali Jun 9, 2016

Member

No--Setup/SetUpAt/TearDown/TearDownAt should remain since they are required for mounting and are part of the Mounter interface, which all plugins should implement. The only thing to do with them is to remove the attach calls from them and move those to the Attacher interface methods.

The MountDevice method, part of the Attacher interface, is designed so that volumes that require attachment can have a two-step mount: MountDevice mounts the device to a global mount path, and then Setup bind-mounts the global mount path into the pod. It's up to plugin writers to decide exactly what they will do for each call.

So Flex should implement the following calls:

Mounter:

  • Setup()/SetUpAt() calls <driver executable> mount
    • Same as before except no implicit attach call

Unmounter:

  • TearDown()/TearDownAt() calls <driver executable> unmount
    • Same as before except no implicit detach call

Attacher:

  • Attach() calls <driver executable> attach (if defined by executable)
    • New interface method calls existing flex executable method
  • WaitForAttach() calls <driver executable> waitforattach (if defined by executable; it should be if attach is defined)
    • New interface method calls new flex executable method
  • GetDeviceMountPath() calls <driver executable> getdevicemountpath (if defined by executable; it should be if attach is defined)
    • New interface method calls new flex executable method
  • MountDevice() calls <driver executable> mountdevice (if defined by executable)
    • New interface method calls new flex executable method

Detacher:

  • UnmountDevice() calls <driver executable> unmountdevice (if defined by executable)
    • New interface method calls new flex executable method
  • Detach() calls <driver executable> detach (if defined by executable)
    • New interface method calls existing flex executable method
  • WaitForDetach() calls <driver executable> waitfordetach (if defined by executable; it should be if attach is defined)
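
For illustration, here is a rough sketch of how Attacher methods could shell out to the flex driver following the mapping above. The helper names, the single JSON options argument, and the simplified signatures are assumptions made for readability, not the merged implementation:

package flexvolume

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// flexAttacher shells out to the flex driver with one sub-command per
// interface method, following the mapping listed above.
type flexAttacher struct {
	execPath string // path to the flex driver executable (assumed known)
}

// call runs "<driver executable> <sub-command> <json options>" and returns the
// driver's trimmed output (e.g. a device path for "attach").
func (a *flexAttacher) call(subCommand string, options map[string]string) (string, error) {
	jsonOptions, err := json.Marshal(options)
	if err != nil {
		return "", err
	}
	out, err := exec.Command(a.execPath, subCommand, string(jsonOptions)).CombinedOutput()
	if err != nil {
		return "", fmt.Errorf("flex %s failed: %v, output: %s", subCommand, err, out)
	}
	return strings.TrimSpace(string(out)), nil
}

// Attach maps to the existing "attach" driver call.
func (a *flexAttacher) Attach(options map[string]string) (string, error) {
	return a.call("attach", options)
}

// WaitForAttach maps to the new "waitforattach" driver call.
func (a *flexAttacher) WaitForAttach(options map[string]string) (string, error) {
	return a.call("waitforattach", options)
}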

@mcluseau

mcluseau Jun 9, 2016

Contributor

Thanks a lot Saad, this defines my next step: move the Attacher methods to the plugin, add the new methods, update the tests, remove Attach from SetUpAt.

@mcluseau

mcluseau Jun 10, 2016

Contributor

Ummm, what will be the difference between mount and mountdevice in the executable? In the current code, SetUpAt (minus Attach/WaitForAttach) only calls MountDevice with the result of GetDeviceMountPath. Then MountDevice currently calls mount from the executable, or falls back to the default mount logic if mount is not supported by the executable.

@mcluseau

mcluseau Jun 10, 2016

Contributor

I can use a lock per volume on attach, and return the attach error (nil/error) to all callers. But from what Saad told me in #20262 (comment), the controller should be the one ensuring that guarantee.

@saad-ali

saad-ali Jun 10, 2016

Member

Right, the controller (and soon the new volume manager) will ensure that the attach/detach/mount/unmount operations happen in the correct order and don't happen concurrently. Plugins are responsible for ensuring idempotency of attach/detach/mount/unmount operations.
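
As a sketch of what that idempotency contract can look like inside a plugin (a hypothetical helper, not the flex code): a repeated attach of the same volume to the same host must return the same device without repeating the side effect.

package flexvolume

import "sync"

// attachTracker records completed attachments so that a repeated attach for
// the same volume/host pair returns the recorded device instead of re-running
// the operation. Purely illustrative of the idempotency requirement above.
type attachTracker struct {
	mu      sync.Mutex
	devices map[string]string // "volume/host" -> device path
}

func (t *attachTracker) attachOnce(volume, host string, doAttach func() (string, error)) (string, error) {
	t.mu.Lock()
	defer t.mu.Unlock()
	if t.devices == nil {
		t.devices = map[string]string{}
	}
	key := volume + "/" + host
	if device, ok := t.devices[key]; ok {
		return device, nil // already attached: same answer, no second attach
	}
	device, err := doAttach()
	if err != nil {
		return "", err
	}
	t.devices[key] = device
	return device, nil
}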

@mcluseau

mcluseau Jun 10, 2016

Contributor

I joined the IRC channel just in case (as mcluseau).

@mcluseau

mcluseau Jun 10, 2016

Contributor

orrrr the (new) slack channel ;)

@chakri-nelluri

chakri-nelluri Jun 10, 2016

Contributor

@MikaelCluseau @saad-ali, let's kill the pending attach call/invocation on timeout. That would take care of the process leak on timeout.

pkg/volume/flexvolume/flexvolume.go
if f.options == nil {
f.options = make(map[string]string)
func getVolumeSource(spec *volume.Spec) *api.FlexVolumeSource {

@chakri-nelluri

chakri-nelluri Jun 9, 2016

Contributor

Use this function instead of the Plugin.getVolumeSource from above.

pkg/volume/flexvolume/flexvolume.go
f.attachResultC = make(chan attachResult, 1)
}
go func() {
device, err := f.manager.attach(f, options)

@chakri-nelluri

chakri-nelluri Jun 9, 2016

Contributor

This one is a bit tricky. If the attach call misbehaves and gets stuck, we will keep leaking a goroutine.

@mcluseau

mcluseau Jun 9, 2016

Contributor

Yes, that's really the big point. The goroutine leak would also be a process leak here, I think :-) This also requires that Attach and WaitForAttach be called in the same host/process, which is the case now (calling SetUpAt) but may not be true after the move to the plugin. Given how, for instance, the GCE volume plugin works, I could try to get the exec.Cmd object out of attach and watch it in WaitForAttach instead of using a chan+goroutine?
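
For reference, one way to avoid both the goroutine and the process leak discussed here, sketched with the standard library's context support (an illustration of the "kill the pending attach on timeout" idea above, not the code in this PR):

package flexvolume

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// runDriverWithTimeout executes "<driver executable> <sub-command> <args>" and
// kills the process if it does not finish within timeout, so a misbehaving
// driver cannot leak a goroutine or a child process.
func runDriverWithTimeout(execPath, subCommand string, timeout time.Duration, args ...string) (string, error) {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()

	cmd := exec.CommandContext(ctx, execPath, append([]string{subCommand}, args...)...)
	out, err := cmd.CombinedOutput()
	if ctx.Err() == context.DeadlineExceeded {
		return "", fmt.Errorf("flex %s timed out after %v (process killed)", subCommand, timeout)
	}
	if err != nil {
		return "", fmt.Errorf("flex %s failed: %v, output: %s", subCommand, err, out)
	}
	return strings.TrimSpace(string(out)), nil
}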

@chakri-nelluri
Contributor

chakri-nelluri commented Aug 23, 2016

Opened #31298 to figure out the mechanism to access secrets in attach & detach calls.

@k8s-bot

k8s-bot commented Sep 5, 2016

GCE e2e build/test passed for commit 94b4935.

@k8s-merge-robot
Collaborator

k8s-merge-robot commented Oct 5, 2016

This PR hasn't been active in 30 days. It will be closed in 59 days (Dec 4, 2016).

cc @MikaelCluseau @rootfs @saad-ali

You can add the 'keep-open' label to prevent this from happening, or add a comment to keep it open another 90 days.

@k8s-ci-robot
Collaborator

k8s-ci-robot commented Nov 29, 2016

Thanks for your pull request. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

📝 Please follow instructions at https://github.com/kubernetes/kubernetes/wiki/CLA-FAQ to sign the CLA.

Once you've signed, please reply here (e.g. "I signed it!") and we'll verify. Thanks.

If you have questions or suggestions related to this bot's behavior, please file an issue against the kubernetes/test-infra repository (https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:).

@k8s-ci-robot
Collaborator

k8s-ci-robot commented Nov 29, 2016

Jenkins GCE e2e failed for commit a4a7bae. Full PR test history.

The magic incantation to run this job again is @k8s-bot cvm gce e2e test this. Please help us cut down flakes by linking to an open flake issue when you hit one in your PR.

@k8s-ci-robot
Collaborator

k8s-ci-robot commented Nov 29, 2016

Jenkins GCI GKE smoke e2e failed for commit a4a7bae. Full PR test history.

The magic incantation to run this job again is @k8s-bot gci gke e2e test this. Please help us cut down flakes by linking to an open flake issue when you hit one in your PR.

@k8s-ci-robot
Collaborator

k8s-ci-robot commented Nov 29, 2016

Jenkins kops AWS e2e failed for commit a4a7bae. Full PR test history.

The magic incantation to run this job again is @k8s-bot kops aws e2e test this. Please help us cut down flakes by linking to an open flake issue when you hit one in your PR.

@k8s-ci-robot
Collaborator

k8s-ci-robot commented Nov 29, 2016

Jenkins unit/integration failed for commit a4a7bae. Full PR test history.

The magic incantation to run this job again is @k8s-bot unit test this. Please help us cut down flakes by linking to an open flake issue when you hit one in your PR.

@k8s-ci-robot
Collaborator

k8s-ci-robot commented Nov 29, 2016

Jenkins GCI GCE e2e failed for commit a4a7bae. Full PR test history.

The magic incantation to run this job again is @k8s-bot gci gce e2e test this. Please help us cut down flakes by linking to an open flake issue when you hit one in your PR.

@k8s-ci-robot
Collaborator

k8s-ci-robot commented Nov 29, 2016

Jenkins Kubemark GCE e2e failed for commit a4a7bae. Full PR test history.

The magic incantation to run this job again is @k8s-bot kubemark e2e test this. Please help us cut down flakes by linking to an open flake issue when you hit one in your PR.

@k8s-ci-robot
Collaborator

k8s-ci-robot commented Nov 29, 2016

Jenkins GCE etcd3 e2e failed for commit a4a7bae. Full PR test history.

The magic incantation to run this job again is @k8s-bot gce etcd3 e2e test this. Please help us cut down flakes by linking to an open flake issue when you hit one in your PR.

@k8s-ci-robot
Collaborator

k8s-ci-robot commented Nov 29, 2016

Jenkins CRI GCE Node e2e failed for commit a4a7bae. Full PR test history.

The magic incantation to run this job again is @k8s-bot cri node e2e test this. Please help us cut down flakes by linking to an open flake issue when you hit one in your PR.

@k8s-ci-robot
Collaborator

k8s-ci-robot commented Nov 29, 2016

Jenkins GKE smoke e2e failed for commit a4a7bae. Full PR test history.

The magic incantation to run this job again is @k8s-bot cvm gke e2e test this. Please help us cut down flakes by linking to an open flake issue when you hit one in your PR.

@k8s-ci-robot
Collaborator

k8s-ci-robot commented Nov 29, 2016

Jenkins GCE Node e2e failed for commit a4a7bae. Full PR test history.

The magic incantation to run this job again is @k8s-bot node e2e test this. Please help us cut down flakes by linking to an open flake issue when you hit one in your PR.

@k8s-ci-robot
Collaborator

k8s-ci-robot commented Nov 29, 2016

Jenkins verification failed for commit a4a7bae. Full PR test history.

The magic incantation to run this job again is @k8s-bot verify test this. Please help us cut down flakes by linking to an open flake issue when you hit one in your PR.

@k8s-merge-robot
Collaborator

k8s-merge-robot commented Dec 9, 2016

@MikaelCluseau PR needs rebase

@k8s-merge-robot
Collaborator

k8s-merge-robot commented Jan 23, 2017

[APPROVALNOTIFIER] Needs approval from an approver in each of these OWNERS files:

We suggest the following people:
cc @smarterclayton
You can indicate your approval by writing /approve in a comment
You can cancel your approval by writing /approve cancel in a comment

@mcluseau
Contributor

mcluseau commented Feb 23, 2017

Will be merged as part of #41804.
