downward api volume plugin #5093

Merged: 1 commit merged into kubernetes:master on Sep 2, 2015

Conversation

sdminonne (Contributor) commented Mar 5, 2015

This PR implements #560, especially what is discussed here.
Dumping info in files was discussed in #386 as well.

Needs rebasing after #8788 is merged

erictune (Member) commented Mar 5, 2015

@thockin loves sidecars.

pmorie (Member) commented Mar 5, 2015

@erictune This PR is hand-crafted to please @thockin

sdminonne changed the title from "WIP for https://github.com/GoogleCloudPlatform/kubernetes/issues/560" to "WIP: pod info sidecar" on Mar 5, 2015

+}
+
+// generateTmpFile writes input parameter 'values' in a temporary file. File descriptor
+// is returned. In case error is returned file descriptor is not meaningfull

pmorie (Member) commented Mar 5, 2015

Nit: typo: meaningful
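For context only (this is not code from the PR): a minimal sketch of what a helper matching the doc comment above could look like; the name, signature, and use of ioutil are assumptions.

package downwardapi

import (
    "io/ioutil"
    "os"
)

// generateTmpFile writes 'values' into a temporary file created under 'dir'
// and returns the open file; when an error is returned the file is not
// meaningful. (Sketch only; the helper in this PR may differ.)
func generateTmpFile(dir string, values []byte) (*os.File, error) {
    f, err := ioutil.TempFile(dir, "downwardapi")
    if err != nil {
        return nil, err
    }
    if _, err := f.Write(values); err != nil {
        f.Close()
        return nil, err
    }
    return f, nil
}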

thockin (Member) commented Mar 7, 2015

I do like this! Is this something we think we want to pursue? It seems viable to me...

bgrant0607 referenced this pull request Mar 7, 2015: WIP: pod preconditions #4245 (closed)

pmorie (Member) commented Mar 9, 2015

@thockin If it does the job, I think we can pursue it. One thing that has come up in discussion is concern about the load on the api server as you scale up usage of this sidecar in pods. There's been some concern over making the pod have to use a sidecar container for this in environments where every pod needs to consume this info. @sdminonne @EricMountain-1A do you want to add to that at all?

All that said, this seems like what we will want to go with if we want to move toward all user concerns that use resources happening in a user container. Either way you will have the same amount of disk IO, and if you make the platform provide some baked-in first-class support for this, the complexity of the API required to express it starts to approach the API required to fully express a container (as you've mentioned before). I believe the only thing that would change is that you'd have one pod watch per node (once #4777 is implemented) instead of an additional watch per pod.

Any thoughts, @smarterclayton?
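For illustration only (not code from this PR, and written against today's client-go, which postdates this thread): a single node-scoped watch along the lines discussed above might look roughly like this; the spec.nodeName field selector and in-cluster config are assumptions about the deployment.

package example

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
)

// watchPodsOnNode opens one watch that covers every pod scheduled to nodeName,
// instead of one watch per pod.
func watchPodsOnNode(ctx context.Context, nodeName string) error {
    cfg, err := rest.InClusterConfig()
    if err != nil {
        return err
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        return err
    }
    w, err := cs.CoreV1().Pods(metav1.NamespaceAll).Watch(ctx, metav1.ListOptions{
        FieldSelector: "spec.nodeName=" + nodeName,
    })
    if err != nil {
        return err
    }
    defer w.Stop()
    for ev := range w.ResultChan() {
        fmt.Printf("pod event: %s\n", ev.Type) // react to metadata/label changes here
    }
    return nil
}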

sdminonne (Contributor) commented Mar 9, 2015

I had a good discussion with @smarterclayton, who clarified some points for me about the re-labeling process (more specifically, the fact that a pod is going to be restarted automatically) and the expected load on the master. The latter was my major concern. As soon as the load on the master is under control I'm OK with this mechanism (which I'll finalize anyway) and we may use it.

thockin (Member) commented Mar 10, 2015

Assuming this uses watch properly, how would the load on the API server be any different if every pod watches or if the kubelet watches on behalf of every pod? Maybe the kubelet could do a single watch for many pods?

If that is an issue, I bet we could do something like watch the kubelet's API for your own pod, which the kubelet would proxy into a group watch on the API server.

smarterclayton (Contributor) commented Mar 10, 2015

Since the kubelet has the labels, and already watches, why can't the pod talk to the kubelet to get this, maybe via the metadata service thing that Rocket has proposed.

thockin (Member) commented Mar 10, 2015

I think that is more or less what I said :) If you squint.

smarterclayton (Contributor) commented Mar 10, 2015

You've proven I actually don't read what anyone else writes. /hangs-head

bgrant0607 (Member) commented Mar 10, 2015

How is the name of the pod supposed to get set?

This seems pretty heavyweight and opaque to me.

pmorie (Member) commented Mar 10, 2015

@thockin @smarterclayton Yep, I had pods querying the kubelet and a single watch from the kubelet in mind.

@sdminonne Any opinions on that?

sdminonne (Contributor) commented Mar 12, 2015

@thockin, @smarterclayton @pmorie sorry for the delay on this.
I would be OK with watching the kubelet rather than the api-server, but to be honest I cannot answer @bgrant0607's question.

Clues?

smarterclayton (Contributor) commented Mar 12, 2015

Kubelet already has watch.

thockin (Member) commented Mar 16, 2015

Kubelet's API should be a reflection of the API server for things that the kubelet knows about. I'm sure it's not there yet.

sdminonne (Contributor) commented Mar 16, 2015

@smarterclayton @thockin the problem I'm facing is how to get the pod name. As for the kubelet, I can watch or poll it.

thockin (Member) commented Mar 16, 2015

Yeah, we've been punting on exposing that information - Clayton doesn't like the obvious approach (just slap on some env vars) and nobody has properly proposed and implemented the alternative (pre-process some env var values to be "special expansions").
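Purely for illustration (not part of this PR): the "special expansion" idea, in the shape Kubernetes eventually shipped for the downward API, resolves an env var from pod metadata. The sketch below uses the current k8s.io/api types, which postdate this thread.

package example

import (
    v1 "k8s.io/api/core/v1"
)

// podNameEnv asks the platform to expand the pod's own name into an env var,
// instead of the user hard-coding it.
var podNameEnv = v1.EnvVar{
    Name: "POD_NAME",
    ValueFrom: &v1.EnvVarSource{
        FieldRef: &v1.ObjectFieldSelector{FieldPath: "metadata.name"},
    },
}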

smarterclayton (Contributor) commented Mar 16, 2015

I'll make it a priority to iterate the current pull request proposal. The upstream change to Docker to resolve env automatically was the latest wrinkle.

pmorie (Member) commented Mar 23, 2015

@thockin @sdminonne @bgrant0607 @smarterclayton

I spoke to @sdminonne about this last week in person and have been thinking about it since: does this really need to be a side-car? It seems like it could easily be a volume plugin since the current incarnation really only needs to do a single projection (since pods are restarted when labels change).

@smarterclayton which PR were you referring to here?

"I'll make it a priority to iterate the current pull request proposal. The upstream change to Docker to resolve env automatically was the latest wrinkle"

thockin (Member) commented Mar 24, 2015

  1. I hope that one day not too far from now, we will not kill pods just to update their labels.

  2. This could be a volume plugin - are we going to have volume plugins for every atom of metadata? Or a single one, as per other discussions on downward API?

sdminonne (Contributor) commented Mar 24, 2015

@thockin @pmorie I'm OK with turning this into a volume plugin (even if I need a pointer to some code/docs to start from). And I'm OK with handling all the metadata as well.

thockin (Member) commented Mar 24, 2015

Before we really make this a volume (which is pretty easy), I want to get a decision made.

@bgrant0607 @erictune - we've gone back and forth about a couple of things:

  1. Is it OK to have $some_kubelet_abstraction that runs periodically (or continuously) on behalf of user pods? Secret-volume updates, label-volume updates, etc. My main concern is resource accounting and multi-pod isolation.

  1a) If YES to (1), we should establish patterns and rules for what atomic updates look like (e.g. secrets as implemented can not be atomic) @pmorie

  2. How many "downward facing API" volume plugins do we want? Secrets + volumes + whatever new ideas people come up with? Just one? Or maybe Secrets + one for everything else?

smarterclayton (Contributor) commented Mar 24, 2015

(replying to point 1 above) At the last f2f Eric and I argued that by definition secret updates should only be "forced" to be applied on creation of a pod. All other behaviors are undefined. A rolling update is how you update secrets; otherwise, write a mechanism that talks to the api server and reads your secrets via Watch, or use etcd.

thockin (Member) commented Mar 24, 2015

Why? If we decide we want to offer live volumes-as-a-service, why NOT update secrets (if it can be done atomically)?

smarterclayton (Contributor) commented Mar 25, 2015

What if I don't want that behavior? I need to change my code at the same time? Automatic atomic update seems like it should be opt-in for safety, not opt-out. Most users are going to need to restart anyway, so it seems nice, but not required.

thockin (Member) commented Mar 25, 2015

Be it opt-in or opt-out, I don't care. I admit that secrets might be (might be!) special, but I find it unsatisfying that some forms of these write-files-for-you volumes are done "in kernel" (kubelet) and some are done "in userspace", when the only difference is whether it gets refreshed or not.

pmorie (Member) commented Mar 25, 2015

Before I forget, let me point out:

"write a mechanism that talks to the api server and reads your secrets via Watch or use etcd"

I have spoken to a couple of people who want to do exactly this.

bgrant0607 (Member) commented Mar 28, 2015

I can't imagine the resource cost is very high, compared to everything else that Kubelet, the kube-proxy, and Docker are doing on behalf of containers. I don't think that's a justification to not provide the service.

I could get behind a volume plugin and/or an http-based metadata service exposed into the containers on a link-local address.

I don't think we need an arbitrary number of volume plugins. I would like to see a generic mechanism for populating a volume from a container (#831). Secrets are needed to bootstrap such a mechanism.

Yes, such a volume container could then contact apiserver and pull API-specific data and/or generic secret-like config objects. However, I wouldn't object to an integrated mechanism, given that the data/APIs would be Kubernetes-specific (as opposed to, say, pulling from GitHub) and the kubelet needs to contact apiserver for other info anyway.

As for updating secrets, I have to think that some people will want that.

thockin (Member) commented Apr 10, 2015

What's the status on this?

pkg/volume/downwardapi/downwardapi.go
+ pod: pod,
+ podUID: pod.UID,
+ plugin: plugin,
+ mounter: mounter}

thockin (Member) commented Aug 28, 2015

last } should be on a new line

pmorie (Member) commented Aug 28, 2015

@sdminonne Jenkins seems to be seriously flaky ATM. In the last e2e run:

[Fail] Services [It] should serve a basic endpoint from pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1147
pkg/volume/downwardapi/downwardapi.go
+
+func (b *downwardAPIVolumeBuilder) SetUpAt(dir string) error {
+ glog.V(3).Infof("Setting up a downwardAPI volume %v for pod %v/%v at %v", b.volName, b.pod.Namespace, b.pod.Name, dir)
+ // Wrap EmptyDir,let it do the setup.

thockin (Member) commented Aug 28, 2015

Comment that this relies on the idempotency of the wrapped plugin to avoid repeatedly mounting.

pmorie (Member) commented Aug 28, 2015

Should we have an equivalent comment about this in the other plugins that are derived from emptyDir?

thockin (Member) commented Aug 31, 2015

I'm generally in favor of more comments, but it seems particularly needed here, where idempotency is explicitly waived.

+}
+
+const (
+ downwardAPIDir = "..downwardapi"

thockin (Member) commented Aug 28, 2015

Comment on the crazy names? Something like "It seems reasonable to allow dot-files in the config, so we reserved double-dot-files for the implementation".

pkg/volume/downwardapi/downwardapi.go
+
+// writeData writes requested downwardAPI in specified files.
+//
+// The file visible in this volume are symlinks to files in the '.downwardapi'

thockin (Member) commented Aug 28, 2015

s/.downwardapi/..downwardapi/

pkg/volume/downwardapi/downwardapi.go
+//
+// The file visible in this volume are symlinks to files in the '.downwardapi'
+// directory. Actual files are stored in an hidden timestamped directory which is
+// symlinked to by '.downwardapi'. The timestamped directory and '.downwardapi' symlink

thockin (Member) commented Aug 28, 2015

same

+// <volume-dir>/podName
+// This structure is created:
+// <volume-dir>/podName -> ..downwardapi/podName
+// <volume-dir>/user_space/labels -> ../..downwardapi/user_space/labels

thockin (Member) commented Aug 28, 2015

why are these two ../..downwardapi ?

pmorie (Member) commented Aug 28, 2015

Those are the relative links for files in subdirectories... maybe we can add a note about that @sdminonne

thockin (Member) commented Aug 31, 2015

ah, right, yeah, it's always been confusing to me.

+// For example for files: "bar", "foo/bar", "baz/bar", "foo/baz/blah"
+// the following symlinks and subdirectory are created:
+// bar -> ..downwardapi/bar
+// baz/bar -> ../..downwardapi/baz/bar

thockin (Member) commented Aug 28, 2015

also ../..downwardapi here
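A small illustration of the relative-link rule being discussed (not code from this PR; the function name is made up): a projected file at depth d below the volume root needs to climb d levels before entering the ..downwardapi directory.

package example

import (
    "path/filepath"
    "strings"
)

// symlinkTarget computes the relative symlink target for a projected file,
// e.g. "bar" -> "..downwardapi/bar" and "baz/bar" -> "../..downwardapi/baz/bar".
func symlinkTarget(relPath string) string {
    depth := strings.Count(filepath.ToSlash(relPath), "/")
    return strings.Repeat("../", depth) + "..downwardapi/" + relPath
}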

+}
+
+func CleanEverything(plugin volume.VolumePlugin, testVolumeName, volumePath string, testPodUID types.UID, t *testing.T) {
+

thockin (Member) commented Aug 28, 2015

dead line

thockin (Member) commented Aug 28, 2015

Mostly LGTM - just a handful of nits, comments, etc.

thockin (Member) commented Aug 28, 2015

@k8s-bot test this please

k8s-bot commented Aug 28, 2015

GCE e2e build/test passed for commit 3692813.

pmorie (Member) commented Aug 28, 2015

@sdminonne Looks like it was a flake ;)

k8s-bot commented Aug 31, 2015

GCE e2e build/test passed for commit 6852cbd.

thockin (Member) commented Aug 31, 2015

validation seems to be the last nit

k8s-bot commented Aug 31, 2015

GCE e2e build/test passed for commit ec14124.

sdminonne (Contributor) commented Aug 31, 2015

@thockin @pmorie squashed and rebased

+ allErrs = append(allErrs, errs.NewFieldRequired("path"))
+ }
+ if path.IsAbs(downwardAPIVolumeFile.Path) {
+ allErrs = append(allErrs, errs.NewFieldForbidden("path", "must not be an absolute path"))

thockin (Member) commented Aug 31, 2015

I'm OK with this as forbidden

pkg/api/validation/validation.go
+ }
+ }
+ if strings.HasPrefix(items[0], "..") && len(items[0]) > 2 {
+ allErrs = append(allErrs, errs.NewFieldForbidden("path", "must not start with \"..\"."))

thockin (Member) commented Aug 31, 2015

this should be Invalid

sdminonne (Contributor) commented Sep 1, 2015

ok, done

+ items := strings.Split(downwardAPIVolumeFile.Path, string(os.PathSeparator))
+ for _, item := range items {
+ if item == ".." {
+ allErrs = append(allErrs, errs.NewFieldInvalid("path", downwardAPIVolumeFile.Path, "must not contain \"..\"."))

thockin (Member) commented Aug 31, 2015

this is the one I could go either way on
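Pulling the checks visible in these hunks together, roughly (a sketch, not the actual validation.go code; the function name and error strings are simplified):

package example

import (
    "os"
    "path"
    "strings"
)

// validateDownwardAPIPath mirrors the rules under review: the path must be set,
// must not be absolute, must not contain ".." segments, and its first segment
// must not start with ".." (that prefix is reserved for the implementation).
func validateDownwardAPIPath(p string) []string {
    var errs []string
    if len(p) == 0 {
        return append(errs, "path is required")
    }
    if path.IsAbs(p) {
        errs = append(errs, "must not be an absolute path")
    }
    items := strings.Split(p, string(os.PathSeparator))
    for _, item := range items {
        if item == ".." {
            errs = append(errs, "must not contain \"..\"")
        }
    }
    if strings.HasPrefix(items[0], "..") && len(items[0]) > 2 {
        errs = append(errs, "must not start with \"..\"")
    }
    return errs
}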

thockin (Member) commented Aug 31, 2015

Sorry, a bit of imprecision on the last go-around.

k8s-bot commented Aug 31, 2015

GCE e2e build/test failed for commit b6255d6.

sdminonne (Contributor) commented Sep 1, 2015

@thockin many apologies for the misunderstanding.
I've already squashed, but I've modified only validation(_test)?.go.

k8s-bot commented Sep 1, 2015

GCE e2e build/test passed for commit fc73855.

docs/user-guide/downward-api.md
+
+## Example
+
+This is an example of a pod that consumes its labels and annotations via the downward API volume, labels and annotations ared dumped rispectively in `/etc/podlabels` and in `/etc/annotations` files:

pmorie (Member) commented Sep 1, 2015

are dumped in `/etc/podlabels` and `/etc/annotations`, respectively:

sdminonne (Contributor) commented Sep 1, 2015

@pmorie thanks

sdminonne (Contributor) commented Sep 1, 2015

Fixed and squashed
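To make the docs example above concrete in code (for illustration only; this uses the current k8s.io/api types, which postdate this PR, and the volume name is made up): a single downward API volume that projects the pod's labels and annotations into two files, which a container would then mount and read.

package example

import (
    v1 "k8s.io/api/core/v1"
)

// podInfoVolume projects the pod's labels and annotations into the files
// "labels" and "annotations" under the volume's mount path.
var podInfoVolume = v1.Volume{
    Name: "podinfo",
    VolumeSource: v1.VolumeSource{
        DownwardAPI: &v1.DownwardAPIVolumeSource{
            Items: []v1.DownwardAPIVolumeFile{
                {Path: "labels", FieldRef: &v1.ObjectFieldSelector{FieldPath: "metadata.labels"}},
                {Path: "annotations", FieldRef: &v1.ObjectFieldSelector{FieldPath: "metadata.annotations"}},
            },
        },
    },
}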

k8s-bot commented Sep 1, 2015

GCE e2e build/test failed for commit 0eae3e3.

sdminonne (Contributor) commented Sep 1, 2015

the failure is really strange; I just modified the docs... :)

k8s-bot commented Sep 1, 2015

GCE e2e build/test passed for commit f4dc065.

thockin added the lgtm label Sep 2, 2015

thockin (Member) commented Sep 2, 2015

L G T M

sdminonne (Contributor) commented Sep 2, 2015

@thockin thanks :)

pmorie (Member) commented Sep 2, 2015

@sdminonne o/\o

sdminonne (Contributor) commented Sep 2, 2015

@pmorie thanks to you (obviously) too

brendandburns (Contributor) commented Sep 2, 2015

@sdminonne Sorry this has to be rebased...

I promise we will then immediately merge...

brendandburns (Contributor) commented Sep 2, 2015

nm, it's generated code errors, I'll do it manually.

brendandburns merged commit f4dc065 into kubernetes:master Sep 2, 2015

4 checks passed

Jenkins GCE e2e: 148 tests run, 63 skipped, 0 failed.
Shippable: builds completed. 3820/3840 tests passed.
cla/google: All necessary CLAs are signed.
continuous-integration/travis-ci/pr: The Travis CI build passed.
sdminonne (Contributor) commented Sep 2, 2015

@brendandburns many thanks!
