Separate the pod template from replicationController #170

Open
bgrant0607 opened this Issue Jun 19, 2014 · 70 comments

@bgrant0607
Member

We should separate the pod template from replicationController, to make it possible to create pods from a template without a replicationController (e.g., for cron jobs, or for deferred execution using hooks). This would also make updates cleaner.

@jbeda jbeda added the enhancement label Jun 19, 2014
@bgrant0607
Member

If we were to remove replicationController from the core apiserver into a separate service, I'd leave the pod template in the core.

@bgrant0607 bgrant0607 changed the title from Consider separating the pod template from replicationController to Separate the pod template from replicationController Jul 11, 2014
@erictune
Member

I'll take a shot at this.

@bgrant0607
@lavalamp
Please share any other thoughts on podTemplates.

Brian mentioned cron. This makes me think he wants to use podTemplates for delegation.
That is, is there some mechanism where principal A can define a /podTemplate, and then grant principal B permission to create /pods that derive from a certain /podTemplate but run as A? (I guess a replicationController is effectively "another principal" that A can delegate to?)

Will the delegation of power be part of the /podTemplate message, or will that be stored in some sideband ACL?
Does the PUT /pods method get extended to allow creating a /pod from a template instead of using the desiredState? Or is there a new non-REST method?

@lavalamp
Member

PUT /pods should only take pods, IMO. We should potentially offer something that "fills out" a pod template, but I think that should work like our current replication controller, which is an external component.

@bgrant0607
Member

The point of this proposal is to further narrow the responsibility and API of the replicationController to its bare essentials. The replicationController should just spawn new replicas.

Right now, essentially a full copy of the pod API is embedded in the replicationController API. As an external, independently versioned API, it would be challenging to keep synchronized with the core API. Additionally, with the replicationController creating pods by value rather than by reference, it needs to be delegated the authority to create ~arbitrary pods as ~arbitrary users (once we support multi-tenancy) -- this would mean it could do anything as anybody. This is even more of an issue once we introduce an auto-scaler layered on the replicationController.

It's very difficult to develop a signature-based approach, layered on a literal pod-creation API, that could be made both usable and secure. OTOH, if the replicationController could only spawn instances from templates owned by the core, then its power could be restricted. Think of this as the principle of least privilege and separation of concerns.

Including the template by reference in the replicationController API would also facilitate rollbacks to previous pod configurations. A standalone template could be used for cron and other forms of deferred execution.

Even if we were to add a more general templating/configuration-generation mechanism in the future, we can't have turtles all the way down. A pod template would be useful for spawning the config generator, among other things.

As with the current replicationController API, pods would have no relationship to the template from which they were generated other than their labels and any other provenance information we kept. Changes to the template would have no effect on pods created from it previously.

I'd be fine with a separate API endpoint for creating pods from a template, just as we have for replicationController today.

@erictune
Member

Okay, putting together above comments and my own thoughts...

For the initial PR, we just need to have a pod template type which unambiguously and completely defines a /pod. Later PRs can extend /podTemplate as needed to support authorization of delegated use.

Here are a few examples with delegation and the types of expansion of templates that might occur:
- cron service runs a /pod, but passes to the pod's environment a string identifying the datecode for this run.
- third-party auto-scaler makes more replicas of a pod, or makes pods that request more or fewer resources.
- map-reduce service makes several pods, setting environment variables that control input and output file paths.
- ABTester service makes pods with two different values for an environment variable that controls a new feature, and two different values of another environment variable that controls a tag added to the logs of these pods (e.g., experiment_27354_mode_a).

Considerations for podTemplate:

  1. Ease and succinctness of definition of a podTemplate.
  2. Ease of reasoning about the security implications of giving a user permission to instantiate pods from a podTemplate.
  3. Work with YAML as well as JSON.
  4. Allow templates to generate many different kinds of pods.

Item 4 seems much less important than 1 and 2. Therefore, this rules out a podTemplate which holds a schema definition, a JSONPath expression, or anything else that allows fully general manipulation of JSON-type data. Item 3 above further reinforces this.

Therefore, a /podTemplate will look something like this:

{ "id": "awesomePodTemplate",
  "pod": "<object exactly following /pod schema>", 
   "allowExtraEnvVars": [
     "MOTD": 
       "Today's pod brought to you by a replication controller."],
   "allowModifiedResourcesRequestsAndLimits": 1,
   "delegatedPodMakers": ["alice@example.com", "replicationcontroller@kubernetes.io"],
}

Note the specific, capability-like descriptions of allowed modifications to the /pod object.

However, the first PR will just have:

{ "id": "myPodTemplate",
  "pod": "<object exactly following /pod schema>", 
}

The /pod schema will get a new member, "actAsUser", which determines which user the pod acts as.
Initially, this will have no effect. As we add authentication (#443), the following authorization code can be added to the apiserver:

if authenticatedUser == request.pod.actAsUser { return auth.Authorized }
return auth.NotAuthorized

In later PRs, the /pod schema will be extended to have a "fromPodTemplateId" member, which references the id of the /podTemplate that the /pod is modeled on. This adds an interesting twist: we can't use the user-provided name alone to identify the /podTemplate. We need to specify which user's namespace the name lies in. Maybe "actAsUser" identifies this, or maybe we need a globally unique id for a podTemplate.

With that member added, the authorization check for creating a /pod would look like this:

if authenticatedUser == request.pod.actAsUser { return auth.Authorized }
if auth.Can(authenticatedUser, auth.MakePodsFor, request.pod.actAsUser) {
    tpl := findPodTemplate(request.fromPodTemplateId)
    if tpl != nil {
        if tpl.Generates(request.pod) {
            return auth.Authorized
        }
    }
}
return auth.NotAuthorized
@erictune
Member

Other use case: a pod's port can come from a range, to allow duplicate pods on the same host. Would this go in the template?

@lavalamp
Member

I wonder if we should maybe add an "owner" field to the JSONBase, so that all objects in the system could have an owning user. If so, no need to specifically add that field to the PodTemplate.

> In later PRs, the /pod schema will be extended to have a "fromPodTemplateId" member which references the id of the /podTemplate that this /pod is modeled on. This adds an interesting twist: we can't use the user-provided name alone to identify the /podTemplate.

This could be done with a label, which is what our current replicationController does.

I think a step that should come shortly after adding PodTemplate as a resource is changing the replication controller struct to take a podTemplateID instead of a hardcoded PodTemplate.

Port shouldn't be dynamic.

May want @brendanburns to take a look at this when he gets back.

@erictune
Member

> I wonder if we should maybe add an "owner" field to the JSONBase, so that all objects in the system could have an owning user. If so, no need to specifically add that field to the PodTemplate.

> This could be done with a label, which is what our current replicationController does.

Can a label selector select a different user's objects?

> I think a step that should come shortly after adding PodTemplate as a resource is changing the replication controller struct to take a podTemplateID instead of a hardcoded PodTemplate.

Okay, but again the namespace/user issue is unresolved.

@erictune erictune self-assigned this Jul 24, 2014
@bgrant0607
Member

Thanks, @erictune .

First of all, while security is part of the motivation for this, I'd drop all user / identity / auth / delegation stuff until we figure out auth[nz] more generally. That said, we'll want to namespace label keys by project implicitly by default to prevent conflicts and overlap across users.

Second, we should leave out most/all forms of substitution and computation more generally. A more general config mechanism is a separate issue. I was thinking of taking what replicationController supports today and moving it to a separate object, which we might want to garbage-collect after some amount of time in the case that it hasn't been used.

However, I think it's not too early to think about the override model, and whether we want one eventually, even though we wouldn't implement it initially. Env vars and resources are good examples.

It would be useful to think about how splitting out the template (and overrides) would interact with updates driven by declarative configuration. Does the replicationController change to a new template, or does one update its template? How does one update pods controlled by the replicationController? Some ideas were discussed in #492 .

Duplicate pods on the same host: We implement IP per pod, so no port allocation range is necessary.

fromPodTemplateId: The template must behave as a cookie cutter -- once a pod is created from a template, it has no relationship to the template. The template may be changed or deleted without affecting pods created from it, and pods created from it may be modified independently. We probably do want to record provenance information for debugging and/or auditing, though. It would include information like the template id, time, replication controller id (if created by one), user, etc.
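For concreteness, provenance like that might be recorded as pod annotations along these lines (the keys and values here are purely illustrative; no convention was agreed in this thread):

{
  "annotations": {
    "example.io/created-from-template": "/ns/myns/podTemplates/frontend",
    "example.io/created-by": "replicationController/frontend-rc",
    "example.io/creation-user": "alice@example.com"
  }
}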

@smarterclayton
Contributor

@bgrant0607 Can you describe config generators a bit more, as mentioned in #146? Haven't heard you mention that yet, but I suspect it matches use cases we are looking to solve as well.

@bgrant0607 bgrant0607 added this to the v1.0 milestone Aug 27, 2014
@bgrant0607
Member

Created #1007 to start the broader config discussion.

#503 contains another example that could use the pod template: job controller.

I'd like a bulk-creation operation to go with the pod template, so that a replication controller could send one operation to create N pods. This will eventually be important for performance, gang scheduling, usage analytics, etc.

@bgrant0607 bgrant0607 referenced this issue in smarterclayton/kubernetes Sep 12, 2014
@smarterclayton smarterclayton Proposal for v1beta3 API
* Separate metadata from objects
* Identify current state of objects consistently
* Introduce BoundPod(s) as distinct from Pod to represent pods
  scheduled onto a host
* Use "spec" instead of "state"
* Rename Minion -> Node
* Add UID and Annotations on Metadata
* Treat lists differently from resources
* Remove ContainerManifest
d695810
@bgrant0607
Member

@smarterclayton @erictune @lavalamp

Trying to make this concrete.

Standalone Pod Template

From #1225:

type PodTemplate struct {
    ObjectType `json:",inline" yaml:",inline"`
    Metadata   ObjectMetadata `json:"metadata,omitempty" yaml:"metadata,omitempty"`

    // Spec describes what a pod should look like.
    Spec PodSpec `json:"spec,omitempty" yaml:"spec,omitempty"`
}

It should also have a Status PodTemplateStatus, for consistency with all other API objects. I could imagine recording status data like the timestamp of the last pod created (e.g., if we wanted to put a TTL on template objects).
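As a minimal sketch of what that might look like (PodTemplateStatus is not defined anywhere in this thread, so the field here is an assumption):

// Sketch only -- a hypothetical PodTemplateStatus.
// import "time"
type PodTemplateStatus struct {
    // When a pod was most recently created from this template; nil if
    // never used (e.g., input to a TTL-based GC).
    LastPodCreationTime *time.Time `json:"lastPodCreationTime,omitempty" yaml:"lastPodCreationTime,omitempty"`
}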

There is the question of whether we want metadata like labels and annotations to come from the template, to be provided at pod-instantiation time, or both. Taking metadata from the template is easier to use and more secure. Providing metadata at instantiation time would allow more flexible template reuse. I'm going to declare that flexible template reuse is a problem for the config system, not the PodTemplate, so I recommend we take metadata from the PodTemplate.

The Metadata in the struct above is the PodTemplate's own metadata. Typically, the pods created from the template will have the same labels and at least some of the same annotations, but may have additional annotations, such as to record the template from which they were created. However, for cleanliness (and flexibility), I recommend a separate field for pod metadata: PodMetadata ObjectMetadata.

I could also foresee us adding more fields in the future, such as authorization info, TTL, etc.

Therefore, I propose a PodTemplateSpec, which includes PodMetadata, PodSpec, and whatever other desired state fields we need.

type PodTemplateSpec struct {
    // Metadata of the pods created from this template.
    Metadata   ObjectMetadata `json:"metadata,omitempty" yaml:"metadata,omitempty"`

    // Spec describes what a pod should look like.
    Spec PodSpec `json:"spec,omitempty" yaml:"spec,omitempty"`
}

type PodTemplate struct {
    ObjectType `json:",inline" yaml:",inline"`
    Metadata   ObjectMetadata `json:"metadata,omitempty" yaml:"metadata,omitempty"`

    // Spec describes the pods that will be created from this template.
    Spec PodTemplateSpec `json:"spec,omitempty" yaml:"spec,omitempty"`

    // Status represents the current information about a PodTemplate.
    Status PodTemplateStatus `json:"status,omitempty" yaml:"status,omitempty"`
}

Bulk Pod Creation

By themselves, PodTemplates don't do anything. They are there to be used. I propose to extend POST /pods with 2 URL parameters:

  1. number=<int>: The number of pods to create. When number > 1, the Name, if provided, is treated as a prefix, to which some uniquifying characters are appended for each pod. If Name is not provided, it is auto-generated according to our general approach to this (e.g., autosetName=true might be necessary).
  2. template=<reference to PodTemplate>: Take the pod's metadata and spec from the specified PodTemplate rather than from the json payload. If the specified PodTemplate doesn't exist, that's an error. The client should be able to use resourceVersion preconditions to ensure they're using a sufficiently up-to-date PodTemplate.
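For example, the two parameters might be used together like this (a sketch; the path format for the reference is one of the options discussed below):

# create 3 pods from an existing PodTemplate; pod names get uniquifying suffixes
POST /pods?number=3&template=/ns/myns/podTemplates/frontend {}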

More about the format of object references below.

Replication Controller

Currently (even in the v1beta3 proposal), ReplicationControllerSpec contains an inline Template PodTemplate. It's awkward to nest a full-fledged object in another object, so at minimum this should be PodTemplateSpec instead.

There are 3 alternative approaches to using a PodTemplate in ReplicationController:

  1. Use a POST URL parameter template=<reference to PodTemplate>, similar to pod creation, which would be copied into the PodTemplateSpec in the ReplicationControllerSpec at creation time.
  2. Replace the inline PodTemplateSpec with a reference to a PodTemplate in the ReplicationControllerSpec.
  3. Support both (i.e., one or the other of) the inline PodTemplateSpec and the reference to a PodTemplate, the former for simplicity and the latter for all the other reasons we'd like to do this. We could also support (1) in this case.

In order to produce the decoupling and security properties I was looking for when I proposed this issue, the replication controller service needs to be able to utilize the template at pod creation time rather than at the time the replication controller is created. Therefore, the ReplicationControllerSpec needs a reference to the PodTemplate. This has the disadvantage of creating a hard dependency between two objects -- the replication controller could fail if its pod template were deleted -- but we could disallow deletion of PodTemplates that were in use. We already have another creation-order dependency -- services must be created before their clients -- so that wouldn't be a new issue.

I'm tempted to recommend (3), so we could support both simple and more sophisticated use cases, but (A) I'm concerned that inline PodTemplateSpecs in ReplicationControllerSpec will create problems down the road for auth and for API refactoring and (B) kubecfg could paper over the complexity of dealing with multiple objects for now and a Real Config solution or higher-level API should be able to deal with it later.

So, I recommend (2): PodTemplate by reference only.

Inter-object references

This could (and probably should) be forked into its own issue if there's a lot of debate.

We don't currently have any cross-references between objects in our API. We just have indirect references via label selectors.

Possible options:

  1. Label selector.
  2. UID.
  3. JSON of identifying metadata: Kind, Namespace, Name.
  4. Partial object URL (e.g., path only, or path only without version).
  5. Full object URL.
  6. All of the above.
  7. Something else?

Using label selectors would require adding a unique label to facilitate unique references, which is sort of contrary to what labels are for, or a non-label tie-breaking field to select the correct one from the set. Additionally, the consistency model would be more complex -- after adding a new template, users would want to ensure that the replication controller would use it before performing an action that would cause new pods to be created, such as killing pods or increasing the replica count. This seems overly complex for only a small benefit.

UID has the problems that it isn't even indexed currently, would be hard for users to reason about, and couldn't be specified without additional communication with the apiserver and processing in the client. In particular, it would be hostile to configuration.

JSON would require another encoding for URL parameters.

Therefore, I suggest consistency with API object references from outside the system, so either (4) or (5). The reason not to use (5) is that the domain name and version are not necessarily stable (esp. if we replicate and/or self-host apiserver), so I recommend (4), path without API version. This form would be used both in URL parameters (pod creation from template) and in object fields (replication controller). This form also happens to be the most concise.
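Concretely, the same version-less partial URL would then appear in both places (illustrative paths and field name only):

# pod creation from a template (URL parameter):
POST /pods?template=/ns/myns/podTemplates/frontend {}
# replication controller (object field):
"templateRef": "/ns/myns/podTemplates/frontend"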

@smarterclayton
Contributor

Re: references, I agree with the reasoning about 1, 2, 3, and 5. I would also agree that 4 is better than the alternatives. One problem with 4 is when we rename resources (minions -> nodes).

Re: templates, some potential problems that could crop up with referenced pod templates:

  1. How does the replication controller validate that the provided pod template is safe if it can't read the template? We've got a few cases of that: the label selector of the controller needs to select the pods it creates, and RestartPolicy=always is the only allowed type. Does the replication controller need a new API endpoint on GET /pods that it can use as a validating oracle (return 1 if this attribute matches this label selector)?

  2. If the problem is replication controllers being able to create arbitrary pods, couldn't we also address the problem by having replication controller controllers (the code that creates pods) pass a reference to the controller resource (/resourceControllers/1), and have the /pods endpoint handle reading the template out of the replication controller?

    Admittedly that means that the pods endpoint has to be able to decompose a resourceControllers response, and access it, which opens the door to other forms of injection style attacks. But it could also mean that any object which has a field "Spec" with type "PodTemplateSpec" can clone pods (assuming that we have a general solution for doing a reference -> endpoint GET, which I'm not positive we're at yet).

@smarterclayton smarterclayton referenced this issue in lavalamp/kubernetes Sep 24, 2014
@lavalamp lavalamp Add new Event type; replaces previous Event type, which is too limited.
Remove writing of old event type.
ada1bc0
@bgrant0607
Member

I didn't intend that the replication controller couldn't read the template. I intended the reference to be valid across API version changes and apiserver relocations.

@smarterclayton
Contributor

Ah, got confused on that.

I'm really leery of 3, although I accept the complexity-for-auth argument. As for client behavior: anyone who wants to use replication controllers or job controllers in any form would have to deal with the complexity, as will anyone who builds on top of replication controllers. It means we're adding to the ordering problems for config rather than keeping them limited (accepting that there will be Real Config in the future, I still might be willing to do a partial step towards pod templates rather than introduce ordering).

Some questions:

  • Do pod templates need to be immutable?
  • Creating a replication controller is a declaration of intent (I want things that look like this). If templates are not immutable, doesn't introducing the reference relationship result in race/coupling problems that subvert or weaken my intent? I.e., changing a pod template can transparently affect other resources in the system in non-obvious ways.

If pod templates should be immutable, then can't we infer a template automatically via a hashing process on the template (either storing it under its hash identifier internally, or by reference, where authorization is attached to hashed templates that actually exist in the system)? In that case the replication controller is constrained to creating a pod whose hash matches a resource in the system. That would probably still require the code that implements the API objects that utilize templates to register a template, but it could be done transparently to users.

@sym3tri
Contributor
sym3tri commented Sep 24, 2014
> Replace the inline PodTemplateSpec with a reference to a PodTemplate in the ReplicationControllerSpec.

+1

> Partial object URL (e.g., path only, or path only without version).

+1

@bgrant0607
Member

I assume the (3) you mean is supporting both inline templates and template references.

What we're designing here is the lowest-level primitives. We're going to want higher-level layers built on top of them. IMO, replication controller is already one of these higher-level layers, albeit an essential one. The idioms in kubecfg -- stop, resize, run, rollingupdate -- further demonstrate that we'll need to build higher-level layers in order to make the system usable by end users. For example, I could imagine something like Asgard managing replication controllers, auto-scalers, services, canaries, rolling updates, etc. I imagine that OpenShift will similarly manage the underlying resources. I wouldn't solve this problem using static configuration or fat client libraries.

We do already have ordering issues -- services must be created before the pods that connect to them. I agree we don't want to impose lots of ordering constraints, but some will be unavoidable.

Pod templates don't need to be immutable, though some people may use them that way. Immutable templates increase complexity for the configuration system and for client libraries by imposing a different object update protocol and requiring automatic name generation for each change. Object lifecycle can be a challenge with immutable templates, also.

I do see your point about races, however. The current rolling update procedure of updating the template and then killing pods one by one could suffer from that problem. I dislike this procedure, anyway, as pods could be updated unpredictably in the event of pod replacements due to host failure. However, if the replication controller were watching events, I believe it should see the update to the pod template before the termination of the pods.

Pod templates are like cookie cutters. Once a cookie has been cut, the cookie has no relationship to the cutter. No quantum entanglement. This approach radically simplifies system semantics and increases the flexibility of the primitives. Rolling updates already need to assess consistency with the new desired state and post-update service liveness, readiness, and, ideally, happiness (performance, errors, etc.).

@smarterclayton
Contributor

OK, I'm provisionally OK with (2), with the desire to maybe do (3) in the future. Structurally this means we need to define either PodTemplateReference or ComponentReference in v1beta3 to make the change. Will adapt lavalamp's component reference from Events into that proposal.

@erictune erictune removed their assignment Sep 29, 2014
@bgrant0607 bgrant0607 modified the milestone: v0.8, v1.0 Oct 4, 2014
@smarterclayton smarterclayton self-assigned this Nov 6, 2014
@bgrant0607
Member

Assuming we still want to keep v1beta1/2 operational, I see the following options:

  1. Use separate internal API objects for v1beta1/2 and v1beta3, so both API versions would work, but a user couldn't create a replication controller using one version and get it using another.
  2. Support both inline and separate pod templates in the internal API ReplicationController object, but only surface the former in v1beta1/2 and the latter in v1beta3.
  3. Fully convert inline pod templates to separate objects for v1beta1/2.

I'd be fine with the first option, but the third option isn't as bad as one might think. kubectl describe will need the ability to pull together the controller and template specs. We'll likely (optionally) want to be able to delete the template when deleting the controller, too (in line with #1535). Of course, we'd need to work through the registry and other API logic in more detail.

@smarterclayton
Contributor

> Assuming we still want to keep v1beta1/2 operational, I see the following options:
>
> 1. Use separate internal API objects for v1beta1/2 and v1beta3, so both API versions would work, but a user couldn't create a replication controller using one version and get it using another.
> 2. Support both inline and separate pod templates in the internal API ReplicationController object, but only surface the former in v1beta1/2 and the latter in v1beta3.

I was going to do it this way to start - old objects will continue to store their template, new ones will read their ref. We want in some places to validate references at runtime, so we can get that for free. It also makes it easier if we resolve the ref under the storage / client layer and let callers not have to worry about the difference.

Will see how it plays out.

@lavalamp
Member
lavalamp commented Nov 6, 2014

PodTemplate should be a very simple object for apiserver-- should be able to use the generic rest thing I did for events directly.

@smarterclayton
Contributor

Yeah, thanks for reminding me about that.


@smarterclayton
Contributor

After digging through a lot of this, I'm starting to feel like we should allow setting the template as either a reference or an embedded object in v1beta3 on replication controllers (as opposed to just a reference) - basically option 3 from #170 (comment)

  1. It's less impactful for clients in the short term to make the transition.
  2. We have to support a variant of it internally until v1beta1 and v1beta2 are dropped (and we migrate people's storage off of them).
  3. The auth concerns are real, but we don't yet have an auth solution in place to handle pod templates securely (we'd need some variant of robots + metadata on the pod template).

As a transitional step, if we supported the following attributes in v1beta3:

type ReplicationControllerSpec struct {
  ...
  Template ObjectReference
  TemplateSpec *PodTemplateSpec 
}

we could eventually drop TemplateSpec in v1. Specifying both would be prohibited via validation.
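A minimal sketch of that validation, assuming the struct above (function name and error messages are mine, not the real validator):

// import "errors"
func validateTemplateFields(spec *ReplicationControllerSpec) error {
    hasRef := spec.Template.Name != ""    // reference to a separate PodTemplate
    hasInline := spec.TemplateSpec != nil // inline template
    if hasRef && hasInline {
        return errors.New("only one of template and templateSpec may be specified")
    }
    if !hasRef && !hasInline {
        return errors.New("one of template or templateSpec is required")
    }
    return nil
}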

@bgrant0607
Member

SGTM

@bgrant0607
Member

FYI/FWIW, GCE finally added instance templates:
https://cloud.google.com/compute/docs/reference/latest/instanceTemplates

@smarterclayton
Contributor

Will start some of this next week. I anticipate the changes to the RESTStorage interface will be the most impactful (to allow additional data to flow through that API), and I want to have a discussion on how we want RESTStorage to evolve w.r.t. the generic registry and ideal swagger support.

@smarterclayton
Contributor

EDIT: changed somewhat to remove obvious conclusions

In order to make pod templates work in our current context, we need to discuss how references will work between RCs and PTs. In the short term, we can choose, as a simplifying assumption, that no cross-namespace reference is allowed between RCs and PTs.

The original proposal called out attaching policy (effectively) to a pod template in terms of who can use that PT. It hasn't really been reconciled with the evolution of the authz/n proposals (the idea that within a namespace, there could be even more granular policy). The problem with that policy is that it exists at the top of the API stack, not the bottom, so if the Pod API wishes to enforce "can the user who is invoking this action see this pod via policy", it has to go back up to the top of the API stack (authn -> authz -> route -> rest handler) and back down, OR it has to have access to directly invoke the policy (potentially bypassing additional checks in the API stack).

Choice 1:

  • Make access to create a pod from a pod template a decision of policy, not the pod template
  • Make access to create a pod from a pod template a decision of the pod template, not the policy
  • Both
  • Neither

Ancillary question: is policy an API endpoint that internal API components should be entitled to call in order to check permissions, or should our APIs make calls via a client interface that potentially go out through the stack?

Suggestion: in the short term, we can choose not to impose any access-control checks within the namespace, and not to define any permission behavior on the pod template itself, which will probably allow us to implement the important parts of the proposal.

Choice 2:

The next issue is how to indicate to the pods POST API that you wish to create a pod from a pod template (instead of from a literal pod). The original goal of this proposal was to remove the need for an end user to be able to view a pod template in order to create a pod from it, so the naive step (retrieve pod template, post to pod endpoint) doesn't help much.

Passing a specific query parameter was suggested as the default. Another option is to expose an endpoint specifically for creating pods from templates by reference - either as a resource (POST /podFromPodTemplates), a sub-URL (POST /pods/<something>), or by allowing someone to POST a pod template reference object directly to /pods. The original proposal called for additional options to be available at instantiation time (addtl. env vars) - I think it's important to note that if we are going to be passing structured data to the API as arguments, it's much better to craft a specific resource endpoint that can be independently versioned in order to achieve that (otherwise you must version every parameter). It also enables RESTful API documentation (like swagger) to clearly denote what the action (create) means for the resource (the template reference).

# query w/ example of complex struct
POST /pods?template=<name> {}
POST /pods?template=<name>&envvar.1.name=foo&envvar.2.name=bar {}

# separate object type
POST /podFromPodTemplates {
  "kind": "PodTemplateInstance",
  "metadata": {
    "name": "<name of pod template>",
    "namespace": "<implicit>",
  },
  "spec": {
    // params to parameterize the template instance
  }
}

# separate object type, same endpoint
POST /pods {
  "kind": "PodTemplateInstance",
  "metadata": {
    "name": "<name of pod template>",
    "namespace": "<implicit>",
  },
  "spec": {
    // params to parameterize the template instance
  }
}
@bgrant0607
Member

@smarterclayton Still digesting your comment (found it changed since the email update). Short summary: yes, for now we should constrain visibility to within the same namespace and punt on finer-grained policies.

@bgrant0607
Member

Vaguely, the policy I'd like to be able to express is: allow the replication controller manager (running as some particular user/namespace) the ability to create pods using the pod templates matching this label selector (perhaps matching all pod templates by default).
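For illustration only -- no such policy schema existed at this point -- that rule might be written roughly as:

{
  "user": "replication-controller-manager",
  "verb": "instantiate",
  "resource": "podTemplates",
  "selector": ""   // empty selector: all pod templates in the namespace
}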

@bgrant0607
Member

Creation of the pod template should be handled as a regular REST resource. I think there are 2 options that make sense:

  1. POST /podtemplates.
  2. POST /pods, with a field in the pod indicating that it should not be scheduled (say, suspend) -- we want that functionality for other reasons.

(2) is appealing as a general pattern. In that case, it suggests to me a special verb on the object: POST /pods/footemplate/clone?suspend=false.

This would make pod templates not a special thing, which I like.

@bgrant0607
Member

ACK comment on URL parameters. Another motivation for a general solution to PATCH/PATCH-like behavior.

@bgrant0607
Member

I also want to finalize the form of cross-object references. I'm thinking more and more that we should use partial URLs, not the existing ObjectReference type. They're more RESTful, more meta-programmable, more forward-compatible with API changes, and more consistent with other API features/proposals, such as polymorphic verbs like resize (and clone). This is perhaps a little dangerous, since it presupposes that deep introspection of the URL structure is not required.

@smarterclayton
Contributor

One reason why URLs are more painful is if you want a template to have namespace-local references. With ObjectReference it's:

Object {
  Name: "something"
  // no namespace set
}

With a URL it's

/api/v1beta1/ns/???/type/name

URLs don't seem to devolve to local references gracefully. This can be solved by a namespace rescoper, but the object certainly doesn't look normal (URLs are effectively opaque outside of your codebase).


@bgrant0607
Member

The same applies to version -- I'd assume by default it would make sense to use the same API version. Would URL suffixes work?

/api/v1beta3/ns/myns/pods/foo
/ns/myns/pods/foo
/pods/foo

This definitely breaks down URL opacity, but maybe that's OK, at least for URLs with recognized structure. We could also do a DNS-like resolution (at least conceptually), where we first try the URL, then prepend the namespace, then prepend the API version.
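A sketch of that resolution order in code (the helper is hypothetical; it just distinguishes the three suffix forms above):

// import "strings"
func resolveRef(ref, namespace, version string) string {
    switch {
    case strings.HasPrefix(ref, "/api/"):
        return ref // already fully qualified
    case strings.HasPrefix(ref, "/ns/"):
        return "/api/" + version + ref // prepend the API version
    default:
        return "/api/" + version + "/ns/" + namespace + ref // prepend both
    }
}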

@bgrant0607
Member

I'd be fine with sticking with ObjectReference for now in order to get more experience with it. We could potentially add a Link field to it in the future to experiment with the URL approach, also.

@smarterclayton
Contributor

I feel like URLs are more often the thing an API returns vs. a thing a user specifies (for precisely this reason). APIVersion is interesting because it's going to change, so you don't want to store it internally (so you don't have to migrate your Foos when all their BarRefs drop support for v1betaX). Every enterprise system I've seen that stored URLs ended up having to maintain a URL mapping table to handle server renames as well (not that we would store those URLs).

Maybe we should focus on refs as URIs in effective status to simplify reading (traversing):

{
  "kind": "ReplicationController",
  "spec": {
    "templateRef": {
      "name": "foo"
    }
  },
  "status": {
    "templateLink": "https://server.com:9080/api/v1beta1/ns/myns/pods/foo"
  }
}

Clients can either apply the templateRef arguments themselves or follow the templateLink. That's pretty HATEOAS to me.


@bgrant0607
Member

I like it.

@bgrant0607
Member

Note that the clone approach would make it easy to do 2/3 of what I described here: https://github.com/GoogleCloudPlatform/kubernetes/pull/3233/files#r22573202

Creating objects from other objects: pods from pods, replication controllers from pods.

@smarterclayton
Contributor

Regarding 'clone' and #170 (comment): if we still wanted to have access-control limitations on who can create from a pod template, that would be more difficult (the pod would have to carry that field, as opposed to a separate pod template object). I assume the security consideration is still important, although we have not talked about it in a while. Having it be a separate resource, 'podtemplates', with a distinct verb for instantiate is the easiest to secure in the current model (because you can limit the replication controller manager to reading pod templates and using the 'clone' / 'instantiate' verb or separate resource name).

@bgrant0607
Member

Do we plan to put the access control rules in the objects themselves? I would have thought that would be separate ABAC rules.

@smarterclayton
Contributor

It was in the original proposal; however, I think our discussions have led us away from it. It does encourage us to model pod creation via template in a way that can be distinguished clearly from POST /pods.


@goltermann goltermann removed this from the v0.8 milestone Feb 6, 2015
@bgrant0607 bgrant0607 removed this from the v0.8 milestone Feb 6, 2015
@bgrant0607
Member

Continuing from #2726 (comment):

Using Content-Location is an interesting idea. Would we need an explicit clone resource, then? One could just POST with Content-Location pointed at an existing object. I guess if we want to be able to override any fields (which is tricky security-wise), we'd need somewhere to specify that.

@smarterclayton
Contributor

Yeah - I think that's interesting. I like that better than needing a resource in the short term - at worst we come along later and create a resource. Will have to think about how we apply ABAC to that.


@bgrant0607
Member

More concretely, I think this would work by allowing creation of inert objects using either a metadata or spec field, which would default to enabled (that is, normal, active objects). This bit would be flippable via the control subresource discussed in #2726. The object would have to set generateName. The replication controller would then create new pods by POSTing to /pods with Content-Location, and would have to then enable the new pods via the control subresource.
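Roughly, the creation step might look like this (an illustrative request, not a settled design):

# clone an existing inert pod; the new pod starts inert and is then
# enabled via the control subresource
POST /pods
Content-Location: /ns/myns/pods/mypodtemplate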

I agree that applying abac to that sounds tricky.

My understanding of the alternative:

POST to a subresource, say /pods/mypodtemplate/clone. The response would contain a Location header with the path of the new resource (shouldn't we do that for all creations via POST? -- if so, we should add that to #2031). Creating the inert pod to clone would work as above. This approach has some advantages:

  1. We have a place to hang field overrides. Initially, we could use that to auto-enable the new pods.
  2. It's easier to imagine how to write ABAC rules, since there's a distinct path.
@smarterclayton
Contributor

> I agree that applying abac to that sounds tricky.

Let me think about the ABAC implications more. The inert bit makes a lot of sense in general.

> The response would contain a Location header with the path of the new resource (shouldn't we do that for all creations via POST? -- if so, we should add that to #2031).

We should be setting the Location header, but we don't necessarily have to do the redirect. Most people end up returning the result body of the POST because they have the object handy (the round trip to the DB or store usually returns the right state), so it's just a default optimization.

> 1. We have a place to hang field overrides. Initially, we could use that to auto-enable the new pods.

Right - when I said another resource earlier, I was imagining posting Content-Location to /clone, and we wouldn't need (on day one) to have a real resource to post. On the other hand, creating a real resource gives you a schema that shows up in the API. We could reserve Content-Location on the base resource as a shortcut way to do clone. I think eventually field overrides will become important on clone, so best to plan for that outcome.

@smarterclayton
Contributor

One problem with using inert pods as templates is that they disappear into the noise (if I'm using pods at scale I might have 10k pods and 2 inert pods), and to separate them out for consumers you need to select a small subset of the pods (the 2). I think "what pod templates do I have?" is not an uncommon question.

I'm not arguing inert or clone shouldn't exist, but if we do believe the data structure for a pod template may support other options in the future, why not model it that way now, and introduce an action that does a transformation?

Example:

POST /podtemplates {metadata: {}, spec: {template: {}, overrides: {}}}
# creates a new template

POST /podtemplates/foo/instantiate {overrides:{}} -> {/* pod */}
# instantiates a pod from the template
GET /podtemplates/foo/instantiate
# dry run for instantiate, although if you need to see the effect of overrides you'd need to do it on POST and add a param

Clone operates on an existing resource, instantiate converts a template resource into its analogue. Clone is always an operation on self, but instantiate is an adapter. I assume the two can coexist. If we have inert pods, then eventually people can use them like templates, but they will never have an alternate schema (can't easily define the set of allowed env overrides, for example).

So it seems it's ABAC convenience now (new type) vs. not having to create a new resource type. It's probably slightly easier to create pod templates for 1.0 (we don't have to carry inert implications throughout the system), but it assumes pod templates are a fundamental thing distinct from pods.


@bgrant0607
Member

I think that the signal-vs.-noise problem could be mitigated pretty easily with field filtering (we'd need to introduce a new PodPhase, such as Inert or Disabled or somesuch) and/or label selection (using a by-convention template label).

If we wanted to clearly differentiate cloning and activation, we could create a custom operation to do that regardless of whether the object was a pod or pod template: POST /pods/footemplate/instantiate (or maybe spawn).

I think inert+clone has a natural synergy with finalization #3585: finalizers could fill in an inert object, after which it would either be activated or cloned.

I also think it enables pretty nice kick-the-tires scenarios, such as launching a pod, then creating a replication controller + template from it, though that could be enabled by client-side transformations, as well.

Admission control would have to run upon activation/clone, but that doesn't seem worse than invoking it on, say, updates to resource quantities.

I'm attracted to the elegance and flexibility of the inert+clone approach (similar to label selection vs. enclosing array of pods), but I'm willing to try one, the other, or both. Either would be better than embedding the template object in every controller, and it's admittedly hard to be certain which approach will be more natural to other people until they have something to play with.

@smarterclayton
Contributor

Do you see inert as generic to all resources (part of metadata, metadata.inert) or specific to some resources that expose it generically (part of spec, spec.inert)?

I have a hard time imagining that every resource can be inert, but it certainly seems common to all of the current Kube resources, and also to most of the OpenShift resources I can think of (deployments, routes, policies).

Is inert only an initial state? For pods it seems yes, implying it cannot be toggled (since it's part of pod phase, which is one-way). If someone wants a toggleable field, I think that should be something else.


@smarterclayton
Contributor

To summarize the set of "copy a resource" use cases:

  • copy a resource to the local file system to recreate later
    1. GET
    2. POST (later)
  • revert an existing object to its older state
    1. GET
    2. copy resource version in client, overwrite spec
    3. PUT
  • copy a resource across namespaces
    1. GET
    2. change namespace field in client
    3. POST
  • copy a resource within a namespace
    1. GET
    2. change name field in client
    3. POST
  • create a resource that can be copied by other clients
    1. POST with inert and generateName
  • copy a previously templated resource
    1. POST /resource/clone
    2. (Resets inert)
  • convert a resource into another type via a transformation
    1. POST /resource/(spawn|instantiate) with resource representing transformation input

Agree on inert and finalizers.


@davidopp davidopp added the team/master label Feb 7, 2015
@bgrant0607
Member

I do see inert as being generic, common to all objects. It facilitates use of the system as a configuration store and message bus, as well as facilitating finalizers.

I do see it as an initial or indefinite (for templates) state only.

@bgrant0607
Member

The list of "copy" scenarios is useful.

I see the main value of clone as having special-case authorization rules, as opposed to an open-ended creation operation. Limited mutations, such as enabling the object, would be for convenience.

As for conversion/typecasting, the primary examples we have are:

  1. Replication controller from pod. Useful for kick-the-tires and bootstrapping.
  2. Service from replication controller (expose).

If we could easily convert a pod into a template we could do the first one. The second just requires extracting the labels or selector. I'm willing to rely on the client for these 2 cases for now.

@ghodss
Member
ghodss commented Feb 28, 2015

I will just add that, while I do not have an opinion on whether inert is useful, a PodTemplate is a much easier concept for new users to grasp from a UX perspective. Telling new users "okay, first, create a pod, but it's kind of a fake pod; now create a replication controller that's going to create real pods from your fake pod" is really confusing. "Create a template, then create a replication controller and point it at that template to create pods" is WAY easier to understand and is a much better onboarding experience. Also, kubectl get pods shouldn't return templates, but it probably should return "inert" pods that are fulfilling other use cases -- though I know less about that.

I also don't mind having the option of pointing a replication controller at a pod to clone it, but I do think it's more confusing and should be an advanced use case if it exists at all.

@smarterclayton
Contributor

Mirrors my thinking right now. We can always alter pod templates in the future to point to inert pods via a migration.


@smarterclayton
Contributor

I started with pod templates as a separate resource. My reasoning was mostly short term:

  • Dividing pod templates into a separate resource right now is much easier to explain to end users (it's a template for a pod).
  • To properly handle inert pods, a good client would ideally present them differently (sorted, separated). Most clients that want to show a runtime view would filter them out. They tend to disappear from any normal UI as the number of pods grows.
  • Templates can allow spec updates with a single set of rules - pods would need different update rules for inert vs. not (which means swagger can't easily express them).
  • Inert requires slightly more changes to the rest of the codebase, such as replication controllers and services omitting inert pods, and the scheduler waiting for a pod to stop being inert.
  • Templates can expose other metadata beyond what pods expose.

I don't think starting with pod templates is incompatible with future introduction of inert pods - RCs should be able to target both pods and pod templates equally (they have to expose a clone/instantiate verb). In the future, it should be possible to migrate pod templates to inert pods, or to expose inert pods as pod templates. It does seem like we need more handholding and clear delineation right now.

@bgrant0607
Member

I'm fine with pod templates.

@smarterclayton
Contributor

I'd like templates to be able to be marked as immutable, which allows a client to communicate that a particular template should not be modified at runtime. The server would reject updates to such a template. This would allow (for instance) an automated deployment tool to create an RC that cannot be tweaked, and force other clients either to delete and recreate the template (forcing the uid to change) or to create a new template and point the RC at it.


@bgrant0607
Member

We have some immutable templates internally. Some considerations:

  • As with rolling updates themselves, declarative updates to such templates will require special handling, since the objects will need to be re-created rather than updated. We'll want a way to specify the default update approach declaratively, with some default config in kubectl that maps kinds to update strategies.
  • A new name needs to be generated for each new "version" of the object, but it has to be deterministic rather than random. Hashing the object, or some subset of it, works (see the sketch after this list), but can create new objects in unexpected scenarios, such as when the generator itself changes what it does. Also, after defaulting, the hash of the created object won't match the original hash, fwiw.
  • We'll likely also want to prevent "in use" objects from being deleted (e.g., the template currently used by a replication controller can't be deleted).
  • Since clients wouldn't be able to delete objects when they want to, we'll need to be able to automatically GC them once they are no longer used. Maybe this could be a 2-stage graceful termination example: put the object into "shutting down" mode, then put on a deletion TTL once it is no longer needed.
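A sketch of the deterministic-naming idea from the second bullet (the function name and hash choice are assumptions, not a settled design):

// imports: "encoding/json", "fmt", "hash/fnv"
func hashedTemplateName(base string, spec PodTemplateSpec) (string, error) {
    data, err := json.Marshal(spec) // struct fields serialize in a fixed order
    if err != nil {
        return "", err
    }
    h := fnv.New32a()
    h.Write(data)
    // Caveat from above: defaulting mutates the object, so the stored
    // object's hash won't match a hash of the original input.
    return fmt.Sprintf("%s-%x", base, h.Sum32()), nil
}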
@smarterclayton
Contributor

> • We'll likely also want to prevent "in use" objects from being deleted (e.g., the template currently used by a replication controller can't be deleted).

Ugh, hadn't thought about that. Yuck :)
@bgrant0607
Member

Well, we don't have to prevent in-use deletion, or we could implement a best-effort check. It's similar to preventing replication controller overlap.

Assuming we don't 100% prevent in-use deletion, replication controller would need to do something reasonable, like report that it's broken in its status somehow.

@nikhiljindal
Member

/sub

Current status (correct me if I am wrong):

  • PodTemplate exists as a separate resource.
  • APIServer exposes POST, GET, DELETE, etc. operations on /podtemplates.
  • PodTemplateSpec is currently inlined in ReplicationControllerSpec.

Remaining Work:

  • Allow ReplicationControllerSpec to include a reference to PodTemplate.
  • Add podtemplate support to kubectl
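Once that lands, the kubectl interaction would presumably be the generic resource verbs (assumed invocations and file name):

kubectl create -f my-pod-template.yaml
kubectl get podtemplates
kubectl delete podtemplate my-pod-template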
@JeanMertz

@nikhiljindal is your "current status" still correct?

I.e., I'd like to separate the RC template from the Pod template, but am unsure if this is even possible yet. This issue is still open, so I guess there's still something left to do?

@nikhiljindal
Member

Yes @JeanMertz the above current status and remaining work are still true.

@bgrant0607
Member

It looks like we're never going to do this. Security concerns will be addressed another way.

Pods created will need to match other policies (e.g., PodSecurityPolicy), but checking whether a pod "matches" a template is hard to do in general.

@bgrant0607 bgrant0607 closed this Apr 27, 2016
@smarterclayton
Contributor

Sad.


@itajaja
itajaja commented Aug 22, 2016

so it appears I can create pod templates, but if I cannot use them in pods, what's their use?

@bgrant0607 bgrant0607 reopened this Dec 2, 2016
@kargakis
Member

> so it appears I can create pod templates, but if I cannot use them in pods, what's their use?

DaemonSets and StatefulSets may use PodTemplates as a way of retaining history, thus enabling rollbacks.

@bgrant0607 bgrant0607 added the triaged label Mar 9, 2017