
Attaching service tags (versions) to network endpoints #33

Closed
kyessenov opened this issue Jan 10, 2017 · 7 comments

Comments

@kyessenov
Contributor

kyessenov commented Jan 10, 2017

The Kubernetes API provides two basic discovery methods: get all services and get all network endpoints. For Istio, we introduce the concept of a service tag (formerly called versions) to provide finer-grained routing in Envoy (e.g. partitioning service endpoints between A and B for A/B testing).

How do we represent this tag information in the Kubernetes API?
Amalgam8 uses pod labels and performs an inverse lookup from endpoint IP to pod spec (roughly the lookup sketched at the end of this post). This is a fine approach, but I'm wondering if it's sufficient. Using pod labels has drawbacks:

  • requires Deployments to be aware of these labels, or else the pod labels get overwritten
  • not clear whether we can change pod labels dynamically without restarting the pod (and without causing the ReplicaSet, or whatever created the pod, to kill it)
  • all ports for the same network endpoint carry the same tags

In the future, we want to use service tags to implement dynamic config updates without pod restarts. So perhaps an approach that uses both pod labels and some other registry would be the actual solution.
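
For concreteness, a minimal Go sketch (using client-go) of the inverse lookup mentioned above: walk a service's Endpoints object and resolve each address back to its pod's labels, treating a "version" label as the service tag. The function name serviceTagsFor and the "version" key are illustrative assumptions, not anything Amalgam8 or Istio defines.

```go
package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// serviceTagsFor maps each endpoint IP of a service to the "version" label of
// the pod backing it, i.e. the Amalgam8-style inverse lookup. (Illustrative only.)
func serviceTagsFor(ctx context.Context, client kubernetes.Interface, ns, svc string) (map[string]string, error) {
	eps, err := client.CoreV1().Endpoints(ns).Get(ctx, svc, metav1.GetOptions{})
	if err != nil {
		return nil, err
	}
	tags := map[string]string{}
	for _, subset := range eps.Subsets {
		for _, addr := range subset.Addresses {
			// TargetRef points back at the pod that owns this address.
			if addr.TargetRef == nil || addr.TargetRef.Kind != "Pod" {
				continue
			}
			pod, err := client.CoreV1().Pods(ns).Get(ctx, addr.TargetRef.Name, metav1.GetOptions{})
			if err != nil {
				return nil, err
			}
			tags[addr.IP] = pod.Labels["version"] // the service tag lives in the pod labels
		}
	}
	return tags, nil
}
```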

@rshriram
Member

rshriram commented Jan 10, 2017 via email

@kyessenov
Contributor Author

This is what I mean by 1):
Here is a typical case for a pod deployment. Let's say I'm upgrading my container image from v1 to v2. The Deployment has a spec for a pod. This spec should have a "version" field with value v1 in the first iteration and value v2 in the second. Istio should then identify two service tags, "version=v1" and "version=v2". The only way to control the service tag is through this Deployment: I cannot change an individual pod's assignment from v1 to v2 (the Deployment would kill it due to a spec change), and I cannot create a new version v3 without modifying the Deployment. So there would be tight coupling between Deployments and service tags.
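
A minimal sketch of that coupling, assuming a hypothetical "reviews" Deployment built with client-go types: the only place the service tag can live is the pod template's labels, so moving from v1 to v2 means editing the Deployment and letting it replace its pods.

```go
package sketch

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// reviewsDeployment builds a Deployment whose pod template carries the
// service tag as a "version" label. Rolling v1 -> v2 therefore requires a
// spec change on the Deployment itself; relabeling a live pod by hand would
// just make the Deployment replace it. (Names and image are illustrative.)
func reviewsDeployment(version string) *appsv1.Deployment {
	labels := map[string]string{"app": "reviews", "version": version}
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "reviews-" + version},
		Spec: appsv1.DeploymentSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				// The service tag is baked into the pod template here.
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "reviews",
						Image: "example/reviews:" + version,
					}},
				},
			},
		},
	}
}
```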

@kyessenov
Contributor Author

We are going with pod label selectors as the source of truth for the service tag information.
The reason is interoperability with Kubernetes Deployments/ReplicaSets.

There are two items left to work out:

@ayj
Contributor

ayj commented Jan 27, 2017

Forgive me if this was already resolved elsewhere, but if nothing else I wanted to follow-up for my own understanding.

Don't A/B, canary, blue/green, etc. imply the creation of new pods with a new image? Why wouldn't these pods be created with the new version label instead of modifying existing labels?

The two-Deployment method seems to be a well-established pattern, as described by canary-deployments. Any reason this pattern wouldn't work for A/B or blue/green?

@kyessenov
Contributor Author

We want to gradually roll out config versions in the future (e.g. Mixer config). It should be possible to apply a runtime config change without restarting application containers.

@ayj
Contributor

ayj commented Jan 28, 2017

Perhaps it's worth considering application (i.e. service tag) and configuration (i.e. mixer) versioning independently first?

  • For application versioning, multiple deployments seem reasonable (necessary?) since a new pod/container needs to be created anyway.

  • For configuration versioning without pod restarts, incrementally adding additional labels to already-created pods in a deployment might still work, but it would require closely monitoring pod resources and re-labeling to account for the lack of pod durability, scaling, etc. These configuration version labels would need to be added outside of the deployment spec; otherwise, as you noted, the deployment controller will kill the pod (see the sketch after this list). Alternatively, we maintain our own registry to track which pods belong to a specific configuration version, but that also requires tracking pod resources and duplicates state vs. maintaining the state in k8s itself via labels.
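
Here is a rough sketch of the second bullet, assuming config versions are written as an extra label on already-running pods, outside the Deployment's pod template. The "config-version" key and the helper name labelConfigVersion are illustrative assumptions; the sketch also ignores the re-labeling of newly scaled pods that the bullet notes would be needed.

```go
package sketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// labelConfigVersion stamps a config-version label onto every running pod of
// an app without touching the Deployment, so the controller has no reason to
// restart anything. Newly created pods (scale-ups, rescheduling) would still
// need to be watched and re-labeled separately.
func labelConfigVersion(ctx context.Context, client kubernetes.Interface, ns, app, cfg string) error {
	pods, err := client.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: "app=" + app})
	if err != nil {
		return err
	}
	patch := []byte(fmt.Sprintf(`{"metadata":{"labels":{"config-version":%q}}}`, cfg))
	for _, pod := range pods.Items {
		if _, err := client.CoreV1().Pods(ns).Patch(ctx, pod.Name,
			types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
			return err
		}
	}
	return nil
}
```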

With regards to re-labeling pods, openshift/origin#11954 seems to be referring to changing the labels on the {Deployment,Replication}Controller and not necessarily on the pods themselves. I didn't see anything that discouraged adding additional labels to pods beyond what the controller expects, but maybe I missed something?

MarkRx commented on Nov 17, 2016 (quoted from openshift/origin#11954):
I have been able to change the labels on the running pods. If, however, I change the label on the dc / replication controller, new pods continue to use the old label.

@kyessenov
Contributor Author

It's possible to update labels on live pods. I think it might cause issues with some controllers that watch the pod specs, and my impression is that it's not recommended and not well specified.
We can also use annotations instead of labels. They are more flexible and don't interfere with ReplicaSets or Deployments; annotations are designed specifically for this kind of purpose.
The Manager will keep track of both labels and annotations on all pods to associate service instances with service tags/versions.
The config ID can be inside the user-supplied spec, in case the pod version and config version need to be updated at the same time.
The real blocker is that we don't have Mixer config or a config ID yet, so we are not looking into this until we get a better understanding of how Mixer configuration works.
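
A minimal sketch of that bookkeeping, assuming the Manager merges a pod's "version" label (set via the Deployment) with a config annotation under a hypothetical key "alpha.istio.io/config" that can be updated on a live pod without upsetting its controller:

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

// serviceTags merges the version label with an optional config annotation
// into the tag set attached to this pod's endpoints. (Key names are assumptions.)
func serviceTags(pod *corev1.Pod) map[string]string {
	tags := map[string]string{}
	if v, ok := pod.Labels["version"]; ok {
		tags["version"] = v // set through the Deployment's pod template
	}
	// Annotations are ignored by ReplicaSets/Deployments, so a config ID
	// written here can change on a live pod without triggering a restart.
	if cfg, ok := pod.Annotations["alpha.istio.io/config"]; ok {
		tags["config"] = cfg
	}
	return tags
}
```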
