Attaching service tags (versions) to network endpoints #33
Can you elaborate a bit more on the first drawback? There is a nomenclature gap here that I would like to bridge, so as to understand the problem fully.
cc @elevran @zcahana @zachidan
On Mon, Jan 9, 2017 at 9:13 PM, Kuat wrote:
Kubernetes API provides two basic discovery methods: get all services and
get all network endpoints. For Istio, we introduce the concept of a service
tag (also called a version) to provide finer-grained routing in Envoy (e.g.
partitioning service endpoints between A and B for A/B testing).
How do we represent this tag information in the Kubernetes API?
Amalgam8 uses pod labels and performs an inverse lookup from endpoint IP
to pod spec. This is a fine approach, but I'm wondering if that's
sufficient. Using pod labels has drawbacks:
- requires deployments to be aware of these labels, or pod labels get
overwritten
- not clear if we can change pod labels dynamically without restarting
the pod (and without causing ReplicaSets, or whatever created the pod, to kill it)
- all ports for the same network endpoint carry the same tags
--
~shriram
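The Amalgam8-style inverse lookup described in the quoted message (endpoint IP back to pod spec, then pod labels as service tags) can be sketched as follows. This is an illustrative model only: the pod data, the `version` label, and the `tags_for_endpoint` helper are stand-ins for what a real Kubernetes API client would return, not actual API calls.

```python
# Hypothetical sketch of the inverse lookup: endpoint IP -> pod spec -> labels.
# Pod records are hard-coded stand-ins for the Kubernetes API; the "version"
# label name is illustrative, not a decided convention.

PODS = [
    {"name": "reviews-v1-abc", "pod_ip": "10.0.0.11",
     "labels": {"app": "reviews", "version": "v1"}},
    {"name": "reviews-v2-def", "pod_ip": "10.0.0.12",
     "labels": {"app": "reviews", "version": "v2"}},
]

def tags_for_endpoint(ip, pods=PODS):
    """Resolve a network endpoint's IP to its pod's labels (service tags)."""
    for pod in pods:
        if pod["pod_ip"] == ip:
            return pod["labels"]
    return {}  # unknown endpoint: no tags recoverable

print(tags_for_endpoint("10.0.0.12"))  # {'app': 'reviews', 'version': 'v2'}
```

Note the third drawback above is visible even in this sketch: the lookup is keyed by IP alone, so every port on the same endpoint necessarily resolves to the same tags.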
This is what I mean by 1):
We are going with pod label selectors as the source of truth for the service tag information. There are two items left to work out:
Forgive me if this was already resolved elsewhere, but if nothing else I wanted to follow up for my own understanding. Doesn't A/B, canary, blue/green, etc. imply the creation of a new pod with a new image? Why wouldn't these pods be created with the new version label, rather than modifying labels on existing pods? The two-deployment method seems to be a well-established pattern, as described by canary-deployments. Any reason this pattern wouldn't work for A/B or blue/green?
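The two-deployment pattern mentioned above can be modeled minimally: two deployments carry distinct version labels while the service selector matches only the shared `app` label, so both versions become endpoints of the same service and a router (e.g. Envoy) can partition them by version. The names and label keys below are illustrative assumptions, not real manifests.

```python
# Minimal model of the two-deployment pattern: new pods carry the new
# version label; no live pod is ever relabeled. Labels/names are examples.

service_selector = {"app": "reviews"}  # selects on "app" only, not "version"

deployments = {
    "reviews-v1": {"app": "reviews", "version": "v1"},
    "reviews-v2": {"app": "reviews", "version": "v2"},  # new pods, new label
}

def selected(selector, labels):
    """A selector matches when all its key/value pairs appear in the labels."""
    return all(labels.get(k) == v for k, v in selector.items())

# Both versions are endpoints of the same service...
in_service = [name for name, labels in deployments.items()
              if selected(service_selector, labels)]
print(in_service)  # ['reviews-v1', 'reviews-v2']

# ...while the router can still partition endpoints by the version label.
by_version = {labels["version"]: name for name, labels in deployments.items()}
print(by_version)  # {'v1': 'reviews-v1', 'v2': 'reviews-v2'}
```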
We want to gradually roll out config versions in the future (e.g. Mixer config). It should be possible to apply a runtime config change without restarting application containers.
Perhaps it's worth considering application (i.e. service tag) and configuration (i.e. Mixer) versioning independently first?
With regards to re-labeling pods, openshift/origin#11954 seems to be referring to changing the labels on the {Deployment,Replication}Controller and not necessarily on the pods themselves. I didn't see anything that discouraged adding additional labels to pods beyond what the controller expects, but maybe I missed something?
It's possible to update labels on live pods. I think it might cause issues with some controllers that watch pod specs, and my impression is that it's not recommended and not well specified.
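The controller issue alluded to here can be made concrete with a toy model (an assumption about controller behavior, not real controller code): a ReplicaSet manages exactly the pods matching its selector, so mutating a selected label on a live pod "orphans" it, and the controller creates a replacement to restore the replica count. Only labels outside every controller's selector are safe to change, which is part of why the behavior feels under-specified.

```python
# Toy model of why relabeling live pods is risky. The selector here
# includes "version" purely for illustration.

def matches(selector, labels):
    return all(labels.get(k) == v for k, v in selector.items())

replicaset = {"selector": {"app": "reviews", "version": "v1"}, "replicas": 1}
pod_labels = {"app": "reviews", "version": "v1"}

assert matches(replicaset["selector"], pod_labels)  # pod is managed

pod_labels["version"] = "v2"  # dynamic relabel of a live pod

orphaned = not matches(replicaset["selector"], pod_labels)
print(orphaned)  # True: the ReplicaSet now sees 0/1 replicas and adds a pod
```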
In the future, we want to use service tags to implement dynamic config updates without pod restarts. So perhaps an approach that uses both pod labels and some other registry would be the actual solution.
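The hybrid approach suggested above could look roughly like this: pod labels supply the static baseline tags, while a separate registry (here just a dict; in practice it might be a ConfigMap or an external store, which is an assumption rather than a decided design) overrides tags per endpoint at runtime without touching the pods.

```python
# Hedged sketch of the hybrid idea: pod labels as the static source of
# truth, with a dynamic tag registry layered on top. All names are examples.

pod_labels = {"10.0.0.11": {"app": "reviews", "version": "v1"}}
tag_registry = {}  # dynamic overrides keyed by endpoint IP

def effective_tags(ip):
    """Merge static pod labels with runtime overrides; the registry wins."""
    tags = dict(pod_labels.get(ip, {}))
    tags.update(tag_registry.get(ip, {}))
    return tags

print(effective_tags("10.0.0.11"))  # {'app': 'reviews', 'version': 'v1'}

# Runtime tag change: no pod restart, no label mutation.
tag_registry["10.0.0.11"] = {"config": "canary"}
print(effective_tags("10.0.0.11"))
```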