
Naming services and service versions in Kubernetes #13

Closed
kyessenov opened this issue Dec 15, 2016 · 15 comments

@kyessenov
Contributor

kyessenov commented Dec 15, 2016

The routing rules (see #12) will refer to multiple versions of the same logical service.
If the logical service A has versions "v1" and "v2", we will need to represent both versions as Kubernetes services.

There are two approaches:
(a) The natural approach is to identify Kubernetes service A with Istio service A. Then each pod in service A is identical and runs the same proxy container. The only way to differentiate between them is to register them in the Manager and assign pod IPs to versions "v1" and "v2" there. This approach does not work if the service container differs between "v1" and "v2".
CORRECTION: we can deal with this by binding the version to the pod template in the Deployment (sketched below).
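
A minimal sketch of the corrected approach (a): each version gets its own Deployment whose pod template carries a version label, while a single Kubernetes service selects only on the shared app label. The "version" label key, names, and image below are assumptions for illustration, not a settled convention.

```yaml
# Hypothetical Deployment for version "v1" of logical service A.
apiVersion: extensions/v1beta1   # Deployment API group as of late 2016
kind: Deployment
metadata:
  name: a-v1
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: a        # shared by all versions; the Service selects on this
        version: v1   # distinguishes this track from "v2"
    spec:
      containers:
      - name: a
        image: example/a:v1   # placeholder image
```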

(b) The other approach is to instantiate two Kubernetes services "A-v1" and "A-v2". Then the name encodes both the logical service name and version. Due to restrictions in the naming scheme, "A-v1" is likely to be the choice here.
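
For contrast, a sketch of approach (b), where each version gets its own Kubernetes service and the (lowercased) name encodes both the logical service and the version; names are again illustrative:

```yaml
# Hypothetical per-version Service under approach (b).
apiVersion: v1
kind: Service
metadata:
  name: a-v1            # encodes logical service "a" and version "v1"
spec:
  selector:
    app: a
    version: v1         # only pods of this version back the service
  ports:
  - port: 80
```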

@rshriram any insights on this question?
@mjog how do we map "subjects" in the rules to the concrete service and pod name?

@mandarjog
Contributor

For a better canary deployment than what the Deployment controller provides, we still follow the same process. So Istio service == typical k8s app label.

And we really don't care about the actual k8s service beyond its selector.

So with that view, the subject is the "app", and versions/tracks are considered parts of the same app.
Selectors can use labels to target policies to specific versions, as sketched below.
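
As a rough sketch of what such a label-based selector might look like in the rules from #12 (the rule schema was not settled at this point; every field name below is hypothetical):

```yaml
# Hypothetical routing-rule fragment: the destination is the "app",
# and labels sub-select a version/track within it.
destination: a
match:
  labels:
    version: v2   # assumed label key on the pod template
weight: 10        # e.g. send 10% of traffic to the "v2" track
```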

We do need to model A/B tests and canary deployments as first-class scenarios.

Specifically, how an Istio canary relates to a canary done via the k8s Deployment controller.

@kyessenov
Contributor Author

I think it's important to disambiguate: the version here refers to both the service backend version and the service config version. For the latter, it is important not to restart the service container when applying configuration stages. So it is up to the SDS component to assign versions to service endpoints.

@rshriram
Member

rshriram commented Dec 15, 2016 via email

@kyessenov
Contributor Author

kyessenov commented Dec 15, 2016

Yes, I tend to agree that the version should be a core primitive in the rules. This will unblock us to move on, and allow the various use cases for staging, rollouts, and testing to be expressed in terms of these rules.
More concretely, we all agree that an Istio service is a Kubernetes service. Versions within the service can be specified with extra labels. I am hesitant to add explicit pod labels to signify versions, since Deployments will remove them (we should wait until we have a hook into the pod constructor).
As for DNS names for service:version, let us delegate this role to SDS.

@rshriram
Member

rshriram commented Dec 15, 2016 via email

@mandarjog
Contributor

For any attributes that let us classify targets, such as pod.labels, I think we are OK.
The use case we have been talking about is:

A ---> B
Now B would like to add authentication for traffic from A.

How do we change this gradually? That is, how do we say that 5% of traffic from A is subject to auth when going to B?
(Let us say that A --> B traffic has always been carrying auth headers, but they were never enforced.)

In this case we do not have any way to sub-classify targets within B; all pods serving B are fungible.
Options:

  • Have a probabilistic rule that says: apply auth to 5% of traffic.
  • Place labels/annotations on the pods where you want auth enabled.
    Now targets within B can be sub-classified, as sketched below.
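
A sketch of the second option, with an invented marker label on the pod template of the subset of B where auth should be enforced:

```yaml
# Hypothetical pod-template labels for the auth-enforcing subset of B.
metadata:
  labels:
    app: b
    auth-enabled: "true"   # assumed marker label; policies can select on it
```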

@kyessenov
Contributor Author

kyessenov commented Dec 15, 2016

@mandarjog if we employ SDS for this task, we can categorize B pods by changing the mapping:
(service name, version) -> (IP endpoint)
We'd have three policies:

  1. route A to B0 with no auth
  2. route A to B1 with auth
  3. route A to B with 95% to B0 and 5% to B1
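
Roughly, in a hypothetical rule syntax (B0/B1 stand for two values of an assumed "track" label; the field names are illustrative only):

```yaml
# Hypothetical encoding of the three policies above.
- route: {source: a, destination: b, labels: {track: b0}}   # 1. no auth
  auth: none
- route: {source: a, destination: b, labels: {track: b1}}   # 2. auth enforced
  auth: required
- route: {source: a, destination: b}                        # 3. weighted split
  split:
  - {labels: {track: b0}, weight: 95}
  - {labels: {track: b1}, weight: 5}
```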

@rshriram
Member

rshriram commented Dec 15, 2016 via email

@kyessenov
Contributor Author

Let's say that B's service container is unchanged. Then the policy is enforced either at the Proxy (next to B) or at the Mixer. My example was referencing the Proxy way: the Manager pushes two distinct configs to B0 and B1 and programs all proxies to respect rule (3). For the Mixer, rule (3) can be enforced at the Mixer as long as the pods send their version (B0 or B1), meaning the proxy has to be aware of which version it is assigned to. Either way, which service version a pod runs is known at the proxy.
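
For the Proxy path, rule (3) maps naturally onto Envoy's weighted clusters; a minimal sketch, with assumed cluster names, shown in YAML for readability (Envoy v1 configs were JSON):

```yaml
# Hypothetical Envoy route: 95/5 split between version-specific clusters.
routes:
- prefix: /
  weighted_clusters:
    clusters:
    - {name: b0, weight: 95}   # assumed cluster backing version B0
    - {name: b1, weight: 5}    # assumed cluster backing version B1
```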

@rshriram
Member

rshriram commented Dec 15, 2016 via email

@kyessenov
Contributor Author

@mandarjog @rshriram would it be possible to enforce the routing rule policy without the Mixer?
I understand there needs to be some runtime component to publish this information to the proxy mesh. We certainly need SDS, maybe CDS. Would that be enough? Can we decouple most routing rules from the Mixer at least?

@rshriram
Member

rshriram commented Dec 16, 2016 via email

@kyessenov
Contributor Author

OK, I just want to make sure we can run the percentage-based rule (see (3) above) without the Mixer.
Ideally, we should be able to run Manager + Proxy and get some routing-rules functionality.
I brought in the Mixer only to justify the need not to restart pods/containers.

@mandarjog
Contributor

After some more thought, I think that any non-deterministic (percentage-based) behaviour has to be confined to the proxy. An important reason is that caching the results of a non-deterministic function can have unexpected results. Since the proxy is going to cache responses from the Mixer, those responses must be deterministic.

In the A --> B auth example we have been discussing, it is the job of the proxy to inject an attribute/header on the request so that the downstream Mixer can use this attribute in the selector and deterministically apply auth or no auth.
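
A sketch of that flow: the proxy makes the random 5% choice exactly once and records it in a request header, so any Mixer selector keyed on that header is deterministic and safe to cache (the header name and selector syntax below are invented):

```yaml
# Hypothetical: proxy stamps the cohort, Mixer matches it deterministically.
proxy:
  inject_header:
    name: x-auth-cohort            # assumed header name
    values: {enforce: 5, skip: 95} # percentage split, decided at the proxy
mixer_selector: request.headers["x-auth-cohort"] == "enforce"
```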

@rshriram
Member

rshriram commented Feb 2, 2017

This has also been resolved in a design decision. First cut is in the ProxyConfig proto.
