Proposal: e2e SSL #557
Comments
On your last point I'm not sure I agree. If someone has paid for a wildcard cert, they might not want to pay for individual certs for each application too. That seems redundant.
@jchauncey nice diagram.
I should have been clearer. This shouldn't happen automatically (for security reasons), but if you, as a developer or app operator, legitimately do have access to the same wildcard cert that the cluster operator used as the platform cert, there should be nothing that prevents you from uploading it using
Not sure if that helps, but I just found out about linkerd, which can be used to achieve e2e SSL between k8s services: https://blog.buoyant.io/2016/10/24/a-service-mesh-for-kubernetes-part-iii-encrypting-all-the-things/
This issue was moved to teamhephy/workflow#49 |
This issue supersedes deis/controller#355 and attempts to distill its most salient suggestions into something more digestible.
One persistent shortcoming of Workflow is that it does not easily accommodate applications requiring e2e SSL. Currently, SSL is typically terminated at the router(s). (It can be terminated at the load balancer, but this is uncommon.) The fact that unencrypted traffic flows between the router(s) and application pods (which may reside on other nodes) precludes the use of Workflow as a platform for any applications subject to stringent regulatory requirements like HIPAA or PCI-DSS that mandate the encryption of all over-the-wire transmissions.
The most efficient way to solve this is to bypass the router and deliver encrypted TCP packets directly to application pods. Most application frameworks, however, do not know how to terminate SSL. This is, and should remain, chiefly a platform concern. So the problems to solve for are:

1. Routing traffic directly to application pods, bypassing the router.
2. Terminating SSL at the pod, without making it an application concern.
Proposed implementation
Bypassing the router
Bypassing the router is easy. Currently the router routes traffic to all "routable services." If an application's k8s service is not annotated as routable, the router will cease to route traffic to it.
Traffic can be routed directly to application pods by making its k8s service of `type: LoadBalancer`.
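As a rough sketch, the resulting service might look like the following. The exact label/annotation key the router uses for routability discovery, and the app and port names, are assumptions for illustration:

```yaml
# Hypothetical sketch: expose the app directly, bypassing the router.
apiVersion: v1
kind: Service
metadata:
  name: myapp
  labels:
    app: myapp
    # note: no "routable" label/annotation here, so the router ignores this service
spec:
  type: LoadBalancer      # traffic reaches pods via the cloud load balancer, not the router
  selector:
    app: myapp
  ports:
  - name: https
    port: 443
    targetPort: 443       # SSL is terminated inside the pod itself
```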
Terminating SSL at the pod
Pods can host multiple containers, and these are able to communicate with one another over the local interface. I propose that wherever e2e SSL is required, a dedicated router can be installed as a "sidecar" container in each application pod. Such router instances would run in a new "terminator mode," wherein the router retains all of its usual configurability and flexibility, but ceases to route traffic for all applications (routable services). Instead, it becomes concerned only with terminating SSL and proxying requests to a single upstream (the application) over the local interface.
This is relatively easy to implement.
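A pod template with such a sidecar might look roughly like this; the terminator-mode flag, image tag, environment variable, and port numbers are all assumptions for illustration, not an existing interface:

```yaml
# Hypothetical sketch of a deployment's pod template with a terminator sidecar.
spec:
  containers:
  - name: myapp
    image: myorg/myapp:latest
    ports:
    - containerPort: 5000            # plaintext, reachable only over the pod-local interface
  - name: terminator
    image: deis/router:canary        # image tag is an assumption
    args: ["--terminator"]           # hypothetical flag enabling "terminator mode"
    ports:
    - containerPort: 443             # encrypted traffic from the LB terminates here
    env:
    - name: UPSTREAM
      value: "http://localhost:5000" # hypothetical: proxy decrypted requests to the app
```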
Coordinating it all
I propose the `deis` CLI and controller be enhanced to allow applications to opt in to e2e SSL. This would require the controller to modify an application's service definition so that it is no longer "routable" and is of `type: LoadBalancer`
, and deployments to include the terminator sidecar (which must also include any applicable certificates).

One gotcha
Above, I stated that the terminator sidecar "must also include any applicable certificates." I believe this must explicitly exclude the platform certificate (which is always a wildcard) and should be limited only to certificates owned by / associated to the application. Otherwise, the private key for a certificate not owned by the application in question (e.g. the platform certificate) could be exposed to developers who should not have access to it.
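One way the controller could enforce this scoping is to mount only the application's own certificate secret into the sidecar, never the platform certificate's secret. A sketch, with all secret and path names hypothetical:

```yaml
# Hypothetical: only the app-owned certificate secret is mounted into the
# sidecar; the platform (wildcard) certificate secret is never referenced.
  - name: terminator
    volumeMounts:
    - name: app-cert
      mountPath: /etc/ssl/app
      readOnly: true
  volumes:
  - name: app-cert
    secret:
      secretName: myapp-certificate  # app-scoped secret, not the platform cert
```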