Add support for running with istio using mTLS #425
base: main
Conversation
@harsimranmaan there is a conflict that needs to be resolved.
@yuvipanda can you please review this PR? I'm waiting for this change, to be honest. :)
Thank you for working on it, @harsimranmaan! Excited to get this landed 👍

Have you seen the code in https://github.com/jupyterhub/kubespawner/blob/master/kubespawner/proxy.py#L45? Many moons ago, I set up a Proxy purely based on ingresses. That too needed services, but instead of using labels, I optimized by creating Endpoints objects directly - this saves a few loops in the k8s service controller, though it's maybe not worth it. I don't think that proxy class has seen a lot of work since, so I don't mind this becoming an option in the spawner directly.

I'm guessing we'll have one service per user server. Can we put a label like 'hub.jupyter.org/server-name' or some such on the pod and use that for service selection? That way the label is more generic and can be used arbitrarily by others too (for NetworkPolicy, for example).

I haven't managed to go through or test the code yet though :( Not sure when I'll have time for that, but happy to engage at a slightly higher level for now.
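To make the suggestion concrete, here is a minimal sketch of a per-user-server Service selecting pods by the proposed label. The helper name `make_server_service` and its signature are hypothetical (not kubespawner API); the manifest is shown as a plain dict rather than kubernetes-client objects:

```python
# Sketch: one Service per user server, selecting the pod by a generic label.
# The label key "hub.jupyter.org/server-name" follows the suggestion above;
# the helper name and shape are hypothetical, not the actual kubespawner code.

def make_server_service(server_name, namespace="jupyterhub", port=8888):
    """Build a Service manifest fronting a single user server pod."""
    label = {"hub.jupyter.org/server-name": server_name}
    return {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {
            "name": f"jupyter-{server_name}",
            "namespace": namespace,
            "labels": label,
        },
        "spec": {
            # The selector matches the same label stamped on the user pod,
            # so the k8s service controller wires up Endpoints automatically.
            "selector": label,
            "ports": [{"name": "http", "port": port, "targetPort": port}],
        },
    }

svc = make_server_service("alice")
print(svc["spec"]["selector"])  # {'hub.jupyter.org/server-name': 'alice'}
```

Because the label lives on the pod, the same key could also be reused in a NetworkPolicy podSelector, as mentioned above.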
Thanks @yuvipanda, that makes sense. I'll rebase and refactor it. I'll also look at the Endpoints object to see if something can be recycled, but that might come in as a separate PR.
@harsimranmaan thanks a lot. I was able to enable istio by following your article. One question I still have: @yuvipanda, can this PR be merged so that I can pick it up more easily, or is there a way to patch kubespawner? Thanks!
@harsimranmaan fyi, there is a conflict to be resolved.
Thirding this - I have encountered the same issue with JupyterHub not playing nicely with Istio; this fixes a big part of that and seems self-contained.
Sorry, I was away for the last few weeks. Will rebase this.
I rebased this. I haven't had a chance yet to test the change against the async/await and timeout changes that landed on master in the meantime. This can be reviewed now, but I'll report back once I have some data after upgrading my JH clusters to the rebased version.
@yuvipanda @harsimranmaan can this be merged? |
Validated the changes for creating and deleting user pods.
@harsimranmaan @yuvipanda we're also waiting on this - would it be possible to pick this PR back up again?
@harsimranmaan @yuvipanda we could really use this fix as well. |
I'll try to rebase this later in the week. If someone else is interested in doing so, I'd be happy to merge the rebased code into my branch.
@harsimranmaan @yuvipanda thank you for your work on this. Any chance you can resolve the PR conflicts? |
Looks good to me. For our particular deployment we need to be able to configure the V1ServicePort 'name' attribute in the make_service function added in #409, so we may still need to extend KubeSpawner unless that can easily be added in a different PR.
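For context on why the port name matters: Istio uses the Service port name prefix (e.g. `http-...`, `tcp-...`) for protocol selection, so some deployments need to override it. A minimal sketch of threading a configurable port name through a make_service-style helper - the parameter name `port_name` and this signature are assumptions, not the actual code from #409:

```python
# Sketch: configurable V1ServicePort name in a make_service-style helper.
# "port_name" and the helper's signature are hypothetical illustrations,
# not the actual kubespawner function from #409.

def make_service(name, port, port_name="http", selector=None):
    """Build a Service manifest with a configurable port name.

    Istio infers the protocol from port-name prefixes such as "http-"
    or "tcp-", which is why a deployment may need to override it.
    """
    return {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {"name": name},
        "spec": {
            "selector": selector or {},
            "ports": [{"name": port_name, "port": port, "targetPort": port}],
        },
    }

# Override the default port name for Istio protocol selection.
svc = make_service("jupyter-alice", 8888, port_name="http-notebook")
```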
Addresses jupyterhub/zero-to-jupyterhub-k8s#697
When istio sidecar injection is enabled in a namespace running jupyterhub, the different components fail to talk to each other. Introducing a service to front the user pod helps solve this problem. Running the other JH components in a cluster with istio enabled can also be troublesome, as the proxy can fail due to the service mesh network setup. (In my tests, the configurable-http-proxy did not work and the traefik-based one could not forward client info properly.) https://github.com/splunk/jupyterhub-istio-proxy can be used as the proxy that configures istio-based routing for user pods.
The overall setup: communication would now flow as
Web traffic -> istio gateway (TLS termination) -> istio VirtualService (created by the proxy-api, responsible for routing traffic) -> user's pod --mTLS--> JH
JH --mTLS--> user's pod (via the user svc).
A separate PR will be added to the zero-to-jupyterhub-k8s project with the helm chart changes needed to support jupyterhub-istio-proxy.
For more info see: https://medium.com/@harsimran.maan/running-jupyterhub-with-istio-service-mesh-on-kubernetes-a-troubleshooting-journey-707039f36a7b