helm chart: Add label to be allowed direct network access to the jupyterhub pod #352
Conversation
This LGTM!
This fix seems fine to me, and I don't think there should be any downside to adding that label even if jupyterhub isn't used. But now I'm wondering - should we be doing the jupyterhub auth requests through the proxy -> hub instead of by directly contacting the hub pod?
@jcrist it's a sound idea to go through the proxy, but sadly, the proxy isn't reachable via a single k8s service independent of the z2jh chart configuration: it can be either proxy-http (if automatic TLS acquisition is enabled) or proxy-public (if automatic TLS acquisition isn't enabled - no autohttps pod). Also, the z2jh Helm chart may include a release name in its k8s service names, which means automatic detection can fail. My suggestion is to expose a configurable URL and leave it to the user to declare where the hub is. That would also allow dask-gateway to integrate with a jupyterhub outside the namespace, for example.
We already support that via the apiUrl config option. Given that, is the fix above the best path forward? It makes automatic support work with the recent jhub release, and there's already a manual config option for those that need it.
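For context, a minimal sketch of that manual option, assuming the chart exposes it under gateway.auth.jupyterhub.apiUrl (the key path and the URL below are assumptions for illustration, not taken from this thread):

```yaml
# Hedged sketch: point dask-gateway at an explicit JupyterHub API URL instead of
# relying on automatic discovery of the hub service. The service name and
# namespace below are placeholders.
gateway:
  auth:
    type: jupyterhub
    jupyterhub:
      apiUrl: "http://proxy-public.jupyterhub-namespace/hub/api"
```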
@jcrist ah nice, then I think this is fine! In a way it would be nice to go through the proxy; the fallback is to use apiUrl to set whatever URL will work, but if you do that you need to figure out which proxy service to look for. DaskHub, which integrates z2jh with dask-gateway, has python logic to check for proxy-http and fall back to proxy-public. I think the solution is to always have proxy-http around instead, but that's an issue for z2jh to fix rather than dask-gateway.
Sounds good, merging then. Thanks!
Do you think it's possible to either make a 0.9.1 release of the helm chart with this change, or publish helm charts for each commit with chartpress as we do for z2jh? We're currently blocked on this for setting up new JupyterHubs, since in many places you can't actually turn off network policies.
Sure. I'm a bit swamped today, but can hopefully kick off a release tonight or sometime tomorrow. It'd be good to get this automated (see #296), but first we need to transition off of Travis CI.
Thanks, @jcrist!
Thanks for taking a look at this all! FWIW, we ran into this issue and were using some adjustments to the z2jh chart config as a workaround. But this was just hacked together based on what I could understand from the z2jh docs, and I think it makes more sense to add labels to the Dask-gateway pods (like proposed here) rather than allow more labels to be selected on the jupyterhub end. I also am not sure whether adding this label selector to all three jupyterhub pods was overkill, but it does seem to work as a temporary workaround for new jupyterhubs if needed before the next Dask-gateway release.
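For illustration, a config along those lines might look roughly like the following. This is a hedged sketch only: the z2jh networkPolicy.ingress keys and the dask-gateway pod label used in the selector are assumptions, not the exact config referenced above.

```yaml
# Hypothetical z2jh values sketch: add an ingress rule to the hub, proxy, and
# singleuser network policies that admits pods carrying an assumed dask-gateway
# label. The label below is a placeholder for whatever the gateway pods carry.
hub:
  networkPolicy:
    ingress:
      - from:
          - podSelector:
              matchLabels:
                app.kubernetes.io/name: dask-gateway
proxy:
  networkPolicy:
    ingress:
      - from:
          - podSelector:
              matchLabels:
                app.kubernetes.io/name: dask-gateway
singleuser:
  networkPolicy:
    ingress:
      - from:
          - podSelector:
              matchLabels:
                app.kubernetes.io/name: dask-gateway
```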
Thanks for the detailed config, @bolliger32. Since I restrict outbound network access on my singleuser pods to only 80, 443, and 53, I needed the following extra config:

```yaml
singleuser:
  extraLabels:
    hub.jupyter.org/network-access-proxy-http: "true"
  networkPolicy:
    # Needed for talking to the proxy pod
    egress:
      - ports:
          - port: 8000
            protocol: TCP
      - ports:
          - port: 80
            protocol: TCP
      - ports:
          - port: 443
            protocol: TCP
```

Thanks to @consideRatio for helping me debug this :) @jcrist do you think you'll be able to do a release anytime soon? :D
ah - interesting. Thanks @yuvipanda! Is it easy enough to explain why the singleuser pods need to talk to the proxy? I'm asking b/c we are seeing some issues where blocked network traffic seems to kill gateway clusters. We get a bunch of errors in the traefik pod logs that look like:
In a hub setup like this, your dask-gateway client needs to talk to the dask-gateway api service via the proxy, instead of directly. This is why the singleuser server needs to talk to the proxy. It fails when first trying to create a cluster, not later on, so maybe your issue is different?
You're right! It was indeed a totally different issue: #363
This PR adds a label that grants the dask-gateway api-server network access to the hub pod in the JupyterHub Helm chart, which is required when the JupyterHub Helm chart's network policies are enforced, as they are by default in the latest version. This is confirmed to resolve the issue identified in dask/helm-chart#142. The coupling between the Helm charts comes from configuring dask-gateway to use jupyterhub auth.
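As a rough illustration of the effect (the deployment name below is a placeholder and the manifest is simplified, not the chart's actual template), the gateway api pod now carries the label that z2jh's hub NetworkPolicy selects for ingress:

```yaml
# Hedged sketch: with this label on the api pod's template, the default-deny
# NetworkPolicy around the z2jh hub pod admits traffic from it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dask-gateway-api   # placeholder name, not the chart's actual resource name
spec:
  template:
    metadata:
      labels:
        hub.jupyter.org/network-access-hub: "true"  # selected by z2jh's hub network policy
```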