
Allow enterprise gateway to launch k8s kernels on remote clusters #1235

Open · Shrinjay opened this issue Jan 13, 2023 · 7 comments

@Shrinjay commented Jan 13, 2023

Problem

  • Currently, Enterprise Gateway launches kernels only on the cluster where it is itself running.
  • This is limiting when we want a kernel to have access to resources on a remote cluster without running Enterprise Gateway on that cluster.
  • While the k8s client supports connecting to and launching/managing resources on a remote cluster, Enterprise Gateway's client usage doesn't implement this.

Proposed Solution

  • The k8s client provides a simple example of configuring for remote cluster access (see the sketch after this list).
  • To enable this in our client, we want to pass remote-access parameters to the client once and have every instance reuse that configuration.
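For reference, a minimal sketch of the two configuration paths the official Python client already supports; the `EG_REMOTE_CLUSTER_KUBECONFIG` variable name here is only an illustrative assumption, not an existing option:

```python
import os

from kubernetes import client, config

# Hypothetical env var naming a kubeconfig for the remote cluster.
kubeconfig_path = os.environ.get("EG_REMOTE_CLUSTER_KUBECONFIG")
if kubeconfig_path:
    # Remote cluster: authenticate with an explicit kubeconfig file.
    config.load_kube_config(config_file=kubeconfig_path)
else:
    # Local cluster: use the pod's service account credentials.
    config.load_incluster_config()

v1 = client.CoreV1Api()  # subsequent calls target whichever cluster was configured
```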

Therefore, the code changes are as follows:

  • Wrap the k8s client in a Python class that configures itself from the appropriate environment variables for remote or local cluster access
  • Export a single shared instance of this client so only one instance is used across the codebase
  • Replace the current usages of the k8s client in the k8s proxy, the custom resource proxy, and the launchers with our wrapped client

This also has the added benefit of replacing scattered static clients with a single shared export, so configuration stays consistent. A rough sketch of what the wrapper could look like is below.
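A hypothetical sketch under the same assumptions as above; class and variable names are illustrative, not a final API:

```python
import os

from kubernetes import client, config


class WrappedKubernetesClient:
    """Configures the kubernetes client once, for local or remote cluster access."""

    def __init__(self):
        # Hypothetical env var; any remote-access parameters would be read here.
        kubeconfig_path = os.environ.get("EG_REMOTE_CLUSTER_KUBECONFIG")
        if kubeconfig_path:
            config.load_kube_config(config_file=kubeconfig_path)
        else:
            config.load_incluster_config()

    def core_api(self) -> client.CoreV1Api:
        return client.CoreV1Api()


# Single module-level instance: the proxies and launchers import this object
# rather than configuring their own clients, keeping configuration consistent.
kubernetes_client = WrappedKubernetesClient()
```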

Willing to start working on this if there's no opposition!

@welcome bot commented Jan 13, 2023

Thank you for opening your first issue in this project! Engagement like this is essential for open source projects! 🤗

If you haven't done so already, check out Jupyter's Code of Conduct. Also, please try to follow the issue template as it helps other community members to contribute more effectively.
You can meet the other Jovyans by joining our Discourse forum. There is also an intro thread there where you can stop by and say Hi! 👋

Welcome to the Jupyter community! 🎉

@kevin-bates (Member)

This sounds really interesting. So a given EG server would still target exactly one Kubernetes cluster in which the kernels are launched - these changes simply enable the ability for that cluster to be remote. Is that correct?

You'll also need to take into account the possibility of the user having configured the EG_SHARED_NAMESPACE functionality. In that case, the remote cluster may not have the enterprise-gateway namespace and I would argue that we should probably raise an error if remote-k8s-cluster AND eg-shared-namespace are both enabled. The BYO namespace (using KERNEL_NAMESPACE) should still work, although we should probably ensure that KERNEL_NAMESPACE does not reflect the EG namespace in which EG resides.
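(For illustration only, the guard described above might look like the following sketch; `EG_USE_REMOTE_CLUSTER` is a hypothetical flag name, while `EG_SHARED_NAMESPACE` and `KERNEL_NAMESPACE` are existing Enterprise Gateway settings.)

```python
import os


def validate_remote_cluster_config(eg_namespace: str) -> None:
    """Reject option combinations that can't work against a remote cluster."""
    remote_cluster = os.environ.get("EG_USE_REMOTE_CLUSTER", "false").lower() == "true"
    shared_namespace = os.environ.get("EG_SHARED_NAMESPACE", "false").lower() == "true"
    kernel_namespace = os.environ.get("KERNEL_NAMESPACE")

    if remote_cluster and shared_namespace:
        # The remote cluster may not contain the enterprise-gateway namespace.
        raise RuntimeError(
            "EG_SHARED_NAMESPACE cannot be enabled when kernels launch on a remote cluster."
        )
    if remote_cluster and kernel_namespace == eg_namespace:
        # BYO namespaces are fine, but must not point back at EG's own namespace.
        raise RuntimeError(
            "KERNEL_NAMESPACE must not be the namespace in which EG itself resides."
        )
```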

At any rate, those are implementation details that we can work out. We look forward to your pull request.

(Just a note that changes within the Process Proxies functionality will be dropped in our EG 4.0 release in favor of the kernel provisioner framework that is now part of the general Jupyter stack, so it would be great if you can participate in the Gateway Provisioners project. I'm (frantically) trying to get things in place for a release over there as we speak. If you're unable to make applicable changes, no worries, we'll port them over at some point closer to making the switch. I just wanted to make you aware of that future transition.)

@Shrinjay (Author)

> So a given EG server would still target exactly one Kubernetes cluster in which the kernels are launched - these changes simply enable the ability for that cluster to be remote. Is that correct?

Yup! That would be exactly correct.

Thanks for reminding me about the shared namespace option! That's something I forgot to consider; I'll also look through the other k8s options to ensure we don't create a conflict.

Regarding the new Gateway Provisioners, thanks for making me aware of that as well. I'll take a look at the effort to port over the new k8s client pattern with remote cluster support and let you know.

@kevin-bates (Member)

> I'll take a look at the effort to port over the new k8s client pattern with remote cluster support and let you know.

You should find things nearly identical with the exception of name changes and the fact that there isn't a hosting application in the repo.

I need to apologize for the existing documentation (still converting from EG), tests (that are nearly zero), and lack of an installable package (hopefully soon), so bear with us.

@echarles (Member) commented Apr 9, 2023

> I need to apologize for the existing documentation (still converting from EG), tests (that are nearly zero), and lack of an installable package (hopefully soon), so bear with us.

Is there an issue or PR that tracks the effort to let EG use the Provisioners?

@kevin-bates (Member)

Yes - you opened #1208. 😄

@sa- commented Jan 11, 2024

> So a given EG server would still target exactly one Kubernetes cluster in which the kernels are launched

It would be useful if one could also point to a specific kube context for a given kernel.
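(The Python client already supports named contexts, so something like this could work; `KERNEL_KUBE_CONTEXT` is a hypothetical per-kernel variable, not an existing option.)

```python
import os

from kubernetes import client, config

# Hypothetical per-kernel env var selecting a kubeconfig context.
kube_context = os.environ.get("KERNEL_KUBE_CONTEXT")
# context=None falls back to the kubeconfig's current-context.
config.load_kube_config(context=kube_context)
api = client.CoreV1Api()
```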
