
Support for "hns get pods -n my-hierarchical-ns"? #235

Closed
absoludity opened this issue Nov 2, 2022 · 12 comments · Fixed by #281

Comments

@absoludity

absoludity commented Nov 2, 2022

Hi there. I was excited to find hns today as a potential solution for some multi-tenancy queries from our users that cannot easily be solved with vanilla k8s. There are two related requests that pop up in different scenarios for our application (Kubeapps):

  • Can I see just the namespaces to which I have access? We currently solve this with a special service account that can list namespaces and then check to which namespaces a user has access (not great, and expensive).
  • Can I see all resources of type X in the namespaces to which I have access... effectively kubectl hns get pods -n my-hierarchical-ns?

I can see ways that I could use hns to answer the first point above, though I note that the result may be slow for a similar reason: it seems that this is currently answered client-side (by querying for the anchors in a namespace, then recursing into those namespaces?).
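To make that concrete, here's a rough sketch of the kind of client-side recursion I mean for the second request (just an illustration, not anything hns ships; it assumes the anchors are readable as subnamespaceanchors in each namespace, and it only finds subnamespaces created via anchors, not namespaces whose parent was set directly):

  #!/usr/bin/env bash
  # Sketch only: approximate "kubectl hns get pods -n my-hierarchical-ns" client-side
  # by walking subnamespace anchors recursively.
  get_pods_recursive() {
    local ns="$1"
    kubectl get pods -n "$ns"
    # Each anchor's name is the name of a child namespace; recurse into it.
    for child in $(kubectl get subnamespaceanchors -n "$ns" -o name | cut -d/ -f2); do
      get_pods_recursive "$child"
    done
  }
  get_pods_recursive my-hierarchical-ns

That's one round trip per namespace in the tree, which is exactly the fan-out I'd hope a server-side query could avoid.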

So my questions are:

  1. Are there plans, or have there been other requests, to move the hierarchical query server-side so that a single query can return the available namespaces in the tree? (I think you answered this with a "no" only 22 days ago on kubectl hns tree command very slow #223 :) ), and
  2. If (1) were a possibility, could it be easily generalised to answer queries for resources within a hierarchy (e.g. kubectl hns get pods -n my-hierarchical-ns) server-side, rather than recursively visiting all namespaces and calling get pods, etc. (assuming a user without read permission on the entire cluster)?

I'm not certain, but it looks like doing so could not only solve the slow responses for hns tree but also allow users in a multi-tenant environment to see all their resources (similar to kubectl get pods -A but for a hierarchy rather than the whole cluster), which I imagine would be very useful to others as well?

Thanks for any thoughts (and for the great work on hns - I was very excited to find it... it's been a missing piece in k8s for a long time that Rancher and OpenShift have tried to fill in the past, imo).

@adrianludwin
Contributor

Hey @absoludity! I'd love to be able to limit visibility to the namespaces you have access to, but base K8s doesn't support that, and it would be challenging for HNC to work around. If we ever had to, I was planning on using something roughly similar to what you're describing; I think OpenShift has a similar approach too. Sadly, since HNC is just a layer on top of vanilla K8s, there's only so much we can do to modify the behaviour of built-in verbs like LIST.

So we definitely could build something like this to move everything server-side, probably as an API extension rather than a controller. I'm not going to work on that myself, but I could help you with design, scoping, and reviews if you wanted to work on it. Sorry this isn't the best answer - I'm not objecting to this feature at all, it just takes more time than I have right now!

@absoludity
Author

Thanks @adrianludwin - yes, I was mostly just wondering whether it was something planned for the future. Unfortunately it's not something I'd be able to take the time to do right now either. Cheers.

@adrianludwin
Contributor

adrianludwin commented Nov 14, 2022 via email

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on Feb 12, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Mar 14, 2023
@cmurphy
Contributor

cmurphy commented Mar 15, 2023

We discussed this at the February 21 meeting. I drafted a proposal to address it here: https://docs.google.com/document/d/1WpnAJ3442v93G4Wi7SnoEiLZVLJJO-AWCkl5GO4YLl4/edit?usp=sharing

@cmurphy
Contributor

cmurphy commented Mar 15, 2023

/assign
/remove-lifecycle rotten

k8s-ci-robot removed the lifecycle/rotten label on Mar 15, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on Jun 13, 2023
@cmurphy
Contributor

cmurphy commented Jun 14, 2023

/remove-lifecycle stale

This is under development in #281.

k8s-ci-robot removed the lifecycle/stale label on Jun 14, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on Jan 22, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Feb 21, 2024
@cmurphy
Contributor

cmurphy commented Feb 27, 2024

/remove-lifecycle rotten

k8s-ci-robot removed the lifecycle/rotten label on Feb 27, 2024