
🚀 Feature: Higher level opinionated view for Kubernetes plugin #18648

Open
2 tasks done
mclarke47 opened this issue Jul 11, 2023 · 4 comments
Assignees
Labels
area:kubernetes Related to the Kubernetes Project Area - not deploying Backstage with k8s. enhancement New feature or request will-fix We will fix this at some point

Comments

@mclarke47 (Collaborator)

mclarke47 commented Jul 11, 2023

🔖 Feature description

We should introduce a higher-level view for the Kubernetes plugin that is:

  • Holistic
  • Easy to scan for errors
  • More visual, i.e. less reading and paging through data tables
  • Integrated directly with the error reporting functionality

Something like this:
[Screenshot 2023-07-11 at 3 47 43 PM]

The user should be able to group the resources on screen in various ways, for example:

  • by some common label (environment/region/owner etc...)
  • by cluster
[Screenshot 2023-07-11 at 3 56 14 PM]
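To make the grouping idea concrete, here is a minimal TypeScript sketch. All names here (`K8sResource`, `groupResources`) are hypothetical illustrations, not part of the plugin's actual API:

```typescript
// Hypothetical minimal shape for a fetched Kubernetes resource; the real
// plugin types are richer than this.
interface K8sResource {
  cluster: string;
  labels: Record<string, string>;
  name: string;
}

// Bucket resources either by the cluster they came from or by an
// arbitrary shared label key (environment/region/owner etc.).
function groupResources(
  resources: K8sResource[],
  groupBy: { kind: 'cluster' } | { kind: 'label'; key: string },
): Map<string, K8sResource[]> {
  const groups = new Map<string, K8sResource[]>();
  for (const r of resources) {
    const key =
      groupBy.kind === 'cluster'
        ? r.cluster
        : r.labels[groupBy.key] ?? 'unlabeled';
    const bucket = groups.get(key) ?? [];
    bucket.push(r);
    groups.set(key, bucket);
  }
  return groups;
}
```

The view could then render one panel per map entry instead of one accordion per cluster.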

🎤 Context

The existing Kubernetes plugin works well enough for showing a basic k8s configuration; however, it can get much more complicated when multi-cluster deployments are shown. The user ends up maximizing and minimizing accordions to see resources across clusters.

[Screenshot 2023-07-11 at 3 47 27 PM]

✌️ Possible Implementation

We can start with an opt-in switch at the top of the Kubernetes plugin, showing the existing view by default, then add functionality behind it for the higher-level view.

We can start with pods, then incorporate:

  • CronJobs
  • scaling
  • service discovery

👀 Have you spent some time to check if this feature request has been raised before?

  • I checked and didn't find a similar issue

🏢 Have you read the Code of Conduct?

Are you willing to submit a PR?

Yes, I am willing to submit a PR!

@mclarke47 mclarke47 added enhancement New feature or request area:kubernetes Related to the Kubernetes Project Area - not deploying Backstage with k8s. labels Jul 11, 2023
@mclarke47 mclarke47 self-assigned this Jul 11, 2023
@mclarke47 (Collaborator, Author)

Also very interested in feedback from plugin users and what they would like to see here!

@adamdmharvey (Member)

adamdmharvey commented Jul 13, 2023

Love the idea, but no specific feedback points or suggestions re: the visual. This could build upon the work done in the resource graph part of Backstage, which maps entities, but do the same sort of mapping for clusters, deployments, etc.

One thing that might be super helpful for our teams is elevating info about deprecations. As we keep our k8s clusters up to date, we've got various approaches - some which work well, others which could use improvement - to help our teams keep up with API version changes and deprecations. Elevating summary info per entity on what deprecations are being encountered, so teams could be proactive about them, might be super useful!
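A deprecation summary like this could be as simple as checking each fetched manifest's `apiVersion` against a known-deprecated list. The sketch below is illustrative only: the type names are hypothetical, and a real implementation should derive the deprecation data from the target cluster version (the two sample entries reflect well-known removals, e.g. `extensions/v1beta1` workloads removed in Kubernetes 1.16):

```typescript
// Illustrative subset of deprecated/removed API versions; a real check
// should be driven by data keyed to the cluster's Kubernetes version.
const DEPRECATED_API_VERSIONS: Record<string, string> = {
  'extensions/v1beta1': 'workload types removed in Kubernetes 1.16',
  'policy/v1beta1': 'PodDisruptionBudget/PodSecurityPolicy removed in 1.25',
};

// Hypothetical minimal manifest shape.
interface ManifestLike {
  apiVersion: string;
  kind: string;
  metadata: { name: string };
}

// Produce one human-readable warning per manifest on a deprecated version,
// suitable for surfacing as per-entity summary info.
function deprecationWarnings(manifests: ManifestLike[]): string[] {
  return manifests
    .filter(m => m.apiVersion in DEPRECATED_API_VERSIONS)
    .map(
      m =>
        `${m.kind}/${m.metadata.name} uses ${m.apiVersion} ` +
        `(${DEPRECATED_API_VERSIONS[m.apiVersion]})`,
    );
}
```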

@github-actions (bot)

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@github-actions github-actions bot added the stale label Sep 11, 2023
@mclarke47 mclarke47 added will-fix We will fix this at some point and removed stale labels Sep 11, 2023
@rydenius

One specific aspect that is important to me and my team is that there is a super clear distinction between different levels of "errors". The fact that a pod is not yet healthy before it has reached the initialDelaySeconds + periodSeconds * failureThreshold of the startup probe is not an error at all (an hourglass icon seems appropriate). A pod that was successfully restarted because of failed probes is for sure an error, but one that the system could fix on its own, so not too bad. A crash-looping pod is the worst type. Getting the orange warning triangle during a regular deploy (not a single crash, just not done yet) makes the adrenaline rush and my team jump to the conclusion that something is very wrong.
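The severity levels described in the comment above could be sketched as a small classifier. Everything here is a hypothetical illustration (the type and field names are not plugin API); the startup-window arithmetic follows the probe settings named in the comment:

```typescript
// Severity levels, from benign to worst:
// 'waiting'   - still inside the startup-probe window (hourglass, not an error)
// 'healthy'   - ready, no restarts
// 'recovered' - restarted due to failed probes, but self-healed
// 'failing'   - crash looping, or past the startup window and still not ready
type PodSeverity = 'waiting' | 'healthy' | 'recovered' | 'failing';

// Hypothetical observed pod state plus its startup-probe settings.
interface PodProbeState {
  initialDelaySeconds: number;
  periodSeconds: number;
  failureThreshold: number;
  secondsSinceStart: number;
  restartCount: number;
  ready: boolean;
  crashLooping: boolean; // e.g. container waiting with CrashLoopBackOff
}

function classifyPod(p: PodProbeState): PodSeverity {
  if (p.crashLooping) return 'failing';
  if (p.ready) return p.restartCount > 0 ? 'recovered' : 'healthy';
  // A pod is allowed this long to come up before non-readiness is an error.
  const startupWindow =
    p.initialDelaySeconds + p.periodSeconds * p.failureThreshold;
  return p.secondsSinceStart <= startupWindow ? 'waiting' : 'failing';
}
```

Mapping 'waiting' to an hourglass icon instead of the warning triangle would avoid the false-alarm adrenaline during a regular deploy.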
