Ignore clusters with invalid kubeconfig #1956
Conversation
Signed-off-by: Lauri Nevala <lauri.nevala@gmail.com>
If it's just invalid kubeconfig we're concerned with here, can't we test it for validity before creating a cluster in the clusterStore? The Add Cluster dialog could even filter out bad ones before building the context list (and show notifications indicating which contexts are bad). I never really liked the idea of carrying around dead clusters.
There should be some sort of notification to the user; otherwise we will get bug reports stating that Lens removes their clusters.
We have some validation when adding a cluster, and this PR will even improve that. This "dead cluster" issue occurs when some process outside of the Lens app modifies the kubeconfig file for a cluster that is already added to Lens.
Yes, that would be nice. Let me see what I can do for it.
One option could be to send notifications for all dead clusters on startup. Is it possible to have a custom action on a notification, for example to remove the cluster completely?
Yeah, that is a good idea. "Ignore" and "Remove", maybe?
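The "Ignore" / "Remove" idea above could be sketched roughly like this. This is a hypothetical shape, not Lens's actual Notifications API; the names `NotificationAction`, `ClusterNotification`, and `invalidKubeconfigNotification` are all illustrative:

```typescript
// Hypothetical sketch of a notification carrying custom actions, as discussed.
// Lens's real notification API may differ.
interface NotificationAction {
  label: string;
  onClick: () => void;
}

interface ClusterNotification {
  message: string;
  actions: NotificationAction[];
}

function invalidKubeconfigNotification(
  clusterName: string,
  removeCluster: () => void,
): ClusterNotification {
  return {
    message: `Cluster "${clusterName}" has an invalid kubeconfig and will be ignored.`,
    actions: [
      // "Ignore" keeps the cluster around but leaves it disabled.
      { label: "Ignore", onClick: () => {} },
      // "Remove" deletes the cluster from the store entirely.
      { label: "Remove", onClick: removeCluster },
    ],
  };
}
```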
Signed-off-by: Lauri Nevala <lauri.nevala@gmail.com>
Notifications would benefit from #1941, so I propose that be done in a future PR.
Added unit tests for kubeconfig validation. I think this is now ready for review, so please take a look.
src/common/kube-helpers.ts
Outdated
 * Validates Context, User and Cluster sructs in given kubeconfig. Additionally this will validate
 * the command passed to the exec substructure.
 */
export function validateKubeConfig (config: KubeConfig, contextName: string, validationOpts?: { validateCluster?: boolean, validateUser?: boolean, validateExec?: boolean})
Can you extract this out to ValidationOptions?
And provide a default {} for validationOpts.
Done
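What the review asked for (an extracted `ValidationOptions` type with a `{}` default) could look roughly like this. This is a simplified sketch with assumed minimal types (`MinimalKubeConfig`, `KubeContext`), not Lens's actual `kube-helpers.ts`:

```typescript
// Sketch only: minimal stand-in types, not the real KubeConfig from
// @kubernetes/client-node.
interface KubeContext { name: string; cluster: string; user: string; }

interface MinimalKubeConfig {
  contexts: KubeContext[];
  clusters: { name: string }[];
  users: { name: string; exec?: { command?: string } }[];
}

// The extracted options type the review suggested.
interface ValidationOptions {
  validateCluster?: boolean;
  validateUser?: boolean;
  validateExec?: boolean;
}

function validateKubeConfig(
  config: MinimalKubeConfig,
  contextName: string,
  // Default {} so callers can omit the argument; each flag defaults to true.
  { validateCluster = true, validateUser = true, validateExec = true }: ValidationOptions = {},
): void {
  const context = config.contexts.find(c => c.name === contextName);

  if (!context) {
    throw new Error(`No valid context object provided in kubeconfig for context '${contextName}'`);
  }

  if (validateCluster && !config.clusters.some(c => c.name === context.cluster)) {
    throw new Error(`No valid cluster object provided in kubeconfig for context '${contextName}'`);
  }

  const user = config.users.find(u => u.name === context.user);

  if (validateUser && !user) {
    throw new Error(`No valid user object provided in kubeconfig for context '${contextName}'`);
  }

  // Additionally validate the command passed to the exec substructure, if any.
  if (validateExec && user?.exec && !user.exec.command) {
    throw new Error(`No command provided for exec in kubeconfig for context '${contextName}'`);
  }
}
```

The destructuring-with-defaults parameter keeps call sites terse: `validateKubeConfig(config, "dev")` validates everything, while `validateKubeConfig(config, "dev", { validateExec: false })` opts out of one check.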
src/main/cluster.ts
Outdated
 *
 * @observable
 */
@observable isDead = false;
This doesn't need to be observable since it is only set in the constructor.
Changed to a public property now.
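The reviewer's point can be illustrated with a minimal sketch (not the real `Cluster` class): a field assigned only in the constructor gains nothing from MobX's `@observable`, because nothing can ever observe it change. A plain public (even `readonly`) field carries the same information:

```typescript
// Illustrative only: a flag set once during construction and never mutated
// needs no reactivity machinery.
class ClusterSketch {
  public readonly isDead: boolean;

  constructor(kubeconfigIsValid: boolean) {
    // Assigned exactly once here; no observers could ever fire after this.
    this.isDead = !kubeconfigIsValid;
  }
}
```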
src/main/cluster.ts
Outdated
/**
 * Is cluster marked as dead, for example due to an invalid kubeconfig
 *
 * @observable
 */
@observable isDead = false;
And as I mentioned in my previous PR, I really think that adding this field is conflating what a cluster is.
ping - my Lens corrupts daily and I need to remove applicationsupport/lens
Signed-off-by: Lauri Nevala <lauri.nevala@gmail.com>
src/common/kube-helpers.ts
Outdated
 * Validates Context, User and Cluster sructs in given kubeconfig. Additionally this will validate
 * the command passed to the exec substructure.
Suggested change:
- * Validates Context, User and Cluster sructs in given kubeconfig. Additionally this will validate
- * the command passed to the exec substructure.
+ * Checks if `config` has valid `Context`, `User`, `Cluster`, and `exec` fields (if present when required)
src/main/cluster.ts
Outdated
@@ -76,6 +76,7 @@ export class Cluster implements ClusterModel, ClusterState {
 * If extension sets this it needs to also mark cluster as enabled on activate (or when added to a store)
 */
public ownerRef: string;
public isDead = false;
I still don't think that a cluster should be created at all if the kubeconfig is invalid or non-existent.
I've created a separate PR (#2233) to display a notification when a cluster with an invalid kubeconfig is detected.
Signed-off-by: Lauri Nevala <lauri.nevala@gmail.com>
That notification looks good, but the trigger is still wrong IMO. It should be triggered from
It's not an "alive" cluster; Lens already ignores clusters that are not enabled.
But why add it to the list then in the first place?
If it's not in the list, it will get lost when persisting the data to the cluster store.
Then that implies that there should be another field for kubeconfigs that are no longer valid (since we validate them on the add step).
IMHO this is still an improvement over the current situation, where the Lens app crashes if there are corrupted kubeconfigs associated. What is also good about this PR is that it doesn't change existing functionality too much.
I agree that it is an improvement (but then again, so was my PR, which I will now close). But the imo better solution isn't that much extra work, and doesn't further complicate an already very complicated class (namely Cluster). Yes, it adds another field, that rarely if ever gets populated, to
Signed-off-by: Lauri Nevala <lauri.nevala@gmail.com>
I've now removed the isDead property.
* Display warning notification if invalid kubeconfig detected Signed-off-by: Lauri Nevala <lauri.nevala@gmail.com>
This is definitely better, though I still would like to see, in the future, removing that try { ... } catch { ... } from the constructor and having these clusters be part of a separate list, to keep the conceptual size of what a Cluster instance "is" as small as necessary.
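The separate-list idea above could look roughly like this. This is a sketch with hypothetical names (`ClusterModel`, `partitionClusters`), not the actual Lens cluster-store code: validate each stored model before constructing a `Cluster`, and keep failures aside for notifications and persistence instead of catching inside the constructor.

```typescript
// Hypothetical model shape for a stored cluster entry.
interface ClusterModel {
  id: string;
  contextName: string;
}

// Stand-in for real kubeconfig validation; returns an error message, or null if valid.
type Validator = (model: ClusterModel) => string | null;

function partitionClusters(models: ClusterModel[], validate: Validator) {
  const valid: ClusterModel[] = [];
  const invalid: { model: ClusterModel; reason: string }[] = [];

  for (const model of models) {
    const reason = validate(model);

    if (reason === null) {
      // Only valid configs would go on to become Cluster instances.
      valid.push(model);
    } else {
      // Kept in a separate list: still persisted, usable for warning
      // notifications, but never constructed as a Cluster.
      invalid.push({ model, reason });
    }
  }

  return { valid, invalid };
}
```

This keeps the invalid entries from getting lost when persisting the store (the concern raised earlier in the thread) without widening what a `Cluster` instance "is".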
* Ignore clusters with invalid kubeconfig
* Improve error message
* Mark cluster as dead if kubeconfig loading fails
* Fix tests
* Validate cluster object in kubeconfig when constructing cluster
* Add unit tests for validateKubeConfig
* Refactor validateKubeconfig unit tests
* Extract ValidationOpts type
* Add default value to validationOpts param
* Change isDead to property
* Fix lint issues
* Add missing new line
* Update validateKubeConfig in-code documentation
* Remove isDead property
* Display warning notification if invalid kubeconfig detected (#2233)

Signed-off-by: Lauri Nevala <lauri.nevala@gmail.com>
This PR provides a simplified version of #1453 to ignore clusters that have an invalid/corrupted kubeconfig on cluster startup. With this PR those clusters are marked as dead, and dead clusters are not enabled.
fixes #1316, fixes #1561