
Introduce deleteOption for GracefulEvictionTask #3430

Closed
Poor12 opened this issue Apr 19, 2023 · 6 comments · Fixed by #3437
Labels
kind/feature Categorizes issue or PR as related to a new feature.

Comments

@Poor12
Member

Poor12 commented Apr 19, 2023

What would you like to be added:
For application migration, we provide three purge modes. One of them is Never, which means that Karmada will not delete the application from the failed clusters. But with the current GracefulEvictionTask, we cannot tell whether the task should be kept or deleted when the timeout is reached. In that case, we may need a flag to mark it.

Why is this needed:
Distinguish whether a GracefulEvictionTask should be kept or deleted when the timeout is reached.
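
For illustration, a rough sketch of what such a flag could look like on the task type. The field name here is a placeholder, not a decided API name, and the other fields are abbreviated from memory:

```go
// Illustrative sketch only; the flag's name (SuppressDeletion) is a
// placeholder, and the surrounding fields are an abbreviated, from-memory
// view of the existing GracefulEvictionTask shape.
package sketch

type GracefulEvictionTask struct {
	// FromCluster is the cluster the workload is being evicted from.
	FromCluster string `json:"fromCluster"`

	// Producer indicates which controller created the task, e.g. the taint
	// manager or the application failover controller.
	Producer string `json:"producer,omitempty"`

	// SuppressDeletion (placeholder name) marks whether the workload on the
	// failed cluster should be kept, rather than deleted, once the
	// graceful-eviction timeout is reached.
	SuppressDeletion *bool `json:"suppressDeletion,omitempty"`
}
```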

@Poor12 Poor12 added the kind/feature label Apr 19, 2023
@zishen
Member

zishen commented Apr 19, 2023

Before answering and looking into this case, I need to ask some questions to understand it.

1. How long does this failed cluster last?
2. Can the value of unschedulable-threshold be set higher, until the failed cluster recovers?

@Poor12
Member Author

Poor12 commented Apr 19, 2023

Another way is to use Producer to judge it. For example, if a GracefulEvictionTask's producer is the taintManager, we can judge it from Failover.Cluster.PurgeMode, and if a GracefulEvictionTask's producer is the applicationFailoverController, we can judge it from Failover.Application.PurgeMode.
I prefer the former one.
/cc @RainbowMango for opinions.
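
A minimal sketch of what the Producer-based alternative could look like. The local types and producer strings here are illustrative only; the real Karmada types and controller names differ in detail:

```go
package sketch

// Illustrative, minimal shapes for this sketch; the real Karmada API types
// differ in detail.
type purgeConfig struct {
	// PurgeMode can be e.g. "Immediately", "Graciously" or "Never".
	PurgeMode string
}

type failoverBehavior struct {
	Cluster     *purgeConfig
	Application *purgeConfig
}

type evictionTask struct {
	Producer string
}

// shouldKeepWorkload decides, for a timed-out eviction task, whether the
// workload should remain on the failed cluster, by checking which controller
// produced the task and the corresponding purge mode.
func shouldKeepWorkload(task evictionTask, failover failoverBehavior) bool {
	switch task.Producer {
	case "taint-manager": // cluster failover (illustrative producer name)
		return failover.Cluster != nil && failover.Cluster.PurgeMode == "Never"
	case "application-failover-controller": // application failover (illustrative producer name)
		return failover.Application != nil && failover.Application.PurgeMode == "Never"
	default:
		// Unknown producer: keep the previous behavior and delete.
		return false
	}
}
```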

@Poor12
Member Author

Poor12 commented Apr 19, 2023

Before answering and looking into this case, I need to ask some questions to understand it.

1. How long does this failed cluster last? 2. Can the value of unschedulable-threshold be set higher, until the failed cluster recovers?

Before answering the question, I'd like to recommend some documents about multi-cluster failover to you. You may get some inspiration from them. :)

@zishen
Member

zishen commented Apr 19, 2023

Before answering and looking into this case, I need to ask some questions to understand it.
1. How long does this failed cluster last? 2. Can the value of unschedulable-threshold be set higher, until the failed cluster recovers?

Before answering the question, I'd like to recommend some documents about multi-cluster failover to you. You may get some inspiration from them. :)

Sorry, my description may not have been accurate, but I do know about failover.
I understand that Kubernetes 1.26 will delete the faulted pods when a node fails. You can see here: Kubernetes 1.26: Non-Graceful Node Shutdown Moves to Beta.
So I want to know why we need to keep the application in the failed clusters, and how to do it.

@Poor12
Member Author

Poor12 commented Apr 20, 2023

For multi-cluster failover, some users may hope that the application can remain on the failed cluster, and after the failure is resolved it is up to the user to decide how to deal with the redundant copy. In this case, we may want to keep the GracefulEvictionTask.

@zishen
Member

zishen commented Apr 20, 2023

For multi-cluster failover, some users may hope that the application can remain on the failed cluster, and after the failure is resolved it is up to the user to decide how to deal with the redundant copy. In this case, we may want to keep the GracefulEvictionTask.

OK, I got it. It looks good.
Thanks for your patience.
