
Karmada does not propagate resources to a new member cluster #2261

Closed
liuchintao opened this issue Jul 26, 2022 · 6 comments · Fixed by #2301
Labels
kind/bug Categorizes issue or PR as related to a bug.

@liuchintao

What happened:

I just want to propagate my Deployment to all member clusters automatically when they join.

apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx
spec:
  resourceSelectors:
  - apiVersion: apps/v1
    kind: Deployment
    name: nginx

After creating this PropagationPolicy, I find the nginx Deployment in cluster-1 and cluster-2.

But when I join a new member cluster, cluster-3, the nginx Deployment is not propagated to it.

What you expected to happen:

The Deployment should be propagated to the new member cluster.

type Placement struct {
	// ClusterAffinity represents scheduling restrictions to a certain set of clusters.
	// If not set, any cluster can be scheduling candidate.
	// +optional
	ClusterAffinity *ClusterAffinity `json:"clusterAffinity,omitempty"`
}
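Per the comment on `ClusterAffinity` above, a nil affinity means every member cluster is a scheduling candidate. A minimal sketch of that rule, with simplified stand-in types (these are not the actual Karmada types):

```go
package main

import "fmt"

// ClusterAffinity restricts scheduling to the named clusters (simplified stand-in).
type ClusterAffinity struct {
	ClusterNames []string
}

// Placement mirrors the quoted API type, reduced to the relevant field.
type Placement struct {
	ClusterAffinity *ClusterAffinity
}

// candidateClusters returns the clusters a binding may be scheduled to.
// A nil ClusterAffinity means every member cluster is a candidate,
// which is why a new cluster-3 is expected to receive the Deployment.
func candidateClusters(p Placement, members []string) []string {
	if p.ClusterAffinity == nil {
		return members
	}
	return p.ClusterAffinity.ClusterNames
}

func main() {
	members := []string{"cluster-1", "cluster-2", "cluster-3"}
	fmt.Println(candidateClusters(Placement{}, members))
}
```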

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Karmada version:
  • kubectl-karmada or karmadactl version (the result of kubectl-karmada version or karmadactl version):
  • Others:
@liuchintao liuchintao added the kind/bug Categorizes issue or PR as related to a bug. label Jul 26, 2022
@RainbowMango
Member

I just want to propagate my deployment to all member clusters automatically, when they join.

Yes, I think this is a reasonable use case. But currently the scheduler doesn't re-schedule when a cluster joins or is removed.

@dddddai I can see there is a TODO; do you mean to cover this case?

// TODO(dddddai): reschedule bindings on cluster change
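The TODO points at the missing piece: nothing requeues existing bindings when the member-cluster set changes. One way to cover the case is a cluster-add hook that re-enqueues all bindings for scheduling; a minimal sketch, assuming a hypothetical scheduler with a simple work queue (none of these names are from the Karmada codebase):

```go
package main

import "fmt"

// binding is a hypothetical stand-in for a ResourceBinding key ("namespace/name").
type binding string

// scheduler holds a work queue of bindings awaiting (re)scheduling.
type scheduler struct {
	queue []binding
}

// onClusterAdd is a hypothetical informer callback: when a new member
// cluster joins, requeue every existing binding so the scheduler can
// reconsider it against the enlarged cluster set.
func (s *scheduler) onClusterAdd(cluster string, all []binding) {
	fmt.Printf("cluster %s joined, requeuing %d binding(s)\n", cluster, len(all))
	s.queue = append(s.queue, all...)
}

func main() {
	s := &scheduler{}
	s.onClusterAdd("cluster-3", []binding{"default/nginx"})
	fmt.Println(len(s.queue))
}
```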

@dddddai
Member

dddddai commented Jul 27, 2022

Yeah, I was thinking so; there were similar issues:
#1644
#829 (comment)

I think the descheduler should be responsible for this case?

@RainbowMango
Member

I think the descheduler should be responsible for this case?

cc @Garrybest

I'll pay attention to this feature. @dddddai can you help to lead the effort?

@Garrybest
Member

The descheduler's duty is only to evict replicas. This issue focuses on scheduling an object to newly joined clusters.

I think we could always call s.scheduleResourceBinding(rb) when the placement is Duplicated. Is that OK?
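The suggestion could be sketched as a predicate that triggers rescheduling for Duplicated placements whenever a ready cluster is not yet among the scheduled ones. A hedged sketch; the function and field names are illustrative, not Karmada's actual API:

```go
package main

import "fmt"

// shouldReschedule reports whether a binding needs to go through
// scheduling again: with Duplicated placement, any ready cluster
// missing from the binding's scheduled set (e.g. a newly joined
// cluster-3) means the result is stale.
func shouldReschedule(schedulingType string, scheduled, ready []string) bool {
	if schedulingType != "Duplicated" {
		return false
	}
	have := make(map[string]bool, len(scheduled))
	for _, c := range scheduled {
		have[c] = true
	}
	for _, c := range ready {
		if !have[c] {
			return true // a ready cluster is not yet used
		}
	}
	return false
}

func main() {
	fmt.Println(shouldReschedule("Duplicated",
		[]string{"cluster-1", "cluster-2"},
		[]string{"cluster-1", "cluster-2", "cluster-3"}))
}
```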

@liuchintao
Author

Hello, is there any news?

@RainbowMango
Member

@liuchintao we are working on it. @chaunceyjiang sent a PR (#2301) for this.
/assign @chaunceyjiang
