Scheduler: support the ability to automatically assign replicas evenly #4805
If the weights are all set to the same value, I understand that is the effect. I understand that sometimes the number of replicas is not divisible by the number of clusters; in that case, some clusters must end up with one more replica.

For general scenarios, we can only achieve a best-effort, approximately even assignment. This is an unchangeable fact.

How about describing it in detail at a community meeting?
Given the plausibility of this feature, and the fact that implementing it is not very complicated, how about we do this requirement as an OSPP project? @RainbowMango @whitewindmills
If the user specifies this strategy, will it ignore the result of
@Vacant2333 |
Hello, I wonder when this would behave differently from using equal static weights like the following. Thanks for your answer @whitewindmills

```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation
spec:
  # ...
  placement:
    replicaScheduling:
      replicaDivisionPreference: Weighted
      replicaSchedulingType: Divided
      weightPreference:
        staticWeightList:
          - targetCluster:
              clusterNames:
                - member1
            weight: 1
          - targetCluster:
              clusterNames:
                - member2
            weight: 1
```
@Vacant2333
Hope it helps you.
@whitewindmills I got it. If this feature is not added to the OSPP, I would like to implement it. I'm watching karmada-scheduler for now.
Hi @Vacant2333, we are going to add this task to OSPP 2024. You can join in the discussion and review.
/assign |
What would you like to be added:
Background
We want to introduce a new replica assignment strategy in the scheduler, which supports an even assignment of the target replicas across the currently selected clusters.
Explanation
After going through the filtering, prioritization, and selection phases, three clusters (`member1`, `member2`, `member3`) were selected. We will automatically assign 9 replicas evenly among these three clusters; the result we expect is `[{member1: 3}, {member2: 3}, {member3: 3}]`.
Why is this needed:
User Story
As a developer, we have a deployment with 2 replicas that needs to be deployed with high availability across AZs. We hope Karmada can schedule it to two AZs and ensure that there is a replica on each AZ.
[Figure "2AZ": a Deployment with 2 replicas spread across two availability zones]
Our PropagationPolicy might look like this:
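The policy from the original issue is not reproduced in this thread. A plausible sketch, assuming the author used Karmada's dynamic weighting by available replicas (the `AvailableReplicas` strategy discussed below), might be:

```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation
spec:
  # ...
  placement:
    replicaScheduling:
      replicaDivisionPreference: Weighted
      replicaSchedulingType: Divided
      weightPreference:
        # weight each cluster by how many replicas it can still hold
        dynamicWeight: AvailableReplicas
```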
But unfortunately, the strategy `AvailableReplicas` does not guarantee that our replicas are evenly assigned.

Any ideas?
We can introduce a new replica assignment strategy like `AvailableReplicas`; maybe we can name it `AverageReplicas`. It is essentially different from static weight assignment, because it does not support spread constraints and it is mandatory: when assigning replicas, it does not consider whether a cluster can actually place that many replicas.