
support PodDisruptionBudget resource in default interpreter #2997

Merged: 1 commit merged into karmada-io:master on Jan 9, 2023

Conversation

@a7i (Contributor) commented Dec 27, 2022

Signed-off-by: Amir Alavi amiralavi7@gmail.com

What type of PR is this?
/kind feature

What this PR does / why we need it:

Which issue(s) this PR fixes:
Fixes #

Special notes for your reviewer:

Does this PR introduce a user-facing change?:

support PodDisruptionBudget resource in default interpreter

@karmada-bot karmada-bot added the kind/feature Categorizes issue or PR as related to a new feature. label Dec 27, 2022
@karmada-bot karmada-bot added the size/L Denotes a PR that changes 100-499 lines, ignoring generated files. label Dec 27, 2022
@karmada-bot karmada-bot added size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. and removed size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Dec 27, 2022
@karmada-bot karmada-bot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. and removed size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. labels Dec 27, 2022
@codecov-commenter

Codecov Report

Merging #2997 (1eb8738) into master (c4428d6) will increase coverage by 0.09%.
The diff coverage is 65.75%.

@@            Coverage Diff             @@
##           master    #2997      +/-   ##
==========================================
+ Coverage   38.58%   38.68%   +0.09%     
==========================================
  Files         206      206              
  Lines       18820    18865      +45     
==========================================
+ Hits         7261     7297      +36     
- Misses      11129    11134       +5     
- Partials      430      434       +4     
Flag Coverage Δ
unittests 38.68% <65.75%> (+0.09%) ⬆️

Flags with carried forward coverage won't be shown.

Impacted Files Coverage Δ
.../resourceinterpreter/defaultinterpreter/healthy.go 65.76% <50.00%> (-1.23%) ⬇️
...einterpreter/defaultinterpreter/aggregatestatus.go 71.20% <55.26%> (-1.73%) ⬇️
...rceinterpreter/defaultinterpreter/reflectstatus.go 18.47% <85.18%> (+11.47%) ⬆️
pkg/scheduler/core/division_algorithm.go 85.71% <0.00%> (-0.38%) ⬇️
pkg/scheduler/core/assignment.go 79.76% <0.00%> (ø)
...scheduler/core/spreadconstraint/select_clusters.go 86.04% <0.00%> (+0.33%) ⬆️
pkg/search/proxy/store/util.go 93.83% <0.00%> (+0.47%) ⬆️
.../scheduler/core/spreadconstraint/group_clusters.go 95.20% <0.00%> (+2.01%) ⬆️
pkg/util/binding.go 100.00% <0.00%> (+3.89%) ⬆️
pkg/util/helper/binding.go 72.33% <0.00%> (+7.76%) ⬆️


@jwcesign (Member)

Thanks for your PR @a7i. I have one question: for PDB, there is spec.minAvailable, as follows:

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  minAvailable: 10
  selector:
    matchLabels:
      app: zookeeper

So will this PDB be created in all member clusters with minAvailable: 10, or will it be split into different numbers for different member clusters?

@RainbowMango (Member)

Hi @a7i, thanks for doing this.
Can you share your use case here?

@a7i (Contributor, Author) commented Dec 29, 2022

Thanks for your PR @a7i. I have one question: for PDB, there is spec.minAvailable, as follows:

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  minAvailable: 10
  selector:
    matchLabels:
      app: zookeeper

So will this PDB be created in all member clusters with minAvailable: 10, or will it be split into different numbers for different member clusters?

@jwcesign As far as I know, that is how it works today, with or without this change. We are doing more testing on PDBs this week, and I can circle back and let you know how it works with different propagation policies.

@a7i (Contributor, Author) commented Dec 29, 2022

Hi @a7i, thanks for doing this. Can you share your use case here?

@RainbowMango We are conducting experiments on how Karmada treats PDBs. The first blocker was that PDB status is not aggregated: a kubectl get pdb .. will yield missing or N/A results today.
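
To make that gap concrete, a minimal sketch of reflecting a PodDisruptionBudget's status through the default interpreter is shown below. The function name and wiring are assumptions for illustration, not necessarily the exact code in this PR; it only shows the idea of converting the unstructured object and handing its status back so the control plane can aggregate it.

package defaultinterpreter

import (
	"encoding/json"

	policyv1 "k8s.io/api/policy/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime"
)

// reflectPodDisruptionBudgetStatus is an illustrative sketch, not this PR's
// exact code: convert the unstructured PDB and return its whole status.
func reflectPodDisruptionBudgetStatus(object *unstructured.Unstructured) (*runtime.RawExtension, error) {
	var pdb policyv1.PodDisruptionBudget
	if err := runtime.DefaultUnstructuredConverter.FromUnstructured(object.Object, &pdb); err != nil {
		return nil, err
	}
	// Preserve currentHealthy, desiredHealthy, disruptionsAllowed and
	// expectedPods so `kubectl get pdb` no longer shows N/A.
	raw, err := json.Marshal(pdb.Status)
	if err != nil {
		return nil, err
	}
	return &runtime.RawExtension{Raw: raw}, nil
}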

@chaunceyjiang (Member) commented Dec 29, 2022

So will this PDB be created in all member clusters with minAvailable: 10, or will it be split into different numbers for different member clusters?

At present, Karmada can only create this PDB (minAvailable: 10) in all member clusters, right? @jwcesign

@jwcesign (Member)

At present, Karmada can only create this PDB (minAvailable: 10) in all member clusters, right? @jwcesign

I think yes. If we want to make the PDB work perfectly, minAvailable needs to be scheduled based on the PropagationPolicy; the logic is like what we discussed for FederationHPA. @chaunceyjiang, what do you think?
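
For illustration only, splitting minAvailable by a PropagationPolicy's static weights could look like the sketch below. Nothing like this is implemented today, and every name in it is hypothetical; it just makes the "scheduled based on the PropagationPolicy" idea concrete.

package sketch

// divideMinAvailable is a hypothetical weight-based split of a PDB's
// minAvailable across member clusters; Karmada does not do this today.
func divideMinAvailable(total int32, weights map[string]int64) map[string]int32 {
	parts := make(map[string]int32, len(weights))
	var sum int64
	for _, w := range weights {
		sum += w
	}
	if sum == 0 {
		return parts
	}
	var assigned int32
	for cluster, w := range weights {
		share := int32(int64(total) * w / sum) // floor of each weighted share
		parts[cluster] = share
		assigned += share
	}
	// Hand out the rounding remainder so the shares sum back to total
	// (which clusters receive it is arbitrary in this sketch).
	for cluster := range parts {
		if assigned >= total {
			break
		}
		parts[cluster]++
		assigned++
	}
	return parts
}

With the zk-pdb example above, divideMinAvailable(10, map[string]int64{"member1": 1, "member2": 1}) would yield minAvailable: 5 for each of two equally weighted clusters.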

@a7i (Contributor, Author) commented Dec 30, 2022

At present, Karmada can only create this PDB (minAvailable: 10) in all member clusters, right? @jwcesign

I think yes. If we want to make the PDB work perfectly, minAvailable needs to be scheduled based on the PropagationPolicy; the logic is like what we discussed for FederationHPA. @chaunceyjiang, what do you think?

Confirming that this is the case using v1.3.3: currently, the percentage or number (min or max) creates essentially the same resource in all member clusters, regardless of the weighted/divided configuration of the PropagationPolicy.

@jwcesign (Member)

Hi @a7i, in your use case, which behavior is expected: keeping the same values as the original PDB, or scheduling them based on the PropagationPolicy configuration?

@chaunceyjiang (Member)

I think yes. If we want to make the PDB work perfectly, minAvailable needs to be scheduled based on the PropagationPolicy; the logic is like what we discussed for FederationHPA. @chaunceyjiang, what do you think?

Good idea, I agree with you.

@a7i (Contributor, Author) commented Dec 30, 2022

@jwcesign My organization almost exclusively uses maxUnavailable with a percentage (e.g. 20%), aside from a few exceptions like Kafka, where the replica count is always fixed.

I think that for percentages, keeping the same value is valid, but even then there are edge cases:

  • member-1: 2 replica Deployment
  • member-2: 2 replica Deployment
  • PDB maxUnavailable: 50%

Deleting a single Pod from member-1 will signal that no more disruption is allowed (even though at a global level, one more is still tolerable). We could report that disruption is allowed in the aggregated status, but we need to take into account what that means for operations that still happen at the member-cluster level (e.g. draining a node, and how it could get blocked).
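
The arithmetic behind that edge case, sketched with the real apimachinery percentage helper (the surrounding function is a simplified illustration of the disruption controller's math, not Karmada code):

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/intstr"
)

// allowedDisruptions is a simplified model of the PDB math: resolve
// maxUnavailable against the replica count, then subtract pods already down.
func allowedDisruptions(replicas int, maxUnavailable intstr.IntOrString, alreadyDown int) int {
	resolved, _ := intstr.GetScaledValueFromIntOrPercent(&maxUnavailable, replicas, true)
	if allowed := resolved - alreadyDown; allowed > 0 {
		return allowed
	}
	return 0
}

func main() {
	mu := intstr.FromString("50%")
	fmt.Println(allowedDisruptions(2, mu, 1)) // member-1 after one deletion: 0, further disruption blocked
	fmt.Println(allowedDisruptions(2, mu, 0)) // member-2: 1
	fmt.Println(allowedDisruptions(4, mu, 1)) // global view of 4 replicas: 1, one more still tolerable
}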

I agree that handling PDB requires its own proposal (separate from HPA), but I still feel that this PR can be reviewed since it's only aggregating status.

I (or someone from my org) would be more than happy to engage in discussions/designs for the proposal.

@jwcesign (Member) commented Jan 3, 2023

Yeah, it should work fine with percentages.

I am OK with this PR (only aggregating status). cc @XiShanYongYe-Chang for checking.
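
Summing the per-cluster status fields is the natural shape of that aggregation. A rough sketch follows; the function in the PR's aggregatestatus.go may well differ in name and details.

package defaultinterpreter

import policyv1 "k8s.io/api/policy/v1"

// aggregatePDBStatus sums the status reported by each member cluster into
// one control-plane view. Illustrative only; see the PR for the real code.
func aggregatePDBStatus(statuses []policyv1.PodDisruptionBudgetStatus) policyv1.PodDisruptionBudgetStatus {
	var out policyv1.PodDisruptionBudgetStatus
	for _, s := range statuses {
		out.CurrentHealthy += s.CurrentHealthy
		out.DesiredHealthy += s.DesiredHealthy
		out.DisruptionsAllowed += s.DisruptionsAllowed
		out.ExpectedPods += s.ExpectedPods
	}
	return out
}

Note that, as discussed above, a summed disruptionsAllowed can overstate what any single member cluster will actually permit during a node drain.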

@XiShanYongYe-Chang (Member) left a comment

Hi @a7i, thanks for your contribution!
Can you help add an E2E test in the https://github.com/karmada-io/karmada/blob/master/test/e2e/resource_test.go file for the PodDisruptionBudget resource?
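
For reference, a minimal sketch of the kind of check being requested, written against the plain Kubernetes clientset rather than Karmada's e2e framework (none of whose helper names are assumed here):

package e2e

import (
	"context"

	"github.com/onsi/ginkgo/v2"
	"github.com/onsi/gomega"
	policyv1 "k8s.io/api/policy/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
)

var kubeClient kubernetes.Interface // provided by the suite's setup

var _ = ginkgo.Describe("PodDisruptionBudget propagation", func() {
	ginkgo.It("eventually reports an aggregated status", func() {
		mu := intstr.FromString("50%")
		pdb := &policyv1.PodDisruptionBudget{
			ObjectMeta: metav1.ObjectMeta{Name: "nginx-pdb", Namespace: "default"},
			Spec: policyv1.PodDisruptionBudgetSpec{
				MaxUnavailable: &mu,
				Selector:       &metav1.LabelSelector{MatchLabels: map[string]string{"app": "nginx"}},
			},
		}
		_, err := kubeClient.PolicyV1().PodDisruptionBudgets("default").
			Create(context.TODO(), pdb, metav1.CreateOptions{})
		gomega.Expect(err).ShouldNot(gomega.HaveOccurred())

		// Before this PR the control-plane PDB status stayed empty; after it,
		// the reflected/aggregated status should eventually report pods.
		gomega.Eventually(func() int32 {
			got, err := kubeClient.PolicyV1().PodDisruptionBudgets("default").
				Get(context.TODO(), "nginx-pdb", metav1.GetOptions{})
			if err != nil {
				return 0
			}
			return got.Status.ExpectedPods
		}).Should(gomega.BeNumerically(">", 0))
	})
})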

@RainbowMango RainbowMango added this to the v1.5 milestone Jan 4, 2023
@a7i a7i force-pushed the pdb-status branch 2 times, most recently from f5bac51 to 9b8a2b5, on January 6, 2023 04:24
@a7i a7i requested review from XiShanYongYe-Chang and removed request for RainbowMango January 6, 2023 14:08
@a7i
Copy link
Contributor Author

a7i commented Jan 6, 2023

Just requested another review from @XiShanYongYe-Chang; I didn't intend to remove the existing request.

cc: @RainbowMango

@XiShanYongYe-Chang (Member) left a comment

Just some nits, other parts look good to me, thanks a lot~

Review threads (outdated, resolved) on:
  • pkg/resourceinterpreter/defaultinterpreter/healthy_test.go
  • test/e2e/framework/poddisruptionbudget.go (two threads)
@karmada-bot (Collaborator)

@a7i: Cannot trigger testing until a trusted user reviews the PR and leaves an /ok-to-test message.

In response to this:

/retest

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Signed-off-by: Amir Alavi <amiralavi7@gmail.com>
@XiShanYongYe-Chang (Member) left a comment

Thanks a lot.
/lgtm
Ask @RainbowMango for a review.

@karmada-bot karmada-bot added the lgtm Indicates that a PR is ready to be merged. label Jan 9, 2023
@RainbowMango (Member) left a comment

/approve

@@ -98,8 +98,10 @@ const (
 	PersistentVolumeClaimKind = "PersistentVolumeClaim"
 	// PersistentVolumeKind indicates the target resource is a persistentvolume
 	PersistentVolumeKind = "PersistentVolume"
-	// HorizontalPodAutoscalerKind indicated the target resource is a horizontalpodautoscaler
+	// HorizontalPodAutoscalerKind indicates the target resource is a horizontalpodautoscaler
A member commented on this diff:

nice finding.

@karmada-bot (Collaborator)

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: RainbowMango

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@karmada-bot karmada-bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jan 9, 2023
@karmada-bot karmada-bot merged commit d4cee27 into karmada-io:master Jan 9, 2023
@RainbowMango (Member)

I just opened a task (karmada-io/website#287) to track the documentation update.
