support PodDisruptionBudget resource in default interpreter #2997
Conversation
Codecov Report

```
@@            Coverage Diff             @@
##           master    #2997      +/-   ##
==========================================
+ Coverage   38.58%   38.68%   +0.09%
==========================================
  Files         206      206
  Lines       18820    18865      +45
==========================================
+ Hits         7261     7297      +36
- Misses      11129    11134       +5
- Partials      430      434       +4
```
Thanks for your PR @a7i , I have one question. For pdb, there is

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  minAvailable: 10
  selector:
    matchLabels:
      app: zookeeper
```

So this pdb will be created in all member clusters with the same `minAvailable`?
Hi @a7i, thanks for doing this.
@jwcesign As far as I know, that is how it works today, with or without this change. We are doing more testing on PDBs this week, and I can circle back and let you know how it works with different propagation policies.
@RainbowMango We are conducting experiments on how Karmada treats PDBs. The first blocker was that PDB status is not aggregated.
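The status aggregation under discussion is essentially additive over the numeric PDB status fields reported by each member cluster. A minimal sketch of that idea, using a simplified stand-in struct (its field names mirror `policy/v1` `PodDisruptionBudgetStatus`, but this is illustrative, not the actual interpreter code):

```go
package main

import "fmt"

// PDBStatus is a simplified stand-in for policy/v1 PodDisruptionBudgetStatus;
// the real interpreter works on the actual API type.
type PDBStatus struct {
	CurrentHealthy     int32
	DesiredHealthy     int32
	DisruptionsAllowed int32
	ExpectedPods       int32
}

// aggregate sums the numeric status fields reported by each member cluster,
// producing a single federated view of the PDB.
func aggregate(statuses []PDBStatus) PDBStatus {
	var out PDBStatus
	for _, s := range statuses {
		out.CurrentHealthy += s.CurrentHealthy
		out.DesiredHealthy += s.DesiredHealthy
		out.DisruptionsAllowed += s.DisruptionsAllowed
		out.ExpectedPods += s.ExpectedPods
	}
	return out
}

func main() {
	// Hypothetical per-cluster statuses for one propagated PDB.
	member1 := PDBStatus{CurrentHealthy: 5, DesiredHealthy: 5, DisruptionsAllowed: 0, ExpectedPods: 5}
	member2 := PDBStatus{CurrentHealthy: 6, DesiredHealthy: 5, DisruptionsAllowed: 1, ExpectedPods: 6}
	agg := aggregate([]PDBStatus{member1, member2})
	fmt.Printf("currentHealthy=%d disruptionsAllowed=%d\n", agg.CurrentHealthy, agg.DisruptionsAllowed)
}
```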
At present, Karmada can only create this pdb in the member clusters as-is.
I think yes; if we want to make pdb work perfectly, it needs its own proposal.
Confirming that this is the case using v1.3.3 |
Hi @a7i , in your use case, which behavior is expected: keeping the same value as the original pdb, or scheduling based on the pp configuration?
Good idea, agree with you. |
@jwcesign My organization almost exclusively uses […]. I think that for percentages, keeping the same value is valid, but there are edge cases even to that:
Deleting a single Pod from member-1 will signal that no more disruption is allowed (even though, at a global level, one more is still tolerable). We could report that disruption is allowed in the aggregated status, but we need to take into account what that means for operations that still happen at the member-cluster level (e.g. draining a node, which could get blocked). I agree that handling PDB requires its own proposal (separate from HPA), but I still feel that this PR can be reviewed since it's only aggregating status. I (or someone from my org) would be more than happy to engage in discussions/designs for the proposal.
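To make this edge case concrete, here is a small sketch of the disruption controller's basic rule for an absolute `minAvailable` (healthy pods minus the required minimum, floored at zero). The pod counts are hypothetical, and the real controller accounts for more state (e.g. already-disrupted pods), but the arithmetic shows why copying an absolute value to every member cluster is stricter than the global intent:

```go
package main

import "fmt"

// disruptionsAllowed applies the basic rule for an absolute minAvailable:
// healthy pods minus the required minimum, never below zero.
func disruptionsAllowed(healthy, minAvailable int32) int32 {
	if d := healthy - minAvailable; d > 0 {
		return d
	}
	return 0
}

func main() {
	// Hypothetical: 11 healthy zookeeper pods, minAvailable: 10.
	// Treated globally, one voluntary disruption is still tolerable.
	fmt.Println("global:", disruptionsAllowed(11, 10))

	// But if the same spec is copied verbatim to two member clusters
	// holding 6 and 5 of those pods, each cluster-local PDB demands
	// 10 healthy pods on its own, so neither allows any disruption.
	fmt.Println("member-1:", disruptionsAllowed(6, 10))
	fmt.Println("member-2:", disruptionsAllowed(5, 10))
}
```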
Yeah, it should work fine with percentages. I am OK with this PR (only aggregating status). cc @XiShanYongYe-Chang for checking
Hi @a7i, thanks for your contribution!
Can you help add an E2E test in the https://github.com/karmada-io/karmada/blob/master/test/e2e/resource_test.go file for the PodDisruptionBudget resource?
Just requested another review from @XiShanYongYe-Chang , didn't intend to remove. cc: @RainbowMango
Just some nits, other parts look good to me, thanks a lot~
@a7i: Cannot trigger testing until a trusted user reviews the PR and leaves an `/ok-to-test` message. In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Signed-off-by: Amir Alavi <amiralavi7@gmail.com>
Thanks a lot.
/lgtm
Ask @RainbowMango for a review.
/approve
```diff
@@ -98,8 +98,10 @@ const (
 	PersistentVolumeClaimKind = "PersistentVolumeClaim"
 	// PersistentVolumeKind indicates the target resource is a persistentvolume
 	PersistentVolumeKind = "PersistentVolume"
-	// HorizontalPodAutoscalerKind indicated the target resource is a horizontalpodautoscaler
+	// HorizontalPodAutoscalerKind indicates the target resource is a horizontalpodautoscaler
```
nice finding.
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: RainbowMango

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files.

Approvers can indicate their approval by writing `/approve` in a comment.
I just opened a task (karmada-io/website#287) to track the documentation update.
Signed-off-by: Amir Alavi amiralavi7@gmail.com
What type of PR is this?
/kind feature

What this PR does / why we need it:
Which issue(s) this PR fixes:
Fixes #
Special notes for your reviewer:
Does this PR introduce a user-facing change?: