Add growing member test scenario #20239
base: main
Conversation
Signed-off-by: Nont <nont@duck.com>
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: nwnt. The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment.
@serathius Not sure if this is the same as what you have in mind. I probably missed a lot of things, so I would definitely love your feedback on it.
@nwnt: The following tests failed, say /retest to rerun all failed tests:

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files: see 37 files with indirect coverage changes.

@@            Coverage Diff             @@
##             main   #20239      +/-   ##
==========================================
- Coverage   69.18%   69.00%   -0.18%
==========================================
  Files         414      411       -3
  Lines       34403    34273     -130
==========================================
- Hits        23800    23651     -149
- Misses       9209     9211       +2
- Partials     1394     1411      +17

Continue to review full report in Codecov by Sentry.
@@ -40,6 +40,10 @@ var (
	MemberReplace          Failpoint = memberReplace{}
	MemberDowngrade        Failpoint = memberDowngrade{}
	MemberDowngradeUpgrade Failpoint = memberDowngradeUpgrade{}
	GrowMember             Failpoint = &growMember{
We are not growing members here :P

We usually talk about the "cluster", not "members", and we call the process "scale up". Maybe ClusterScaleUp?
@@ -304,3 +308,28 @@ func patchArgs(args []string, flag, newValue string) error {
	}
	return fmt.Errorf("--%s flag not found", flag)
}

type growMember struct {
	currentSize int
Which "current" you mean? Maybe initialSize
.
}

func (f *growMember) Available(config e2e.EtcdProcessClusterConfig, member e2e.EtcdProcess, profile traffic.Profile) bool {
	return f.currentSize < f.targetSize
Here is the tricky part. If you start from a cluster of size 1, then the traffic will talk to the single initial member. I would recommend instead starting from 3 members, removing 2 and adding back 2. This is still tricky because StartNewProc will allocate unique port numbers, so we will need to work around that.
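For illustration, here is a minimal Go sketch of that remove-two/add-back-two flow. It is written against a small hypothetical clusterOps interface (RemoveMember, AddMember, Size, and scaleDownUp are placeholders, not the real e2e.EtcdProcessCluster API), and the port-reuse issue around StartNewProc would have to be solved by whatever implements AddMember:

```go
package failpoint

import (
	"context"
	"fmt"
)

// clusterOps is a hypothetical abstraction used only for this sketch; the real
// failpoint would drive the e2e.EtcdProcessCluster API directly.
type clusterOps interface {
	RemoveMember(ctx context.Context, index int) error
	AddMember(ctx context.Context) error // assumed to reuse the removed member's ports
	Size() int
}

// scaleDownUp follows the reviewer's suggestion: start from a 3-member cluster,
// remove two members, then add two back, so traffic always has at least one
// member (the initial one) to talk to while the failpoint runs.
func scaleDownUp(ctx context.Context, c clusterOps) error {
	const toCycle = 2
	if c.Size() < toCycle+1 {
		return fmt.Errorf("need at least %d members, got %d", toCycle+1, c.Size())
	}
	// Remove from the tail so member 0 keeps serving client traffic.
	for i := 0; i < toCycle; i++ {
		if err := c.RemoveMember(ctx, c.Size()-1); err != nil {
			return fmt.Errorf("remove member: %w", err)
		}
	}
	// Adding members back is the tricky part the comment mentions: the new
	// processes must come up on the freed ports rather than fresh ones.
	for i := 0; i < toCycle; i++ {
		if err := c.AddMember(ctx); err != nil {
			return fmt.Errorf("add member: %w", err)
		}
	}
	return nil
}
```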
			Profile: tp.Profile,
			Cluster: *e2e.NewConfig(clusterOfSize1Options...),
		},
		TestScenario{
No need for this. As long as the failpoint is registered in allFailpoints, it will be used. For testing I recommend just temporarily disabling the other failpoints.
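As a rough sketch of what that registration looks like, assuming a stripped-down Failpoint interface and the field names from this PR (the real interface and list in tests/robustness/failpoint contain more methods and entries):

```go
package failpoint

// Failpoint is reduced to a single method here purely for this sketch.
type Failpoint interface {
	Name() string
}

type growMember struct{ currentSize, targetSize int }

func (f *growMember) Name() string { return "growMember" }

// GrowMember mirrors the declaration added in this PR (initial values assumed).
var GrowMember Failpoint = &growMember{currentSize: 1, targetSize: 3}

// allFailpoints is the package-level list the robustness test picks from.
// Adding the new failpoint here is all the registration that is needed;
// no dedicated TestScenario entry is required.
var allFailpoints = []Failpoint{
	GrowMember,
}
```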
Good work on the implementation, implementing a new failpoint is pretty tricky.

Got an error […]

Problem we are hitting with […]

Unfortunately, to merge this feature we will need to rewrite […]
For #20137