Fix update deployment when there is a change in replicas #715
Conversation
The following is the coverage report on the affected files.
/lgtm
```
@@ -451,20 +454,20 @@ func Test_reconcileDeployment(t *testing.T) {
		name: "deployment-replica-update",
		startResources: test.Resources{
			Namespaces:     []*corev1.Namespace{namespaceResource},
			EventListeners: []*v1alpha1.EventListener{eventListener1},
			Deployments:    []*appsv1.Deployment{deployment3},
			EventListeners: []*v1alpha1.EventListener{eventListener3},
```
One issue here is that before we had `spec.replicas` in the EL itself, users had to manually update the replicas on the deployment itself. And we had code and this test to verify that we do not overwrite any changes to the number of replicas on an EL. So, this is a change in our behavior. I am thinking we can either:

1. Honor the old replica changes, i.e. if someone changes the existing replica numbers in the deployment itself and `spec.Replicas` is not set on the EL, we should not set it back. This keeps the behavior backwards compatible.
2. Deprecate the existing behavior (not sure how many people depend on it) and only honor the EL's `spec.replicas` -- we just call this out in the release notes.
One thing I'm curious about (and don't know) is how this will work with Horizontal Pod Autoscalers. Do they change the deployment's spec.Replicas? In that case, if an HPA scales the deployment we do not want to inadvertently scale it back down.
Can we have the behavior where if `el.spec.Replicas` is set, then we track changes only in `el.spec.Replicas` and don't care what the user set on the deployment?
If `el.spec.Replicas` isn't set, then we don't change the deployment replicas and let the k8s controller take care of it?
HPA uses the deployment's number, as you surmised. So design 2 will break them.
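The rule proposed above can be sketched in plain Go. This is a simplified illustration with hypothetical types (`EventListenerSpec`, `DeploymentSpec`, `reconcileReplicas` are not the actual Triggers API), showing the nil-pointer check that distinguishes "user set `spec.replicas`" from "field left unset":

```go
package main

import "fmt"

// Hypothetical, simplified stand-ins for the real API types.
type EventListenerSpec struct {
	Replicas *int32 // nil means the user did not set spec.replicas
}

type DeploymentSpec struct {
	Replicas int32
}

// reconcileReplicas returns the replica count the Deployment should have
// after a sync: the EL value wins only when it is explicitly set.
func reconcileReplicas(el EventListenerSpec, dep DeploymentSpec) int32 {
	if el.Replicas != nil {
		// spec.replicas is set: track only the EL's value.
		return *el.Replicas
	}
	// spec.replicas is unset: leave the Deployment (or HPA) in control.
	return dep.Replicas
}

func main() {
	three := int32(3)
	fmt.Println(reconcileReplicas(EventListenerSpec{Replicas: &three}, DeploymentSpec{Replicas: 5})) // 3
	fmt.Println(reconcileReplicas(EventListenerSpec{}, DeploymentSpec{Replicas: 5}))                 // 5
}
```

With this shape, a manual `kubectl edit deployment` or an HPA write survives reconciliation whenever the EL stays silent about replicas.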
Yeah, this is something we need to verify with HPA.
Thanks for pointing it out; I will try it and update here.
> One thing I'm curious about (and don't know) is how this will work with Horizontal Pod Autoscalers. Do they change the deployment's spec.Replicas? In that case, if an HPA scales the deployment we do not want to inadvertently scale it back down.

@dibyom Yes, HPA uses the deployment, so with this PR change, if HPA increases the replicas on the deployment whenever its condition is met, that will always be overridden by `el.Spec.Replicas`.
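The fight described above can be simulated in a few lines. This is a toy sketch (not the actual controller code; `syncReplicas` is a hypothetical name) of an always-override reconciler undoing an HPA's scale-up:

```go
package main

import "fmt"

// syncReplicas models an always-override reconciler: whatever the
// Deployment currently has, the EL's spec.Replicas wins on every sync.
func syncReplicas(elReplicas, depReplicas int32) int32 {
	return elReplicas
}

func main() {
	el := int32(1)  // user set spec.replicas: 1 on the EventListener
	dep := int32(1) // Deployment starts in sync

	dep = 4 // an HPA scales the Deployment up under load
	fmt.Println("after HPA scale-up:", dep)

	dep = syncReplicas(el, dep) // next reconcile undoes the HPA's work
	fmt.Println("after reconcile:", dep)
}
```

Every HPA scale-up would be reverted on the controller's next sync, which is exactly the inadvertent scale-down concern raised earlier.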
How about the below opinion on supporting replicas:

1. Let's keep `spec.replicas`, which can be set by the user, and add a note in the README that if the user sets `spec.replicas` as part of the EventListener YAML, it will override replicas values set manually (e.g. via `kubectl edit deployment`) or by setting an HPA. Which means if `spec.replicas` is set in the EventListener YAML, it won't respect replicas values edited by the user manually or through any other mechanism (e.g. HPA).
2. If the user doesn't set `spec.replicas` in the EventListener YAML, the behavior will be the same as it was before the replicas field was added to the EventListener YAML.

A few points from the WG discussion:

1. Don't allow `spec.replicas` as part of the EventListener YAML; instead hardcode replicas to some number other than 1. -> With this approach, if we hardcode `replicas` to, say, 3, and an HPA is configured with `--min=2 --max=5`, then the replicas number still varies.
2. Some prior art / similar behavior for k8s addons: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/addon-manager -> This is a bit tricky to implement.
Thanks for summarizing the discussion. It sounds good to me!
Force-pushed from 1353fed to e1e3588.
The following is the coverage report on the affected files.
Force-pushed from e1e3588 to 2f483a9.
The following is the coverage report on the affected files.
/hold
Force-pushed from 2f483a9 to 42372d5.
The following is the coverage report on the affected files.
/approve
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: dibyom. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
Force-pushed from 42372d5 to 4f94a64.
/hold cancel
The following is the coverage report on the affected files.
```
@@ -166,6 +166,8 @@ check out the guide on [exposing EventListeners](./exposing-eventlisteners.md).
 The `replicas` field is optional. By default, the number of replicas of EventListener is 1.
 If you want to deploy more than one pod, you can specify the number to this field.
 
+**Note:** If the user sets the `replicas` field in the EventListener YAML, it won't respect replicas values edited manually or through any other mechanism (e.g. HPA).
```
@dibyom Added a note as per the discussion here #715 (comment)
Thanks!
/lgtm
Changes
The changes fix the update-deployment operation when there is a change in the replicas number.
With the current code, whenever the user updates `replicas` as part of an EventListener, the update succeeds but there is no change in behavior.

Fix: #714
Submitter Checklist
These are the criteria that every PR should meet; please check them off as you
review them:
See the contribution guide for more details.
Release Notes