Removing spec.replicas of the Deployment resets replicas count to single replica #67135
What happened:
A Deployment spec had a hardcoded replicas count:
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    name: critical-service
  name: critical-service
spec:
  replicas: 100
  ...
```
Later, we decided to let the replicas count be managed outside of the manifest, and removed the field from the spec:
```diff
diff --git a/deployment.yml b/deployment.yml
index 339531d..f5a3e5f 100644
--- a/deployment.yml
+++ b/deployment.yml
@@ -5,5 +5,4 @@ metadata:
     name: critical-service
   name: critical-service
 spec:
-  replicas: 100
   ...
```
After the updated spec was applied, Kubernetes ignored the existing replica count and scaled the Deployment down to a single replica.
What you expected to happen:
We expected that omitting the replicas field would leave the existing replica count untouched instead of resetting it to a single replica.
How to reproduce it (as minimally and precisely as possible):
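A rough sketch, using the manifest and diff shown above (the deployment name comes from that example, and the exact output is approximate):

```sh
# 1. Apply the manifest that hardcodes replicas: 100
kubectl apply -f deployment.yml

# 2. Remove the replicas field from deployment.yml (see the diff above)
#    and apply the updated manifest
kubectl apply -f deployment.yml

# 3. The Deployment is now scaled down to the default of a single replica
kubectl get deployment critical-service -o jsonpath='{.spec.replicas}'   # prints 1
```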
Anything else we need to know?:
For us, scaling down by default feels like extremely dangerous behaviour. It makes sense to use the default of 1 replica for new resources, but not for existing resources that have thousands of replicas running.
defaults are applied for missing values on both create and update.

- to let the replicas field be ignored by apply, omit it from the initial applied manifest
- to stop replicas from being managed by apply after it has been included, remove it both from the manifest and from the last-applied-configuration annotation (for example with kubectl apply edit-last-applied)
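For reference, a sketch of that second option (the deployment name follows the example above; this is one possible sequence, not a verbatim recipe):

```sh
# See which fields apply currently tracks for this Deployment
kubectl apply view-last-applied deployment/critical-service

# Remove the replicas field from the last-applied-configuration annotation
# so future applies no longer claim ownership of it
kubectl apply edit-last-applied deployment/critical-service

# Applying a manifest that omits replicas now leaves the live count untouched
kubectl apply -f deployment.yml
```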
@liggitt thanks for the tip!
I understand that it comes from the default value of 1 for spec.replicas being applied whenever the field is missing.
My intention for opening this issue was more about starting a conversation about what's the best default behaviour for Kubernetes.
As for me and for my colleagues, it seems like scaling down by default is an extremely dangerous behaviour. It makes sense to use the default of 1 replica for new resources, but Kubernetes should not scale down existing resources that have thousands of replicas live.
I'm looking forward to proposing a change to the ReplicaSet controller to make this behaviour slightly safer and a bit less unexpected. Let me know what you think!
Just got hit by this too. Our CI environment does not currently set the replica value because we would rather manage this separately, so now it looks like we'll need to start managing it in the CI environment. Also, the suggested work-around seems a bit strange: is that equivalent to the first apply setting the replica count to 1 and then re-applying it back to 100 (for example)?
Wouldn't that massively affect what the rolling update would look like?
In case anyone needs our fix, it basically looks like this:
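Roughly along these lines (a reconstruction from the description below; the jsonpath query, fallback value, and deployment name are assumptions rather than the original script):

```sh
#!/bin/sh
# Read the replica count that is currently live in the cluster
REPLICA_COUNT=$(kubectl get deployment critical-service \
  -o jsonpath='{.spec.replicas}' 2>/dev/null)

# Fall back to a default for the very first deploy, when the Deployment doesn't exist yet
REPLICA_COUNT=${REPLICA_COUNT:-1}

# Substitute the placeholder in the template and apply the rendered manifest
sed "s/#REPLICA_COUNT/${REPLICA_COUNT}/" kube-deploy.template.yaml | kubectl apply -f -
```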
And our template kube-deploy.template.yaml has #REPLICA_COUNT in the proper place:
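Something like the following (abridged; only the replicas line matters here, and exactly where the placeholder sits is an assumption):

```yaml
# kube-deploy.template.yaml (abridged)
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: critical-service
spec:
  replicas: #REPLICA_COUNT
  # ... rest of the Deployment spec ...
```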
The reason I chose to put this in as #REPLICA_COUNT was basically to avoid error reports from VSCode, since my YAML files get red underlines if I try to get too fancy.
Oh, forgot to mention: the primary reason we opted not to use kubectl rolling-update and to use kubectl apply instead is because rolling-update can really only affect one attribute, and the primary use case is just the image name. We like the ability to control mounts, RAM, variables, etc. with each git push. This makes our CI environment easier when we do have to make such changes.