Summary: If you update a Helm-managed value directly with kubectl scale, your next helm upgrade run will not alter that value.
Steps:
Create a new Helm chart via helm create test-app
Install that newly generated chart.
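For example, with Helm 2 that would be something along these lines (run from the chart directory, naming the release test-app to match the output below):
$ helm install --name test-app .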
Verify that you currently have 1 replica on the deployment:
$ kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
test-app 1 1 1 1 2h
Scale that deployment via kubectl scale --replicas=5 deployment/test-app
Verify that your deployment has scaled:
$ kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
test-app 5 5 5 5 2h
Run helm upgrade to get the deployment back to the initial state (replicas=1)
$ helm upgrade test-app .
Release "test-app" has been upgraded. Happy Helming!
LAST DEPLOYED: Mon Sep 17 14:48:40 2018
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Service
NAME AGE
test-app 2h
==> v1beta2/Deployment
test-app 2h
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
test-app-8647c6f46d-k4v6b 1/1 Running 0 54s
test-app-8647c6f46d-l2xj6 1/1 Running 0 54s
test-app-8647c6f46d-q59fr 1/1 Running 0 54s
test-app-8647c6f46d-s9km8 1/1 Running 0 2h
test-app-8647c6f46d-vg5gq 1/1 Running 0 54s
Note that the replica count in the deployment has not actually changed:
$ kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
test-app 5 5 5 5 2h
I've done some poking around with some additional c.Log statements in a few different places (createPatch(), updateResource(), update()), and from what I can tell, whatever data store Helm is pulling the "current" config from (the one the newly generated config is diffed against) reflects the state of the resources at the time of the last helm upgrade (or install), rather than the current state that the Kubernetes API would report. The way around this issue is to change your Helm template to reflect the newly changed value (the updated replica count, etc.), but I believe this bug may affect more than just replica counts (it may be related to #1873).
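For instance, with the chart scaffolded by helm create (which exposes the replica count as replicaCount in values.yaml), a workaround along these lines should keep Helm's template in step with the manual change (the value name is taken from the generated chart; adjust it to whatever your template actually uses):
$ helm upgrade test-app . --set replicaCount=5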
If I add a few additional log statements to createPatch() to print out the marshaled data, I can see in the Tiller logs that the "existing" object being diffed does not include the manual kubectl change: it is compared against the object generated from the current state of the template (note that replicas in "Old data" is 1, not 5, which is what I'd expect it to be so that the patch actually gets applied to bring it back to 1):
[kube] 2018/09/17 18:48:40 Old data: {"apiVersion":"apps/v1beta2","kind":"Deployment","metadata":{"labels":{"app":"test-app","chart":"test-app-0.1.0","heritage":"Tiller","release":"test-app"},"name":"test-app","namespace":"default"},"spec":{"replicas":1,"selector":{"matchLabels":{"app":"test-app","release":"test-app"}},"template":{"metadata":{"labels":{"app":"test-app","release":"test-app"}},"spec":{"containers":[{"image":"nginx:stable","imagePullPolicy":"IfNotPresent","livenessProbe":{"httpGet":{"path":"/","port":"http"}},"name":"test-app","ports":[{"containerPort":80,"name":"http","protocol":"TCP"}],"readinessProbe":{"httpGet":{"path":"/","port":"http"}},"resources":{}}]}}}}
[kube] 2018/09/17 18:48:40 New data: {"apiVersion":"apps/v1beta2","kind":"Deployment","metadata":{"labels":{"app":"test-app","chart":"test-app-0.1.0","heritage":"Tiller","release":"test-app"},"name":"test-app","namespace":"default"},"spec":{"replicas":1,"selector":{"matchLabels":{"app":"test-app","release":"test-app"}},"template":{"metadata":{"labels":{"app":"test-app","release":"test-app"}},"spec":{"containers":[{"image":"nginx:stable","imagePullPolicy":"IfNotPresent","livenessProbe":{"httpGet":{"path":"/","port":"http"}},"name":"test-app","ports":[{"containerPort":80,"name":"http","protocol":"TCP"}],"readinessProbe":{"httpGet":{"path":"/","port":"http"}},"resources":{}}]}}}}
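If that reading is right, the same mismatch should also be visible from the CLI without any extra logging: the manifest Tiller has stored for the release should still say replicas: 1, while the live Deployment reports 5. Something like:
$ helm get manifest test-app | grep replicas
  replicas: 1
$ kubectl get deployment test-app -o jsonpath='{.spec.replicas}'
5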
in "Old data" is 1, not 5 [which is what I'd expect it to be so that the patch is actually applied to bring it back to 1]):Output of
helm version
:Output of
kubectl version
:Cloud Provider/Platform (AKS, GKE, Minikube etc.): GKE, but I've replicated this with Docker's built-in Kubernetes support on macOS.