
Manually edited values via kubectl are not updated on the next helm upgrade run #4654

Closed
blakestoddard opened this issue Sep 17, 2018 · 3 comments

blakestoddard commented Sep 17, 2018

Summary: If you update a Helm-managed value directly with kubectl scale, your next helm upgrade run will not alter that value.

Steps:

  1. Create a new Helm chart via helm create test-app
  2. Install that newly generated chart.
  3. Verify that you currently have 1 replica on the deployment:
$ kubectl get deployments
NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
test-app   1         1         1            1           2h
  4. Scale that deployment via kubectl scale --replicas=5 deployment/test-app
  5. Verify that your deployment has scaled:
$ kubectl get deployments
NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
test-app   5         5         5            5           2h
  6. Run helm upgrade to get the deployment back to the initial state (replicas=1):
$ helm upgrade test-app .
Release "test-app" has been upgraded. Happy Helming!
LAST DEPLOYED: Mon Sep 17 14:48:40 2018
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Service
NAME      AGE
test-app  2h

==> v1beta2/Deployment
test-app  2h

==> v1/Pod(related)

NAME                       READY  STATUS   RESTARTS  AGE
test-app-8647c6f46d-k4v6b  1/1    Running  0         54s
test-app-8647c6f46d-l2xj6  1/1    Running  0         54s
test-app-8647c6f46d-q59fr  1/1    Running  0         54s
test-app-8647c6f46d-s9km8  1/1    Running  0         2h
test-app-8647c6f46d-vg5gq  1/1    Running  0         54s
  7. Note that the replica count in the deployment has not actually changed:
$ kubectl get deployments
NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
test-app   5         5         5            5           2h

I've done some poking around with additional c.Log statements in a few different places (createPatch(), updateResource(), update()), and from what I can tell, whatever data store Helm pulls the "current" config from (the one the newly generated config is diffed against) reflects the state of the resources as of the last helm upgrade (or install), not the live state you'd get from the Kubernetes API. The workaround is to change your Helm template to reflect the manually changed value (the updated replica count, etc.), but I believe this bug affects more than just replica counts (it may be related to #1873).
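For comparison, the manifest recorded with the release (which is presumably where that "current" side of the diff comes from) still shows the original replica count rather than the live one. Assuming the default chart generated by helm create, where spec.replicas is rendered from replicaCount: 1, a quick check looks something like:

$ helm get manifest test-app | grep replicas
  replicas: 1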

If I add a few additional log statements to createPatch() to print the marshaled data, I can see in the Tiller logs that the "existing" object being compared against the generated object does not include the manual kubectl change; both sides reflect the current state of the template. Note that replicas in "Old data" is 1, not 5, and 5 is what I'd expect it to be so that a patch is actually produced to bring it back to 1:

[kube] 2018/09/17 18:48:40 Old data: {"apiVersion":"apps/v1beta2","kind":"Deployment","metadata":{"labels":{"app":"test-app","chart":"test-app-0.1.0","heritage":"Tiller","release":"test-app"},"name":"test-app","namespace":"default"},"spec":{"replicas":1,"selector":{"matchLabels":{"app":"test-app","release":"test-app"}},"template":{"metadata":{"labels":{"app":"test-app","release":"test-app"}},"spec":{"containers":[{"image":"nginx:stable","imagePullPolicy":"IfNotPresent","livenessProbe":{"httpGet":{"path":"/","port":"http"}},"name":"test-app","ports":[{"containerPort":80,"name":"http","protocol":"TCP"}],"readinessProbe":{"httpGet":{"path":"/","port":"http"}},"resources":{}}]}}}}
[kube] 2018/09/17 18:48:40 New data: {"apiVersion":"apps/v1beta2","kind":"Deployment","metadata":{"labels":{"app":"test-app","chart":"test-app-0.1.0","heritage":"Tiller","release":"test-app"},"name":"test-app","namespace":"default"},"spec":{"replicas":1,"selector":{"matchLabels":{"app":"test-app","release":"test-app"}},"template":{"metadata":{"labels":{"app":"test-app","release":"test-app"}},"spec":{"containers":[{"image":"nginx:stable","imagePullPolicy":"IfNotPresent","livenessProbe":{"httpGet":{"path":"/","port":"http"}},"name":"test-app","ports":[{"containerPort":80,"name":"http","protocol":"TCP"}],"readinessProbe":{"httpGet":{"path":"/","port":"http"}},"resources":{}}]}}}}
-func createPatch(target *resource.Info, current runtime.Object) ([]byte, types.PatchType, error) {
+func createPatch(target *resource.Info, current runtime.Object, c *Client) ([]byte, types.PatchType, error) {
        oldData, err := json.Marshal(current)
+       c.Log("Old data: %s", oldData)
        if err != nil {
                return nil, types.StrategicMergePatchType, fmt.Errorf("serializing current configuration: %s", err)
        }
        newData, err := json.Marshal(target.Object)
+       c.Log("New data: %s", newData)
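To illustrate why that comparison produces an empty patch, here is a rough, standalone sketch (not Helm's actual code path; it just calls the upstream strategicpatch helpers, and exact package paths/signatures may differ across versions). A two-way merge between the last-recorded manifest and the newly rendered manifest sees no change, while a three-way merge that also takes the live object into account would yield a patch setting replicas back to 1:

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/util/strategicpatch"
)

func main() {
	// Trimmed-down manifests for illustration only.
	recorded := []byte(`{"spec":{"replicas":1}}`) // manifest stored with the last release
	rendered := []byte(`{"spec":{"replicas":1}}`) // manifest rendered for this upgrade
	live := []byte(`{"spec":{"replicas":5}}`)     // live object after kubectl scale

	// Two-way merge of recorded vs. rendered (roughly what the diff above is doing):
	// the two documents are identical, so the patch is empty and the live
	// replica count is never touched.
	twoWay, err := strategicpatch.CreateTwoWayMergePatch(recorded, rendered, appsv1.Deployment{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("two-way patch:   %s\n", twoWay) // {}

	// Three-way merge that also feeds in the live object: the patch now
	// contains "replicas":1, which would scale the deployment back down.
	threeWay, err := strategicpatch.CreateThreeWayMergePatch(recorded, rendered, live, appsv1.Deployment{}, true)
	if err != nil {
		panic(err)
	}
	fmt.Printf("three-way patch: %s\n", threeWay) // {"spec":{"replicas":1}}
}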

Output of helm version:

$ helm version
Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.10.0", GitCommit:"9ad53aac42165a5fadc6c87be0dea6b115f93090", GitTreeState:"clean"}

Output of kubectl version:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-19T15:02:56Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"10+", GitVersion:"v1.10.6-gke.2", GitCommit:"384b4eaa132ca9a295fcb3e5dfc74062b257e7df", GitTreeState:"clean", BuildDate:"2018-08-15T00:10:14Z", GoVersion:"go1.9.3b4", Compiler:"gc", Platform:"linux/amd64"}

Cloud Provider/Platform (AKS, GKE, Minikube etc.): GKE, but I've replicated this with Docker's built-in Kubernetes support on macOS.

@bacongobbler (Member)

dupe of #1873


Renz2018 commented Jul 17, 2020

@bacongobbler It's different!

This issue:

  1. helm install the chart
  2. kubectl edit a deployment (a resource managed by the chart)
  3. helm upgrade does not reset the value changed in step 2 back to the chart's value

Issue #1873:

  1. helm install the chart
  2. edit the chart files
  3. helm upgrade does not apply the change

@shaharr-ma

Any update on this one?
