
kubectl replaced configmap built from directory does not show new content in a container #25418

Closed
sbezverk opened this issue May 10, 2016 · 8 comments
Labels
area/app-lifecycle kind/bug Categorizes issue or PR as related to a bug. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. sig/node Categorizes an issue or PR as relevant to SIG Node.

Comments

@sbezverk
Contributor

Running Kubernetes 1.2.4. I build a ConfigMap from a directory where I store the files required by the application, using this command:
kubectl --server=http://10.57.120.10:8080 create configmap keystone-config --from-file=config/

In some cases I need to add a file to the set of config files, so I get the YAML version of the ConfigMap, change it manually, and then use kubectl replace to update it:
kubectl --server=http://10.57.120.10:8080 get configmap keystone-config -o yaml > keystone-config.yaml
edit keystone-config.yaml following the same file format
kubectl --server=http://10.57.120.10:8080 replace -f keystone-config.yaml
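
An alternative that avoids the manual edit (a sketch, assuming the kubectl version in use supports --dry-run with -o yaml on create) is to regenerate the ConfigMap from the directory and pipe it straight into replace:

kubectl --server=http://10.57.120.10:8080 create configmap keystone-config --from-file=config/ --dry-run -o yaml | kubectl --server=http://10.57.120.10:8080 replace -f -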

When I check the container I do not see any changes at the point where this ConfigMap is mounted, but if I restart the container then I do see the new file which I added manually.

Please let me know whether this is a bug or whether this way of changing a ConfigMap is simply not supported.

Thank you

Serguei

@fabioy fabioy added kind/bug Categorizes issue or PR as related to a bug. area/app-lifecycle sig/node Categorizes an issue or PR as relevant to SIG Node. labels May 11, 2016
@vefimova
Contributor

vefimova commented Jul 5, 2016

Hi, I'm working on this one

@vefimova
Contributor

vefimova commented Jul 8, 2016

Hi @sbezverk, I didn't manage to reproduce the bug on 1.2.4 following the steps you described. Could you please provide more details for reproduction, or confirm whether it was confusion caused by the delayed update?

So what I did:

ConfigMap formed from dir:

# ls -la ../config-dir/
total 16
drwxr-xr-x 2 root root 4096 Jul  8 13:28 .
drwxr-xr-x 4 root root 4096 Jul  8 14:45 ..
-rw-r--r-- 1 root root   13 Jul  8 13:28 file1
-rw-r--r-- 1 root root   13 Jul  8 13:28 file2
# cat ../config-dir/file1

Hi i'm file1

# cat ../config-dir/file2

Hi i'm file2

# kubectl create configmap test-config --from-file=../config-dir/

configmap "test-config" created

# kubectl get configmap/test-config -o yaml > ../test-config.yaml
# cat test-config.yaml
apiVersion: v1
data:
  file1: |
    Hi i'm file1
  file2: |
    Hi i'm file2
kind: ConfigMap
metadata:
  creationTimestamp: 2016-07-08T14:42:20Z
  name: test-config
  namespace: default
  resourceVersion: "1954037"
  selfLink: /api/v1/namespaces/default/configmaps/test-config
  uid: 2ce38933-451a-11e6-8ed8-5254000d0846

# cat podConfigmap.yaml
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: gcr.io/google_containers/busybox
      command: [ "/bin/sh", -c, 'while :; do cat /etc/config/file1; sleep 1; done']
      volumeMounts:
      - name: config-volume
        mountPath: /etc/config
  volumes:
    - name: config-volume
      configMap:
        name: test-config
  restartPolicy: Never
# kubectl create -f podConfigmap.yaml

pod "dapi-test-pod" created

Ok, now let's change the ConfigMap and replace it:

# cat test-config.yaml
apiVersion: v1
data:
  file3: |
    Hi i'm file3
    ;))))!!!!
  file1: |
    Hi i'm file1 !!!
  file2: |
    Hi i'm file2 !!!
kind: ConfigMap
metadata:
  creationTimestamp: 2016-07-08T14:42:20Z
  name: test-config
  namespace: default
  resourceVersion: "1954037"
  selfLink: /api/v1/namespaces/default/configmaps/test-config
  uid: 2ce38933-451a-11e6-8ed8-5254000d0846
# kubectl replace -f test-config.yaml

configmap "test-config" replaced

After the replacement it takes some time for the kubelet to update the files at the mount path (~1 min):

# kubectl exec dapi-test-pod -- ls -la /etc/config/
total 4
drwxrwxrwt    3 root     root           140 Jul  8 14:45 .
drwxr-xr-x    7 root     root          4096 Jul  8 14:43 ..
drwxr-xr-x    2 root     root           100 Jul  8 14:45 ..7987_08_07_09_45_56.099037854
lrwxrwxrwx    1 root     root            31 Jul  8 14:45 ..data -> ..7987_08_07_09_45_56.099037854
lrwxrwxrwx    1 root     root            12 Jul  8 14:43 file1 -> ..data/file1
lrwxrwxrwx    1 root     root            12 Jul  8 14:43 file2 -> ..data/file2
lrwxrwxrwx    1 root     root            12 Jul  8 14:45 file3 -> ..data/file3
# kubectl exec dapi-test-pod -- cat /etc/config/file1

Hi i'm file1 !!!

# kubectl exec dapi-test-pod -- cat /etc/config/file3

Hi i'm file3
;))))!!!!
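
The update becomes visible once the kubelet swaps the ..data symlink to a new timestamped directory, so checking its target before and after the replace shows when the new content has landed (a sketch using readlink from the busybox image):

kubectl exec dapi-test-pod -- readlink /etc/config/..data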

@sbezverk
Contributor Author

sbezverk commented Jul 8, 2016

@vefimova I will redo this test and let you know the result; hopefully it was just a mistake on my side. Thank you.

@yadzhang

Can Kubernetes provide an option to choose whether or not the configuration inside a container is updated while it is running? If we update a ConfigMap but do not want to propagate the new configuration to all containers, Kubernetes could offer such an option when the app is created.
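
In later Kubernetes releases (newer than the 1.2.x discussed above), one way to get a file that does not change when the ConfigMap is updated is to mount the key as a single file via subPath, since subPath mounts are not refreshed by the kubelet. A minimal sketch, with a hypothetical pod name:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-test-pod
spec:
  containers:
    - name: test-container
      image: gcr.io/google_containers/busybox
      command: ["/bin/sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: config-volume
          # mounted as a single file; subPath mounts do not receive ConfigMap updates
          mountPath: /etc/config/file1
          subPath: file1
  volumes:
    - name: config-volume
      configMap:
        name: test-config
  restartPolicy: Never
EOF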

@tobilarscheid

Hi,

I can't reproduce the bug either. I did, however, come here by googling the issue, simply because updating the file in the container takes a few seconds after the ConfigMap has been updated. So just leaving this here in case someone else ends up here as well!
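
For anyone tuning that delay: the propagation interval is governed by the kubelet's sync period (default 1m; newer releases also cache ConfigMaps for a while). If you control the node configuration it can be shortened, for example (a sketch, exact flags and defaults vary by release):

kubelet --sync-frequency=10s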

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 24, 2017
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 23, 2018
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/close
