modified subpath configmap mount fails when container restarts #68211
minimal example:
Deploy the pod, edit the configmap, and wait for the 30-second sleep to complete and the container to restart; the restart will fail with the error above. |
I have the same configuration and reproduced this bug |
/sig storage |
Using findmnt on the node running test-pod I can see
Updated configmap is actually placed at |
The issue is reproducible on k8s 1.9.10, docker 17.12.1-ce. I have found another behaviour when using subPath: in /var/lib/kubelet/pods/( pod-string )/, there are two places where the configmap file lives:
So whether you edit the configmap object OR the file within the pod and then restart the pod, the two files will differ. In my repro I'm editing the file within the container and then killing it. This causes the pod to get stuck in CrashLoopBackOff. While this happens I'm tracking the following for the first occurrence. overlay2 filesystem:
This is the attempt to move the 'trackmefile' (the configmap's file with data) to the new container, but here it's empty. It never gets any content, and the directory is deleted after about 15 seconds, when the pod crashloops again. docker daemon logs:
It tries to mount the file. kubelet daemon logs:
The same message is printed for kuberuntime_manager.go:734, pod_workers.go:186, and kuberuntime_manager.go:514. kubelet/pods/( podid )/ filesystem:
The same series of events repeat every time the pod is crashlooping. I can also repro this on minikube v1.11.3 with cri-o and containerd alike. |
this seems related to: |
We hit this error as well when using a secret subPath. We use kubernetes v1.11.0 and docker 18.03. |
I saw this issue with kubernetes v1.12.4 and docker 18.09.0. (I opened #72699 which I will now close as a dupe) |
Also have this issue with v1.12.2; I believe it didn't occur before 1.9. Very annoying! |
I have the same issue with docker 1.11.5 using AKS.

apiVersion: v1
kind: ConfigMap
metadata:
  name: test-configmap
  namespace: test-cfg
data:
  common1.json: |
    {
      "logger": {
        "enabled": true,
        "level": "info",
        "extreme": false,
        "stream": "stdout"
      }
    }
  common2.json: |
    {
      "test": 4
    }
---
apiVersion: v1
kind: Pod
metadata:
  name: issue
  namespace: test-cfg
spec:
  volumes:
    - name: conf-volume
      configMap:
        name: test-configmap
  containers:
    - name: test
      image: ubuntu:bionic
      command: ["sleep", "30"]
      resources:
        requests:
          cpu: 100m
      volumeMounts:
        - name: conf-volume
          mountPath: /etc/common1.json
          subPath: common1.json
        - name: conf-volume
          mountPath: /etc/common2.json
          subPath: common2.json
---
apiVersion: v1
kind: Pod
metadata:
  name: working
  namespace: test-cfg
spec:
  volumes:
    - name: conf-volume
      projected:
        sources:
          - configMap:
              name: test-configmap
              items:
                - key: common1.json
                  path: common1.json
                - key: common2.json
                  path: common2.json
  containers:
    - name: test-container
      image: ubuntu:bionic
      imagePullPolicy: "Always"
      command: ["sleep", "30"]
      resources:
        requests:
          cpu: 100m
      volumeMounts:
        - name: conf-volume
          mountPath: /test

With this configuration, the "issue" pod fails to restart, while the "working" pod (using a projected volume) restarts fine. |
Docker fixed this bug in 18.06.03-ce, but I still saw this issue with kubernetes v1.13.2 and docker 18.06.03-ce. |
When kuryr-config ConfigMap gets edited and kuryr-daemon pod gets restarted we seem to suffer from bug [1]. As it's still open, this commit works it around by removing subPath option used when mounting kuryr.conf. Instead projected volumes are used for both kuryr-controller and kuryr-cni pods. [1] kubernetes/kubernetes#68211
I get the same issue in openshift 1.13, kubernetes 1.11.0: Jul 01 18:45:48 paas-node010002.99bill.com origin-node[54267]: E0701 18:45:48.898167 54267 remote_runtime.go:209] StartContainer "c442998c576878395b30d2c831c3d42fbdbf75171b21a22ccde0e1141fdd3671" from runtime service failed: rpc error: code = Unknown desc = failed to start container "c442998c576878395b30d2c831c3d42fbdbf75171b21a22ccde0e1141fdd3671": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "process_linux.go:364: container init caused "rootfs_linux.go:54: mounting \"/var/lib/origin/openshift.local.volumes/pods/41a4791b-5470-11e9-b356-005056b6696f/volume-subpaths/start/smsfinal-gzdlinfo-gateway/2\" to rootfs \"/var/lib/docker/overlay2/0f3564390916e039400e747e369c869db52cf32f4acb1aeb27e4f8ff961ab8a3/merged\" at \"/var/lib/docker/overlay2/0f3564390916e039400e747e369c869db52cf32f4acb1aeb27e4f8ff961ab8a3/merged/opt/oracle/start.sh\" caused \"no such file or directory\""" |
From time to time in the gate we suffer from Kubernetes/Docker bug [1]. As it seems to still be open, we can work it around by removing usage of subPath property of volumeMounts attached to Kuryr pods and this commit does so. Besides that it removes possibility of providing different kuryr.conf for kuryr-controller and kuryr-daemon as this shouldn't be required as we don't support running without kuryr-daemon anymore. [1] kubernetes/kubernetes#68211 Closes-Bug: 1833228 Change-Id: I2465bc45324482cc4ab32a1367ab08f34ce28b1c
Because of a kubernetes bug [0] when a container which is mounted with the subpath option, the configmap is changed and then the container restarts the mounting of the configmap fails. This PS uses the projected key for volume definitions as a workaround. [0] kubernetes/kubernetes#68211 Change-Id: I6820a0f963c5b28e1674ea58214ffc86009db4dd
Issue seen on k8s 1.14.3, docker 18.06.3-ce, CoreOS Container Linux 2135.5.0, with a subPath from a secret in the same directory where another secret also has a subPath. |
Just had this in production. Does anyone know how I can fix this? -> Deleted the container as advised above: kubectl delete pod |
To work around the problem, ensure that your deployments recreate the pods when your configmaps change, for example by adding a checksum of the configmaps to the annotations of the deployment/statefulset pod template. |
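A common way to implement that checksum annotation is via Helm templating; this is a sketch assuming a Helm chart with a `configmap.yaml` template, and all names here are illustrative, not from the thread:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
      annotations:
        # Any edit to the ConfigMap changes this hash, which changes the
        # pod template and makes the Deployment roll out fresh pods
        # instead of restarting containers in place.
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
    spec:
      containers:
        - name: app
          image: ubuntu:bionic
```

Rolling out new pods sidesteps the bug entirely, since the stale subPath bind mount only affects in-place container restarts.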
We have exactly the same problem, even without modifying the configmap. We removed the subPath and mounted the configmap to a separate directory, and the problem is gone. |
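For reference, the no-subPath variant of that workaround looks roughly like this (mount path and names are illustrative); mounting the whole volume into its own directory avoids the per-file bind mount entirely:

```yaml
spec:
  volumes:
    - name: conf-volume
      configMap:
        name: test-configmap
  containers:
    - name: test
      image: ubuntu:bionic
      volumeMounts:
        - name: conf-volume
          # Mount the whole volume into a dedicated directory instead of
          # bind-mounting single files via subPath; files appear as
          # /etc/app/common1.json and /etc/app/common2.json.
          mountPath: /etc/app
```

The trade-off is that the mount directory contains only the ConfigMap's keys, so this works best when the application can read its config from a dedicated directory.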
We are also experiencing this issue (GKE with k8s 1.15 & 1.16).
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.16/#configmapvolumesource-v1-core |
Can reproduce the issue with the example from above in three different environments.
Steps to reproduce:
using |
any progress? |
Happened for me again today with k8s 1.17 |
nagstaku: Please read the reason the issue was closed. It was fixed by #89629 and is part of k8s v1.19.0-rc.0 or newer. Until then, use the #68211 (comment) workaround. |
has anyone confirmed whether the fix works when using containerd directly on current kubernetes? It doesn't seem to work on 1.18.1. |
I think you should check the latest release version.
|
We use 1.16 and also hit this problem. |
We had this issue in AWS EKS today Server Version: version.Info{Major:"1", Minor:"17+", GitVersion:"v1.17.9-eks-4c6976", GitCommit:"4c6976793196d70bc5cd29d56ce5440c9473648e", GitTreeState:"clean", BuildDate:"2020-07-17T18:46:04Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"} |
Any reason this was closed? Looks like it's still an issue. |
It was closed because #89629 was submitted against the development branch. But you're still going to see the problem on older versions unless a fix is backported. And as I mentioned in #93691 I think the backport to 1.18.1 was buggy, which makes me wonder about other versions. |
We still have crashlooping pods when the container exits (e.g. when the process is killed manually or by an exception). Fabian Ruff found a bug that should explain the behavior: kubernetes/kubernetes#68211 (comment) That bug report also contains a workaround, which is implemented here: use a projected configMap instead of mounting directly from the original ones. This way, an updated configMap will not be seen by the restarting container and thus will not lead to the crashloop we've seen. CCM-9905
Is there a better solution in versions before v1.19? |
Ref: kubernetes/kubernetes#22368 kubernetes/kubernetes#68211 Signed-off-by: Luong Vo <vo.tran.thanh.luong@gmail.com>
/kind bug
What happened:
When a container uses a configmap that is mounted with the
subPath
option, the configmap is changed, and then the container (but not the pod) restarts, the mounting of the configmap fails. (The pod this happens on consists of multiple containers; I have not yet tested whether it also happens in a single-container pod.)
One has to delete the pod to fix the problem.
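Why this happens (my understanding of the mechanics, not stated in the thread): the kubelet publishes ConfigMap volume contents in timestamped directories behind a `..data` symlink that is swapped atomically on update, while a subPath mount bind-mounts the concrete file the symlink resolved to at container start. After an update, the bind-mount source path belongs to a removed timestamped directory, so the restarted container's mount fails with "no such file or directory". A Python sketch of the symlink-swap side (directory and file names are illustrative):

```python
import os
import tempfile

# Simulate the kubelet's atomic update of a configmap volume:
# data lives in versioned dirs, and ..data is a symlink swapped atomically.
root = tempfile.mkdtemp()

def publish(version, content):
    d = os.path.join(root, version)  # stands in for a timestamped dir
    os.mkdir(d)
    with open(os.path.join(d, "common1.json"), "w") as f:
        f.write(content)
    tmp = os.path.join(root, "..data_tmp")
    os.symlink(version, tmp)                      # relative symlink to new dir
    os.rename(tmp, os.path.join(root, "..data"))  # atomic swap over old link

publish("v1", "old")
# A subPath bind mount resolves ..data/common1.json ONCE, to the v1 file.
resolved = os.path.realpath(os.path.join(root, "..data", "common1.json"))

publish("v2", "new")
# Readers going through the symlink see the new content...
with open(os.path.join(root, "..data", "common1.json")) as f:
    assert f.read() == "new"
# ...but the path resolved earlier still names the old file. The kubelet
# additionally deletes the old dir, which is what makes the restarted
# container's bind mount fail.
assert resolved.endswith(os.path.join("v1", "common1.json"))
```

This also explains why projected/directory mounts are unaffected: they go through the `..data` symlink on every access instead of pinning one resolved file.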
Environment:
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:08:19Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
coreos 1800.7.0
docker 18.03.1