CreateContainerError running lstat on namespace path #1927
Comments
Thanks for the report. Were you able to find a reproducer, or gather more data about when it fails and what leads to this path? |
Nope, we have since switched to containerd due to this and other issues I've reported. You can close this if there's not enough information to investigate. |
I did just reproduce this in an environment that was still on CRI-O. The pod in question was OOMKilled and then would not restart, failing with CreateContainerError. So most likely the mongo pod above was OOMKilled as well. Another thing I noticed is that
If you attempt to
|
Thanks for the info @steven-sheehy. I have an idea for how to try to reproduce it and see what's happening. Does the kubelet show the older NotReady pods? You should be able to remove them with crictl: if a pod sandbox is still Ready, first stopp it and then follow up with rmp (sketched below). As for the stopping-pod error, it indicates that the pod process has already exited, so it must be for one of the NotReady pods. |
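A minimal sketch of that cleanup, assuming crictl is configured to talk to CRI-O; the sandbox ID is a placeholder:

```sh
# List all pod sandboxes, including NotReady ones
crictl pods

# For a sandbox that is still Ready, stop it first, then remove it
crictl stopp <sandbox-id>
crictl rmp <sandbox-id>
```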
The NotReady pod above, b26d8f861f68b, shows up repeatedly in the kubelet logs. The kubelet log is filled with these types of errors for different pods:
|
Similar behaviour here:
Output of crio --version:
Additional environment details (AWS, VirtualBox, physical, etc.):
Logs:
|
Hi, reporting the same as @steven-sheehy here with CRI-O 1.13.0 and kubelet 1.13.1 (without systemd).
|
@mcluseau Thanks! We are looking into a fix for this. |
I've observed some races in the interaction between CRI-O and the Kubelet. I think 6703d85 might solve some of them. Another potential fix is kubernetes/kubernetes#72105 |
@steven-sheehy @giuseppe @mrunalp Is this still an issue? |
The errors I could reproduce are fixed upstream, except for a Kubernetes patch that is still being discussed. @steven-sheehy have you had a chance to try again with an updated CRI-O? |
Sorry, I don't have the ability to test it. I can close it and we can reopen if needed. |
Reopening as the upstream PR is not yet merged - kubernetes/kubernetes#72105 |
I have the same problem.
docker log:
kubelet log:
|
We are seeing the same problem now with cri-o 1.16.0 |
Indeed, I'm running into this with cri-o 1.16 too :( |
FWIW, this started occurring after a DaemonSet that had no limits had limits added (but really low ones). We bumped up the limits for it and it started working (see the sketch below). Perhaps these issues show up when a container's resource limits are very low? |
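For illustration, the bump could look like this; the namespace, DaemonSet name, and limit values are hypothetical:

```sh
# Raise the container limits on the DaemonSet (all names and values are placeholders)
kubectl -n my-namespace set resources daemonset my-daemonset \
  --limits=cpu=200m,memory=256Mi
```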
I think it is still an issue in the kubelet. I have used a static pod to reproduce it: kubernetes/kubernetes#72105 (comment). Can you easily reproduce the issue if you try something like what I've done in the comment above? A sketch of that kind of reproducer follows below.
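A minimal sketch of such a static-pod reproducer, assuming the kubelet watches /etc/kubernetes/manifests; the pod name, image, and the deliberately tiny memory limit are illustrative:

```sh
# Write a static pod whose container grows past a tiny memory limit and gets OOMKilled,
# then watch whether it restarts cleanly or gets stuck in CreateContainerError
cat <<'EOF' > /etc/kubernetes/manifests/oom-repro.yaml
apiVersion: v1
kind: Pod
metadata:
  name: oom-repro
spec:
  containers:
  - name: eat-memory
    image: busybox
    # tail buffers /dev/zero indefinitely, so memory climbs until the OOM killer fires
    command: ["sh", "-c", "tail /dev/zero"]
    resources:
      limits:
        memory: "16Mi"
EOF
```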
A friendly reminder that this issue had no activity for 30 days. |
Closing this issue since it had no activity in the past 90 days. |
Description
I have a MongoDB StatefulSet that runs fine for a while, then for unknown reasons the pod restarts. When it attempts to start back up, this error occurs. It happens only rarely; most of the time the pod starts up fine. After restarting crio.service, the error goes away and the container is created successfully.
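For reference, the workaround looks like this, assuming CRI-O runs as the systemd unit crio:

```sh
# The pod is stuck in CreateContainerError and its restart count is not increasing
kubectl get pods

# Restarting CRI-O clears the error; the container is then created successfully
sudo systemctl restart crio
```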
Steps to reproduce the issue:
Not sure
Describe the results you received:
Pod is never re-created and shows CreateContainerError in the output of kubectl get pods. The restart count is not increasing; the error just persists permanently.
Describe the results you expected:
Container to be created
Additional information you deem important (e.g. issue happens only occasionally):
Output of crio --version: v1.11.10
Additional environment details (AWS, VirtualBox, physical, etc.):
Ubuntu 18.04
Kubernetes v1.11.4
VMware VM