dnsmasq failed to create inotify #2709
Comments
Just for information: in order to finish the upgrade, we had to kill the failing pod each time after the next node was recreated, since kops kept reporting it. Once the failing pod was deleted, the rolling update could be started again.
Does updating this number fix the issue? We support a bunch of different operating systems and I am uncertain which OSes this impacts. If we can figure out a solution, please open a PR ;)
Closing in favor of #2912
kops version: 1.6
kubernetes version: 1.6.1
Networking: canal
Cloud: AWS
Node age: 26d
Here are the logs we saw:
We upgraded the cluster, and the error is gone.
This looks related to kubernetes/kubernetes#32526.
Kops indeed has this setting on the node:
Would it be beneficial to update this number? We can open a PR if necessary.
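For context, the "failed to create inotify" error from dnsmasq is typically hit when the kernel's per-user inotify limits are exhausted. A minimal sketch of how one might inspect (and, as root, raise) those limits — the specific values shown are illustrative assumptions, not what kops actually configures:

```shell
# Read the current per-user inotify limits (standard Linux /proc paths):
cat /proc/sys/fs/inotify/max_user_instances
cat /proc/sys/fs/inotify/max_user_watches

# To raise them persistently, one could add lines like these to
# /etc/sysctl.conf (values are example assumptions, tune for your nodes):
#   fs.inotify.max_user_instances = 512
#   fs.inotify.max_user_watches   = 524288
# and apply with: sysctl -p
```

Raising `max_user_instances` is usually the relevant knob for this error, since each inotify *instance* (not each watch) counts against it.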
As a side note, since the beginning of the cluster I can't tail pod logs; I get this error message:
I'm not sure it's related, but I thought it was worth mentioning.
As a second side note, there are 4.33M free inodes.
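For anyone reproducing this check: free inodes (as opposed to inotify limits) can be read per filesystem with `df`. A quick sketch:

```shell
# The IFree column reports free inodes per mounted filesystem;
# -i switches df from block usage to inode usage.
df -i
```

Note that filesystem inodes and inotify instances are unrelated resources, so plenty of free inodes (as reported above) does not rule out inotify exhaustion.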