drain failed #251
Closed
Comments
We have created an issue in Pivotal Tracker to manage this: https://www.pivotaltracker.com/story/show/160559868 The labels on this GitHub issue will be updated when the story is started.
Hi @obeyler, I created a small PR for your problem. It will run through the pipeline and then we will merge it. In the meantime, you can build kubo-release from that branch and try it in your environment.
It seems to work. Do you plan to integrate this PR in the next release?
The PR has been merged into our
The fix is included in 0.23. |
What happened:
Drain of some workers failed during a BOSH update deployment.
https://github.com/cloudfoundry-incubator/kubo-release/blob/master/jobs/kubelet/templates/bin/drain.erb#26
During the drain, the following command is launched:
kubectl drain 'vm-06ea0261-c8f5-410b-99d2-cf93a2d6fd76 vm-a0708eae-26fb-4bc9-8adb-dae5012563c1' --grace-period 10 --force --delete-local-data --ignore-daemonsets
with this result:
Error from server (NotFound): nodes "vm-06ea0261-c8f5-410b-99d2-cf93a2d6fd76 vm-a0708eae-26fb-4bc9-8adb-dae5012563c1" not found
because some nodes share the same IP (see 192.168.245.204); I don't know why this occurred.
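For context, a minimal sketch of the likely failure mode (the IP-based lookup below is an assumption for illustration, not the exact drain.erb code, and the INTERNAL-IP column position may differ by kubectl version):

# Hypothetical lookup: resolve this VM's IP to its Kubernetes node name.
# If two registered nodes report the same IP, awk prints two names.
node_name=$(kubectl get nodes -o wide | awk -v ip="192.168.245.204" '$6 == ip {print $1}')

# Quoting collapses both names into a single argument, so kubectl looks for
# one node literally named "vm-... vm-..." and fails with NotFound.
kubectl drain "$node_name" --grace-period 10 --force --delete-local-data --ignore-daemonsets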
What you expected to happen:
Maybe the drain script should detect that two nodes share the same IP and run a separate drain command for each node; see the sketch below.
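One possible shape of that fix, assuming node_name from the lookup above may hold several whitespace-separated names (illustrative only, not the merged PR):

# Iterate over each resolved node name and drain it individually, so a
# NotFound on one name cannot abort the drain of the other node.
for node in $node_name; do   # intentionally unquoted: split on whitespace
  kubectl drain "$node" --grace-period 10 --force --delete-local-data --ignore-daemonsets
done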
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
We have a lot of pods stuck in Terminating with errors (~80 in this environment); maybe this is a source of trouble for the nodes/IPs.
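A quick way to count such pods (standard kubectl, offered only as a diagnostic suggestion):

# Pods stuck in Terminating show it in the STATUS column; count them.
kubectl get pods --all-namespaces | grep -c Terminating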
Environment:
- Deployment (bosh -d <deployment> deployment): cfcr-deployment-21
- Environment (bosh -e <environment> environment):
- Kubernetes version (kubectl version):
- Cloud provider (e.g. aws, gcp, vsphere): openstack / FE Orange