Help needed, A how-to around istio "0.5.1" + cilium "v1.0.0-rc5" #3010
@brant4test You mentioned that the cilium pods are restarting. Can you attach the full logs and also
Investigating similar issues:
etcd does not support watching on a compacted revision and will error out. Fortunately etcd tells us the minimum compact revision that we can watch, therefore, recreate the watcher with the provided minimum revision. Fixes: #3010 Signed-off-by: Thomas Graf <thomas@cilium.io>
Reported/asked upstream etcd-io/etcd#9386
etcd does not support watching on a compacted revision and will error out. Do a fresh get on the latest revision and restart the watcher. In order to continue maintaining the proper order of events, a local cache is introduced. The ListDone signal is only emitted once at the beginning. On ReList, deletion events are sent for keys which can no longer be found. Fixes: #3010 Signed-off-by: Thomas Graf <thomas@cilium.io>
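The ReList behavior the commit message describes (emit deletion events for keys that disappeared while the watch was down) can be sketched in Go. This is a minimal illustration, not Cilium's actual implementation; the `Event` type and `relistDiff` function names are invented here.

```go
package main

import (
	"fmt"
	"sort"
)

// Event is a simplified watch event; the type and field names are
// illustrative, not Cilium's actual types.
type Event struct {
	Type  string // "MODIFY" or "DELETE"
	Key   string
	Value string
}

// relistDiff compares the local cache (the state last seen by the
// watcher) against a fresh listing taken after a compaction error.
// Keys missing from the fresh listing yield DELETE events; new or
// changed keys yield MODIFY events. Keys are sorted so the output
// order is deterministic.
func relistDiff(cache, fresh map[string]string) []Event {
	var events []Event

	oldKeys := make([]string, 0, len(cache))
	for k := range cache {
		oldKeys = append(oldKeys, k)
	}
	sort.Strings(oldKeys)
	for _, k := range oldKeys {
		if _, ok := fresh[k]; !ok {
			events = append(events, Event{Type: "DELETE", Key: k})
		}
	}

	newKeys := make([]string, 0, len(fresh))
	for k := range fresh {
		newKeys = append(newKeys, k)
	}
	sort.Strings(newKeys)
	for _, k := range newKeys {
		if old, ok := cache[k]; !ok || old != fresh[k] {
			events = append(events, Event{Type: "MODIFY", Key: k, Value: fresh[k]})
		}
	}
	return events
}

func main() {
	cache := map[string]string{"a": "1", "b": "2"}
	fresh := map[string]string{"a": "1", "c": "3"}
	for _, e := range relistDiff(cache, fresh) {
		fmt.Println(e.Type, e.Key) // DELETE b, then MODIFY c
	}
}
```

The real fix additionally has to restart the etcd watch from the latest revision; the diff above only covers how ordering and deletions are preserved across the gap.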
@tgraf Great! So the "cilium v1.0.0-rc5 pods restart with istio 0.5.1" issue has been solved? :)
@brant4test You should be able to run the current Cilium versions in combination with any Istio version (without mTLS) without modifying the Istio proxy images.
@brant4test The etcd errors you have seen should also be resolved correctly if you are using the image
kubectl -n kube-system describe pod cilium.txt
@brant4test I'm seeing the following in the text file you attached:
@tgraf It's based on a stack of kubespray “master 5aeaa24” + istio "0.5.1" + cilium "v1.0.0-rc6".
Any tips? Thanks!
@ianvernon Thanks for your reply. I've seen that before, but I have no clue what leads to DiskPressure. Any suggestions? Thanks!
@brant4test This is one of the NodeConditions described in the Kubernetes documentation.
@brant4test Can you ssh into the node in question and see if you have a large file named
@ianvernon Thanks a lot for your help. From your doc, I think this is why the daemonset pods keep restarting.
@tgraf It seems no cilium-envoy.log exists, and the good news is that all nodes and pods are stable now.
My one remaining concern is the two Warning events. Are they also issues, or can they be ignored? Thanks!
Great! We saw some instances with excessive log spamming that we addressed in the meantime. I wanted to make sure that you were not affected by this.
If Cilium is healthy then it is a false negative, but we should not ignore them. I noticed that the timeout is only 1 second. We will look into this and get back to you.
@tgraf Will Cilium
Help needed, A how-to around istio "0.5.1" + cilium "v1.0.0-rc5"
Goal: integration of Istio and Cilium with lower maintenance cost (for future upgrades).
I'm following this guide to create a stack of kubespray “master 5aeaa24” + istio "0.5.1" + cilium "v1.0.0-rc5".
https://github.com/kubernetes-incubator/kubespray/tree/master/contrib/terraform/aws
Stack creation was successful (ansible PLAY RECAP), but then the cilium pods seem to get re-created frequently (including pod name changes, cilium-*****, and transitioning to Pending), causing application services to flap a lot as well.
General Information
- Cilium version (run `cilium version`)
- Kernel version (run `uname -a`)
- Orchestration system version in use (e.g. `kubectl version`, Mesos, ...)
How to reproduce the issue
Feature Requests
A how-to around istio "0.5.1" + cilium "v1.0.0-rc5".
Thanks! :)