windows-exporter pods keep restarting until wins-upgrader is fully deployed on new 2.5.7 win node #31842
Labels: area/windows, kind/bug-qa, kind/enhancement, QA/XS
What kind of request is this (question/bug/enhancement/feature request): bug
Steps to reproduce (least amount of steps as possible):

1. Deploy Rancher from the `dev-v2.5` branch with the following chart overrides: `rancher-wins-upgrader:0.0.100-rc00` and `rancher-monitoring*:9.4.204-rc07`.
2. Install `rancher-monitoring` on the cluster into System Project, selecting `cluster.provider.rke` from the Cluster Type dropdown in the General tab and setting `windowsExporter.enabled=true`.
3. Wait until all pods in the `cattle-monitoring-system` namespace are ready.
4. Add a new Windows node (AMI `ami-008ec03d9035c8ee7`) and watch the `rancher-monitoring-windows-exporter` DS on the new node (see the CLI sketch below).
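For steps 3 and 4, the pod and DaemonSet state can also be checked from the CLI (a minimal sketch; the namespace and DS name are taken from this report):

```sh
# Watch pod readiness in the monitoring namespace
kubectl -n cattle-monitoring-system get pods -o wide --watch

# Check the Windows exporter DaemonSet status after adding the node
kubectl -n cattle-monitoring-system get ds rancher-monitoring-windows-exporter
```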
Result:

The `rancher-monitoring-windows-exporter*` pods keep restarting, probably until `wins-upgrader` is fully installed on the new node(s). I was adding 3 new nodes and each of them showed the same symptoms. After a while all exporter pods are running and everything works as expected.
Other details that may be helpful:

- `exporter-node` container from the `rancher-monitoring-windows-exporter` DS (new node)
- `noop` container from the `wins-upgrader-default` DS started at 12:25:57 (taken from the web UI, new node)
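The start times and restart counts can be compared from the CLI as well (a sketch; the `app` label selector is an assumption, the chart may label the exporter pods differently):

```sh
# Container start time and restart count for the exporter pods (label selector is a guess)
kubectl -n cattle-monitoring-system get pods \
  -l app=rancher-monitoring-windows-exporter \
  -o custom-columns='NAME:.metadata.name,RESTARTS:.status.containerStatuses[0].restartCount,STARTED:.status.containerStatuses[0].state.running.startedAt'
```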
It is most probably caused by a race condition between the `wins-upgrader` and `windows-exporter` deployments, at least according to the timestamps: in my case the exporter attempted to start before `wins-upgrader` was ready. I have no further evidence, only the timestamps, but if the default period for pod restarting is ~30s then it would make sense.
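Until the ordering is fixed, a manual workaround might be to gate the exporter on the upgrader rollout (an illustrative sketch only; I am assuming `wins-upgrader-default` lives in `cattle-system`):

```sh
# Wait for the wins-upgrader DS to finish rolling out, then bounce the exporter DS
kubectl -n cattle-system rollout status ds/wins-upgrader-default --timeout=300s && \
kubectl -n cattle-monitoring-system rollout restart ds/rancher-monitoring-windows-exporter
```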
Environment information

- Rancher version (`rancher/rancher`/`rancher/server` image tag or shown bottom left in the UI): 2.5.7

Cluster information
- Kubernetes version (use `kubectl version`): v1.20.4