What happened:
While testing rollback of a policy that changes the primary NIC, the policy reached a Success state even though the enactment was still Progressing and eventually ended in Failure (a failure was expected, since a rollback occurs). The issue is in the node counting used for policy conditions: only Ready nodes are counted, and touching the primary NIC can leave a node temporarily NotReady, so the comparison between the number of nodes and the number of non-matching enactments passed spuriously.
To fix this we have to add another probe after apply and after rollback that checks the Node is in the Ready state, so we block there until the node is healthy again.
What you expected to happen:
The policy should be in a Failure state after a rollback from a bad primary NIC change.
How to reproduce it (as minimally and precisely as possible):
Apply a bad primary NIC policy in a multi-NIC environment; it has to be exercised multiple times until the race appears.
Anything else we need to know?:
Environment:
- `NodeNetworkState` on affected nodes (use `kubectl get nodenetworkstate <node_name> -o yaml`):
- Problematic `NodeNetworkConfigurationPolicy`:
- kubernetes-nmstate image (use `kubectl get pods --all-namespaces -l app=kubernetes-nmstate -o jsonpath='{.items[0].spec.containers[0].image}'`):
- NetworkManager version (use `nmcli --version`)
- Kubernetes version (use `kubectl version`):