What happened:
When kubernetes-nmstate is deployed as part of a bigger product within its namespace (like HCO) and the kubernetes-nmstate daemonset needs to be filtered for (for example, an e2e test checking that knmstate is up and running), we have to filter by pod name string or pod templates. Adding an app=kubernetes-nmstate label to the daemonset makes this much more convenient.
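For reference, a minimal sketch of what the requested change could look like, with the label propagated to the daemonset and its pod template. The daemonset name and image here are assumptions for illustration, not taken from the issue:

```yaml
# Hypothetical sketch: add an app=kubernetes-nmstate label to the
# DaemonSet metadata and pod template so pods can be selected by label.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nmstate-handler          # assumed name, for illustration only
  labels:
    app: kubernetes-nmstate
spec:
  selector:
    matchLabels:
      app: kubernetes-nmstate
  template:
    metadata:
      labels:
        app: kubernetes-nmstate
    spec:
      containers:
        - name: nmstate-handler  # assumed container name/image
          image: quay.io/nmstate/kubernetes-nmstate-handler
```

With such a label in place, an e2e check could select the pods with `kubectl get pods --all-namespaces -l app=kubernetes-nmstate` instead of matching on pod name strings.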
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
- NodeNetworkState on affected nodes (use `kubectl get nodenetworkstate <node_name> -o yaml`):
- Problematic NodeNetworkConfigurationPolicy:
- kubernetes-nmstate image (use `kubectl get pods --all-namespaces -l app=kubernetes-nmstate -o jsonpath='{.items[0].spec.containers[0].image}'`):
- NetworkManager version (use `nmcli --version`):
- Kubernetes version (use `kubectl version`):