Upstream merge 2020-02-26 #106
Conversation
when a pod is deleted, its logical_switch_port is deleted and so is the corresponding entry in logicalPortUUIDCache. in handleLocalPodSelectorAddFunc we then fail to get the UUID for the just-deleted pod's logical switch port and return early from the function without cleaning up the various caches. now, say the same pod is added back: all of these caches still hold stale data, and we fail to add the newly created logical_switch_port to the port_group. Signed-off-by: Girish Moodalbail <gmoodalbail@nvidia.com>
network policy's logicalport cache gets out-of-sync on pod deletion
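A minimal sketch of the fixed flow, with hypothetical cache and function names standing in for the real ones in the ovn-kubernetes network-policy handler: on a failed logical-switch-port UUID lookup, purge the pod's stale cache entries before returning, so a re-added pod with the same name starts clean.

```go
package main

import "fmt"

// Hypothetical caches, standing in for the real ones in the
// ovn-kubernetes network-policy code.
var (
	logicalPortUUIDCache = map[string]string{} // pod name -> lsp UUID
	localPodCache        = map[string]bool{}   // pods already added to the port_group
)

// handleLocalPodAdd sketches the fixed ordering: if the logical switch
// port UUID for the pod cannot be found (e.g. the pod was just
// deleted), clean up any stale cache entries for that pod before
// returning, instead of returning early and leaving them behind.
func handleLocalPodAdd(pod string) error {
	uuid, ok := logicalPortUUIDCache[pod]
	if !ok {
		// previously the code returned here without cleanup
		delete(localPodCache, pod)
		return fmt.Errorf("no logical switch port UUID for pod %q", pod)
	}
	localPodCache[pod] = true
	fmt.Printf("added port %s for pod %s to port_group\n", uuid, pod)
	return nil
}

func main() {
	_ = handleLocalPodAdd("ns/web-0")
}
```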
when we create a network policy whose ingress (or egress) rules contain only ipBlock fields and nothing else, we still end up creating address_sets. for example, say we have 3 ingress rules and each rule captures only an ipBlock: we end up creating 3 address_sets (one per ingress rule), and none of them holds any IP addresses. this also results in the creation of 6 ACLs: 3 ACLs for the ipBlock CIDRs and 3 ACLs referencing the empty address_sets. the fix in this commit removes a lot of these unnecessary ACLs. Fixes: openshift#718 Signed-off-by: Girish Moodalbail <gmoodalbail@nvidia.com>
create address_sets only when required for network policy rules
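The gist of the fix, as a sketch with pared-down types (the real code uses NetworkPolicyPeer from k8s.io/api/networking/v1 and the ovn-kubernetes address-set machinery): create an address_set for a rule only when some peer actually selects pods or namespaces, since an ipBlock-only rule can match on the CIDR directly in the ACL.

```go
package main

import "fmt"

// Minimal stand-ins for the NetworkPolicy peer fields that matter
// here (the real types live in k8s.io/api/networking/v1).
type IPBlock struct{ CIDR string }
type Peer struct {
	IPBlock           *IPBlock
	PodSelector       map[string]string
	NamespaceSelector map[string]string
}

// needsAddressSet reports whether a rule's peers select any pods or
// namespaces. An ipBlock-only rule matches on the CIDR directly in
// the ACL, so no address_set (and no second ACL referencing an
// empty set) is needed for it.
func needsAddressSet(peers []Peer) bool {
	for _, p := range peers {
		if p.PodSelector != nil || p.NamespaceSelector != nil {
			return true
		}
	}
	return false
}

func main() {
	ipBlockOnly := []Peer{{IPBlock: &IPBlock{CIDR: "10.0.0.0/16"}}}
	fmt.Println(needsAddressSet(ipBlockOnly)) // false: skip the address_set
}
```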
today, ovnkube --init-master uses `hostname` whilst ovnkube --init-node uses K8S_NODE. furthermore, this environment variable is required only for the ovnkube daemons. Signed-off-by: Girish Moodalbail <gmoodalbail@nvidia.com>
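A sketch of a unified lookup, assuming the daemonset passes the node name in the K8S_NODE environment variable as the commit describes; the hostname fallback here is illustrative, not necessarily what the fix does.

```go
package main

import (
	"fmt"
	"os"
)

// nodeName prefers the K8S_NODE environment variable (which the
// daemonset can set from the downward API) and falls back to the OS
// hostname, sketching consistent behavior across --init-master and
// --init-node.
func nodeName() (string, error) {
	if n := os.Getenv("K8S_NODE"); n != "" {
		return n, nil
	}
	return os.Hostname()
}

func main() {
	n, err := nodeName()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("node name:", n)
}
```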
this script has a lot of things that are not right and it doesn't work, so stop delivering this file. for instance, running the 'stop' command on a daemon from that container just kills the container, which then gets restarted; if you can't stop, then you can't start. much of the output this script provides can be obtained from the host by inspecting the /var/run/openvswitch or /var/log/openvswitch directories. Signed-off-by: Girish Moodalbail <gmoodalbail@nvidia.com>
miscellaneous cleanups to ovnkube.sh and yaml templates
use ${ovn_kubernetes_namespace} variable instead of 'ovn-kubernetes'
stop delivering ovn-debug.sh script
in the current code, we expand all of the addresses in a namespace and then perform a set operation against the address_set OVN NB table. this is very expensive. consider a namespace with 100 pods to which a new pod is added: we walk through all 101 addresses for the namespace and write all 101 of them to the address_set OVN NB table instead of just adding the one new address. on pod deletion, we do the same thing. Signed-off-by: Girish Moodalbail <gmoodalbail@nvidia.com>
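Purely as an illustration of the incremental approach (the set name a_default is a placeholder, not the real ovn-kubernetes naming scheme, and the real code talks to the NB database from Go rather than shelling out), ovn-nbctl's generic add/remove verbs can mutate a single value in the addresses column:

```go
package main

import (
	"fmt"
	"os/exec"
)

// updateAddressSet mutates a single address in an OVN address_set via
// ovn-nbctl's generic add/remove verbs, so each pod add or delete
// touches one value instead of rewriting the whole "addresses" column.
// verb is "add" or "remove".
func updateAddressSet(verb, set, ip string) error {
	out, err := exec.Command("ovn-nbctl", verb, "address_set", set,
		"addresses", ip).CombinedOutput()
	if err != nil {
		return fmt.Errorf("ovn-nbctl %s failed: %v: %s", verb, err, out)
	}
	return nil
}

func main() {
	// a pod with IP 10.244.1.7 joins, then leaves, the namespace's set
	_ = updateAddressSet("add", "a_default", "10.244.1.7")
	_ = updateAddressSet("remove", "a_default", "10.244.1.7")
}
```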
Allow reading the PrevResult on DEL so that we can extract the pod's IP address (if available) at deletion time. Signed-off-by: Dan Williams <dcbw@redhat.com>
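A self-contained sketch of what reading PrevResult on DEL buys: the plugin can recover the pod's IPs from the network configuration alone. The struct below is a hand-rolled view of the relevant fields rather than the containernetworking library types:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// A pared-down view of the network configuration a CNI plugin receives
// on stdin. When the runtime supports it, CNI DEL includes the
// "prevResult" returned by the earlier ADD, so the pod's IPs can be
// recovered at teardown without any other state.
type netConf struct {
	CNIVersion string `json:"cniVersion"`
	PrevResult *struct {
		IPs []struct {
			Address string `json:"address"`
		} `json:"ips"`
	} `json:"prevResult"`
}

func main() {
	var conf netConf
	if err := json.NewDecoder(os.Stdin).Decode(&conf); err != nil {
		fmt.Fprintln(os.Stderr, "bad netconf:", err)
		os.Exit(1)
	}
	if conf.PrevResult == nil {
		fmt.Println("no prevResult on DEL; nothing to extract")
		return
	}
	for _, ip := range conf.PrevResult.IPs {
		fmt.Println("pod IP at deletion time:", ip.Address)
	}
}
```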
With recent changes to OVN to block untracked traffic like ARP, we now need to explicitly allow it in the default ACLs. Closes: openshift#1076 Signed-off-by: Tim Rozet <trozet@redhat.com>
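For illustration, an explicit ARP-allow at the ovn-nbctl level might look like the sketch below; the switch name and priority are placeholders, not necessarily what this commit installs:

```go
package main

import (
	"fmt"
	"os/exec"
)

// allowARP adds ACLs that explicitly permit ARP traffic on a logical
// switch in both directions, so that a default-deny ACL set no longer
// drops it. "ls0" and priority 1010 below are placeholders.
func allowARP(sw string) error {
	for _, dir := range []string{"from-lport", "to-lport"} {
		out, err := exec.Command("ovn-nbctl", "acl-add",
			sw, dir, "1010", "arp", "allow").CombinedOutput()
		if err != nil {
			return fmt.Errorf("acl-add %s failed: %v: %s", dir, err, out)
		}
	}
	return nil
}

func main() {
	if err := allowARP("ls0"); err != nil {
		fmt.Println(err)
	}
}
```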
this commit adds readiness probes for OVN NB/SB, ovn-controller, ovn-northd, ovs-vswitchd, and ovsdb-server Signed-off-by: Girish Moodalbail <gmoodalbail@nvidia.com>
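The probes themselves live in the yaml templates and ovnkube.sh; as a sketch of the kind of check such a probe can run, the helper below execs a status command against a daemon's control socket and reports ready on exit code 0 (whether the probes use exactly these commands is not shown in this PR):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ready returns nil if the target daemon answers on its control
// socket. ovn-northd responds to "status", ovs-vswitchd to "version".
// A readiness probe can exec a check like this and treat exit code 0
// as ready.
func ready(appctl, target, cmd string) error {
	out, err := exec.Command(appctl, "-t", target, cmd).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s not ready: %v: %s", target, err, out)
	}
	return nil
}

func main() {
	if err := ready("ovn-appctl", "ovn-northd", "status"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("ready")
}
```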
we added support for using an ovn_encap_port other than the default of 6081. however, we didn't add support for it in ovnkube.sh, nor for exporting the value through an environment variable to the ovnkube-node container. Signed-off-by: Girish Moodalbail <gmoodalbail@nvidia.com>
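A sketch of the plumbing on the consuming side, assuming the value arrives in an OVN_ENCAP_PORT environment variable (the variable name is an assumption here); 6081 is the IANA default port for Geneve:

```go
package main

import (
	"fmt"
	"os"
)

// encapPort returns the Geneve encapsulation port to use, taken from
// the OVN_ENCAP_PORT environment variable when set (hypothetical name)
// and falling back to the Geneve default, 6081.
func encapPort() string {
	if p := os.Getenv("OVN_ENCAP_PORT"); p != "" {
		return p
	}
	return "6081"
}

func main() {
	fmt.Println("using encap port:", encapPort())
}
```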
both the NB and SB containers run iptables rules to open the respective ports on which they listen for client connections. post commit 4284873 (fix for the iptables issue in CentOS 8), we are seeing errors in the nb/sb db log files that chroot cannot change the root directory to '/host', since we forgot to add the volumes and volume mounts in ovnkube-db.yaml.j2. Signed-off-by: Girish Moodalbail <gmoodalbail@nvidia.com>
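For context, the CentOS 8 fix makes the containers run the host's iptables binary via chroot /host, so rules land in the same backend (nft vs legacy) as the host's own rules; that is exactly why the /host mount must exist in the pod spec. A sketch with an illustrative rule:

```go
package main

import (
	"fmt"
	"os/exec"
)

// openDBPort opens a database listener port using the *host's*
// iptables by chrooting into /host, which only works if the pod spec
// mounts the host root filesystem there. The rule is illustrative.
func openDBPort(port int) error {
	out, err := exec.Command("chroot", "/host", "iptables",
		"-I", "INPUT", "-p", "tcp", "--dport", fmt.Sprint(port),
		"-j", "ACCEPT").CombinedOutput()
	if err != nil {
		return fmt.Errorf("iptables failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	_ = openDBPort(6641) // OVN NB database default client port
}
```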
this commit fixes a small issue introduced by acbe1c5 (Support latest OVN): the current code always uses ovs-appctl even when ovn-appctl is available. Signed-off-by: Girish Moodalbail <gmoodalbail@nvidia.com>
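A sketch of the runtime selection in Go terms (the actual fix is in the shell scripts): prefer ovn-appctl when it is on PATH, else fall back to ovs-appctl.

```go
package main

import (
	"fmt"
	"os/exec"
)

// appctl prefers ovn-appctl (shipped with recent OVN, now split out
// from OVS) and falls back to ovs-appctl on older installs, mirroring
// the runtime check the commit describes.
func appctl() string {
	if path, err := exec.LookPath("ovn-appctl"); err == nil {
		return path
	}
	return "ovs-appctl"
}

func main() {
	fmt.Println("using:", appctl())
}
```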
build the expected network policies map correctly
support specifying custom ovn_encap_port in ovnkube.sh
fix chroot's cannot change root directory to '/host' error
this api uses the incorrect path ${OVN_RUN_DIR}/ovn-northd.ctl; the control file path for ovn-northd is actually of the form ${OVN_RUN_DIR}/ovn-northd.<pid>.ctl. Signed-off-by: Girish Moodalbail <gmoodalbail@nvidia.com>
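A sketch of building the correct per-pid path by reading the daemon's pidfile; the run directory below is just an example value, and `ovn-appctl -t ovn-northd` performs this resolution automatically:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// northdCtlPath builds the per-pid control socket path for ovn-northd,
// $OVN_RUN_DIR/ovn-northd.<pid>.ctl, with the pid read from the
// daemon's pidfile in the same directory.
func northdCtlPath(runDir string) (string, error) {
	pid, err := os.ReadFile(filepath.Join(runDir, "ovn-northd.pid"))
	if err != nil {
		return "", err
	}
	return filepath.Join(runDir,
		fmt.Sprintf("ovn-northd.%s.ctl", strings.TrimSpace(string(pid)))), nil
}

func main() {
	p, err := northdCtlPath("/var/run/ovn") // example run dir
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(p)
}
```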
/retest Please review the full test history for this PR and help us cut down flakes.
27 similar comments
@dcbw: The following test failed, say /retest to rerun all failed tests.
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
@openshift/networking