Adds CNI check for table ls_out_port_security flows #1760
Conversation
go-controller/pkg/cni/ovs.go (outdated revision):

```go
		return false, nil
	for _, query := range queries {
		// ovs-ofctl dumps error on stderr, so stdout will only dump flow data if matches the query.
		stdout, err := ofctlExec("dump-flows", "br-int", query)
```
I am not sure batching the query will work here with ovs-ofctl. But even if it does, it can give you a false positive in a scenario where one table has a flow but the second table doesn't: stdout will still have some flow as output, but that doesn't necessarily mean that both tables have the required flow. So it seems like you need to separate the queries here.
If the 2nd iteration didn't have an entry or failed, then stdout would still be set to the output of that command. If either of those checks fails for any query, the function returns false and starts over until all queries are successful.
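The per-query retry logic described above can be sketched as follows. `checkPorts` and `dumpFlows` are hypothetical stand-ins for the real helpers in ovs.go (the actual code shells out via `ofctlExec("dump-flows", "br-int", query)`); a stubbed dump function keeps the sketch self-contained:

```go
package main

import (
	"fmt"
	"strings"
)

// checkPorts runs each ovs-ofctl query separately and only succeeds when
// every query returns at least one matching flow. This avoids the false
// positive described above, where one table has a flow but another doesn't.
func checkPorts(queries []string, dumpFlows func(query string) (string, error)) (bool, error) {
	for _, query := range queries {
		stdout, err := dumpFlows(query)
		if err != nil {
			return false, err
		}
		// ovs-ofctl prints errors on stderr, so a non-empty stdout means
		// at least one flow matched this specific query.
		if strings.TrimSpace(stdout) == "" {
			return false, nil
		}
	}
	// Every query matched at least one flow.
	return true, nil
}

func main() {
	// Fake flow tables: table=9 has a flow installed, table=48 does not yet.
	flows := map[string]string{
		"table=9,dl_src=0a:58:0a:80:02:03":  "cookie=0x0, table=9 actions=resubmit(,10)",
		"table=48,dl_dst=0a:58:0a:80:02:03": "",
	}
	dump := func(q string) (string, error) { return flows[q], nil }
	queries := []string{"table=9,dl_src=0a:58:0a:80:02:03", "table=48,dl_dst=0a:58:0a:80:02:03"}

	ok, _ := checkPorts(queries, dump)
	fmt.Println(ok) // false: the table=48 flow is not installed yet

	flows["table=48,dl_dst=0a:58:0a:80:02:03"] = "cookie=0x0, table=48 actions=output:3"
	ok, _ = checkPorts(queries, dump)
	fmt.Println(ok) // true: both queries now match a flow
}
```

The caller would invoke this in a wait loop until it returns true (or a timeout fires), which is the "start over until all queries are successful" behavior described above.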
OVS table 48 is SB table ls_out_stateful in the September 2020 OVN release. I think you just need to check table 50 for the MAC address?

If the OpenFlow rules are written serially, starting from Table-1 all the way up to Table-50, then wouldn't it suffice to just check the last table?
@girishmg Sorry, this is my fault: I had suggested table=48 for ls_out_port_sec_ip to @trozet based on ovn20.06. I had forgotten that in ovn20.09 a new input/output stage is added to optimize in/out ACLs, so ls_out_port_sec_ip will indeed become table=49. Unfortunately there's no guarantee that stages/tables won't be added or removed in future OVN releases. Until we find a better way to ensure that all the latest NB changes have been propagated to OVS (e.g., as suggested here), it might be better to use a more flexible check. What about leaving the table id out, or checking a range of tables instead of a single table?
This might not always be the case. Flows for different tables are different records in the Logical_Flow table in the SB. In this specific case the flows might be created in the same ovn-northd -> SB transaction, but there's no guarantee that these records will be sent ordered by table id to ovn-controller and, in turn, to OVS. I need to double check, but I think this can cause flows for table=50 to get installed before flows for table=49.
I don't think we can dump and filter on all flows; the perf impact would be too high. I'll modify the patch to try multiple tables. In the future we need a better signal to know a pod is fully wired.
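The "try multiple tables" approach can be sketched like this. `anyTableHasFlow` is a hypothetical helper (not the actual patch): since the ls_out_port_security OpenFlow table id differs across OVN releases (48 in ovn20.06, 49 in ovn20.09), it queries each candidate table id and succeeds as soon as any of them has a matching flow, rather than dumping all flows. `dumpFlows` again stands in for the real `ofctlExec` call:

```go
package main

import (
	"fmt"
	"strings"
)

// anyTableHasFlow queries each candidate table id in turn and returns true
// as soon as one query matches a flow. This keeps the per-table filtering
// (cheap for ovs-ofctl) while tolerating table-id shifts between OVN releases.
func anyTableHasFlow(tables []int, match string, dumpFlows func(string) (string, error)) (bool, error) {
	for _, t := range tables {
		out, err := dumpFlows(fmt.Sprintf("table=%d,%s", t, match))
		if err != nil {
			return false, err
		}
		if strings.TrimSpace(out) != "" {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	// Fake dump: only table=49 has the port-security flow installed,
	// as would be the case on an ovn20.09 node.
	dump := func(q string) (string, error) {
		if strings.HasPrefix(q, "table=49,") {
			return "cookie=0x0, table=49 actions=resubmit(,50)", nil
		}
		return "", nil
	}
	ok, _ := anyTableHasFlow([]int{48, 49, 50}, "dl_dst=0a:58:0a:80:02:03", dump)
	fmt.Println(ok) // true: found in table=49
}
```

The trade-off, per the discussion above, is that this check is version-tolerant but can still pass before every expected table is populated; a positive end-to-end signal that the pod is fully wired would be stronger.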
Force-pushed from 30f460e to 94dbc8d
LGTM.
/hold Looks like something with IPv6 didn't match correctly.
Force-pushed from 2bd117f to b9bc488
/retest
There is a race condition when we return that CNI ADD is complete where a pod that immediately starts and tries to access the network may fail, because not all OVS flows are present. Specifically, we are missing the check for ls_out_port_security flows in tables 48/49. Today we only check for flows in table 9 (port security), and other required flows may come a few seconds later.

Signed-off-by: Tim Rozet <trozet@redhat.com>
Force-pushed from b9bc488 to 79d5d8b
lgtm
@dceara will it create a performance issue if northd sends the records ordered by table id? I think having a deterministic order here would be helpful for consumers like ovn-k8s to write better logic; otherwise the performance impact is probably higher at the ovn-k8s level, since they'd need to fire a transaction per pod. @trozet @abhat thoughts?
@vishnoianil northd creates records in the SB DB; it has no control over how the SB sends the updates to ovn-controller. Records are hashed in the DB according to the record UUID, which is generated at transaction commit time (from northd to SB). According to the monitor conditions issued by ovn-controller, ovsdb-server (SB) sends the updates to ovn-controller in the order they're stored in memory. In my opinion the best solution is indeed to have a transaction per pod in ovn-k8s and a working mechanism in OVN to ensure that a transaction is completely processed by ovn-controller on a given node. This is what this OVN RFE is about.
There is a race condition when we return that CNI ADD is complete where
a pod that immediately starts and tries to access the network may fail,
because not all OVS flows are present. Specifically we are missing the
check for flows for ls_out_port_security, table 48. Today we only check
for flows in table 9 port security, and table 48 flows may come a few
seconds later.
Signed-off-by: Tim Rozet <trozet@redhat.com>