doc: Fix outdated OpenFlow table numbers
The numbering of the OpenFlow tables OFTABLE_REMOTE_OUTPUT,
OFTABLE_LOCAL_OUTPUT, and OFTABLE_CHECK_LOOPBACK changed, but the
documentation and code comments were not updated to match.

Fixes: dd94f12 ("northd: MAC learning: Add logical flows for fdb.")

Signed-off-by: Xavier Simonart <xsimonar@redhat.com>
Signed-off-by: Numan Siddique <numans@ovn.org>
simonartxavier authored and numansiddique committed Oct 6, 2021
1 parent 53f67bc commit 2caf5a2
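
For reference, the renumbering this commit propagates into comments and
documentation, sketched as C macros. This is a sketch only: the
authoritative definitions live in controller/lflow.h, and the old/new
values here are inferred from the comments corrected in the diff below.

    /* Sketch of the renumbering documented by this commit; the real
     * definitions live in controller/lflow.h.  New values are inferred
     * from the corrected comments in the diff below. */
    #define OFTABLE_REMOTE_OUTPUT   37  /* was 32: output to remote chassis */
    #define OFTABLE_LOCAL_OUTPUT    38  /* was 33: output to local chassis */
    #define OFTABLE_CHECK_LOOPBACK  39  /* was 34: drop when inport == outport */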
Showing 2 changed files with 55 additions and 55 deletions.
40 changes: 20 additions & 20 deletions controller/physical.c
@@ -958,12 +958,12 @@ consider_port_binding(struct ovsdb_idl_index *sbrec_port_binding_by_name,
|| ha_chassis_group_is_active(binding->ha_chassis_group,
active_tunnels, chassis))) {

- /* Table 33, priority 100.
+ /* Table 38, priority 100.
* =======================
*
* Implements output to local hypervisor. Each flow matches a
* logical output port on the local hypervisor, and resubmits to
- * table 34. For ports of type "chassisredirect", the logical
+ * table 39. For ports of type "chassisredirect", the logical
* output port is changed from the "chassisredirect" port to the
* underlying distributed port. */

@@ -1010,7 +1010,7 @@ consider_port_binding(struct ovsdb_idl_index *sbrec_port_binding_by_name,
put_load(zone_ids.snat, MFF_LOG_SNAT_ZONE, 0, 32, ofpacts_p);
}

- /* Resubmit to table 34. */
+ /* Resubmit to table 39. */
put_resubmit(OFTABLE_CHECK_LOOPBACK, ofpacts_p);
}

@@ -1315,7 +1315,7 @@ consider_port_binding(struct ovsdb_idl_index *sbrec_port_binding_by_name,

} else if (!tun && !is_ha_remote) {
/* Remote port connected by localnet port */
- /* Table 33, priority 100.
+ /* Table 38, priority 100.
* =======================
*
* Implements switching to localnet port. Each flow matches a
@@ -1332,7 +1332,7 @@ consider_port_binding(struct ovsdb_idl_index *sbrec_port_binding_by_name,

put_load(localnet_port->tunnel_key, MFF_LOG_OUTPORT, 0, 32, ofpacts_p);

- /* Resubmit to table 33. */
+ /* Resubmit to table 38. */
put_resubmit(OFTABLE_LOCAL_OUTPUT, ofpacts_p);
ofctrl_add_flow(flow_table, OFTABLE_LOCAL_OUTPUT, 100,
binding->header_.uuid.parts[0],
@@ -1344,7 +1344,7 @@ consider_port_binding(struct ovsdb_idl_index *sbrec_port_binding_by_name,

/* Remote port connected by tunnel */

- /* Table 32, priority 100.
+ /* Table 38, priority 100.
* =======================
*
* Handles traffic that needs to be sent to a remote hypervisor. Each
@@ -1516,7 +1516,7 @@ consider_mc_group(struct ovsdb_idl_index *sbrec_port_binding_by_name,
}
}

- /* Table 33, priority 100.
+ /* Table 38, priority 100.
* =======================
*
* Handle output to the local logical ports in the multicast group, if
@@ -1532,7 +1532,7 @@ consider_mc_group(struct ovsdb_idl_index *sbrec_port_binding_by_name,
&match, &ofpacts, &mc->header_.uuid);
}

- /* Table 32, priority 100.
+ /* Table 37, priority 100.
* =======================
*
* Handle output to the remote chassis in the multicast group, if
@@ -1674,7 +1674,7 @@ physical_run(struct physical_ctx *p_ctx,
p_ctx->chassis, flow_table, &ofpacts);
}

- /* Handle output to multicast groups, in tables 32 and 33. */
+ /* Handle output to multicast groups, in tables 37 and 38. */
const struct sbrec_multicast_group *mc;
SBREC_MULTICAST_GROUP_TABLE_FOR_EACH (mc, p_ctx->mc_group_table) {
consider_mc_group(p_ctx->sbrec_port_binding_by_name,
@@ -1695,7 +1695,7 @@ physical_run(struct physical_ctx *p_ctx,
* encapsulations have metadata about the ingress and egress logical ports.
* VXLAN encapsulations have metadata about the egress logical port only.
* We set MFF_LOG_DATAPATH, MFF_LOG_INPORT, and MFF_LOG_OUTPORT from the
- * tunnel key data where possible, then resubmit to table 33 to handle
+ * tunnel key data where possible, then resubmit to table 38 to handle
* packets to the local hypervisor. */
struct chassis_tunnel *tun;
HMAP_FOR_EACH (tun, hmap_node, p_ctx->chassis_tunnels) {
@@ -1790,46 +1790,46 @@ physical_run(struct physical_ctx *p_ctx,
}
}

- /* Table 32, priority 150.
+ /* Table 37, priority 150.
* =======================
*
* Handles packets received from a VXLAN tunnel which get resubmitted to
* OFTABLE_LOG_INGRESS_PIPELINE due to lack of needed metadata in VXLAN,
- * explicitly skip sending back out any tunnels and resubmit to table 33
+ * explicitly skip sending back out any tunnels and resubmit to table 38
* for local delivery.
*/
struct match match;
match_init_catchall(&match);
match_set_reg_masked(&match, MFF_LOG_FLAGS - MFF_REG0,
MLF_RCV_FROM_RAMP, MLF_RCV_FROM_RAMP);

- /* Resubmit to table 33. */
+ /* Resubmit to table 38. */
ofpbuf_clear(&ofpacts);
put_resubmit(OFTABLE_LOCAL_OUTPUT, &ofpacts);
ofctrl_add_flow(flow_table, OFTABLE_REMOTE_OUTPUT, 150, 0,
&match, &ofpacts, hc_uuid);

- /* Table 32, priority 150.
+ /* Table 37, priority 150.
* =======================
*
* Packets that should not be sent to other hypervisors.
*/
match_init_catchall(&match);
match_set_reg_masked(&match, MFF_LOG_FLAGS - MFF_REG0,
MLF_LOCAL_ONLY, MLF_LOCAL_ONLY);
- /* Resubmit to table 33. */
+ /* Resubmit to table 38. */
ofpbuf_clear(&ofpacts);
put_resubmit(OFTABLE_LOCAL_OUTPUT, &ofpacts);
ofctrl_add_flow(flow_table, OFTABLE_REMOTE_OUTPUT, 150, 0,
&match, &ofpacts, hc_uuid);

- /* Table 32, priority 150.
+ /* Table 37, priority 150.
* =======================
*
* Handles packets received from ports of type "localport". These ports
* are present on every hypervisor. Traffic that originates at one should
* never go over a tunnel to a remote hypervisor, so resubmit them to table
- * 33 for local delivery. */
+ * 38 for local delivery. */
match_init_catchall(&match);
ofpbuf_clear(&ofpacts);
put_resubmit(OFTABLE_LOCAL_OUTPUT, &ofpacts);
@@ -1849,7 +1849,7 @@ physical_run(struct physical_ctx *p_ctx,
}
}

- /* Table 32, Priority 0.
+ /* Table 37, Priority 0.
* =======================
*
* Resubmit packets that are not directed at tunnels or part of a
@@ -1860,11 +1860,11 @@ physical_run(struct physical_ctx *p_ctx,
ofctrl_add_flow(flow_table, OFTABLE_REMOTE_OUTPUT, 0, 0, &match,
&ofpacts, hc_uuid);

- /* Table 34, Priority 0.
+ /* Table 39, Priority 0.
* =======================
*
* Resubmit packets that don't output to the ingress port (already checked
- * in table 33) to the logical egress pipeline, clearing the logical
+ * in table 38) to the logical egress pipeline, clearing the logical
* registers (for consistent behavior with packets that get tunneled). */
match_init_catchall(&match);
ofpbuf_clear(&ofpacts);
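
Most of the hunks above revise comments around the same flow-installation
pattern: build a match, append a resubmit action, and install the flow
under the renumbered OFTABLE_* constant. A condensed sketch of that
pattern, reusing the helper calls and identifiers exactly as they appear
in the hunks (flow_table, ofpacts, and hc_uuid are assumed to be set up
as in physical_run):

    /* Sketch condensing the pattern above: packets flagged as received
     * from a ramp-switch tunnel skip the remote-output table (37) and
     * go straight to local output (38). */
    struct match match;
    match_init_catchall(&match);
    match_set_reg_masked(&match, MFF_LOG_FLAGS - MFF_REG0,
                         MLF_RCV_FROM_RAMP, MLF_RCV_FROM_RAMP);

    ofpbuf_clear(&ofpacts);
    put_resubmit(OFTABLE_LOCAL_OUTPUT, &ofpacts);      /* resubmit to table 38 */
    ofctrl_add_flow(flow_table, OFTABLE_REMOTE_OUTPUT, /* install in table 37 */
                    150, 0, &match, &ofpacts, hc_uuid);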
70 changes: 35 additions & 35 deletions ovn-architecture.7.xml
@@ -1224,8 +1224,8 @@
output port field, and since they do not carry a logical output port
field in the tunnel key, when a packet is received from ramp switch
VXLAN tunnel by an OVN hypervisor, the packet is resubmitted to table 8
- to determine the output port(s); when the packet reaches table 32,
- these packets are resubmitted to table 33 for local delivery by
+ to determine the output port(s); when the packet reaches table 37,
+ these packets are resubmitted to table 38 for local delivery by
checking a MLF_RCV_FROM_RAMP flag, which is set when the packet
arrives from a ramp tunnel.
</p>
@@ -1364,9 +1364,9 @@
<dl>
<dt><code>output:</code></dt>
<dd>
- Implemented by resubmitting the packet to table 32. If the pipeline
+ Implemented by resubmitting the packet to table 37. If the pipeline
executes more than one <code>output</code> action, then each one is
- separately resubmitted to table 32. This can be used to send
+ separately resubmitted to table 37. This can be used to send
multiple copies of the packet to multiple ports. (If the packet was
not modified between the <code>output</code> actions, and some of the
copies are destined to the same hypervisor, then using a logical
@@ -1430,54 +1430,54 @@

<li>
<p>
- OpenFlow tables 32 through 47 implement the <code>output</code> action
- in the logical ingress pipeline. Specifically, table 32 handles
- packets to remote hypervisors, table 33 handles packets to the local
- hypervisor, and table 34 checks whether packets whose logical ingress
+ OpenFlow tables 37 through 39 implement the <code>output</code> action
+ in the logical ingress pipeline. Specifically, table 37 handles
+ packets to remote hypervisors, table 38 handles packets to the local
+ hypervisor, and table 39 checks whether packets whose logical ingress
and egress port are the same should be discarded.
</p>

<p>
Logical patch ports are a special case. Logical patch ports do not
have a physical location and effectively reside on every hypervisor.
- Thus, flow table 33, for output to ports on the local hypervisor,
+ Thus, flow table 38, for output to ports on the local hypervisor,
naturally implements output to unicast logical patch ports too.
However, applying the same logic to a logical patch port that is part
of a logical multicast group yields packet duplication, because each
hypervisor that contains a logical port in the multicast group will
also output the packet to the logical patch port. Thus, multicast
- groups implement output to logical patch ports in table 32.
+ groups implement output to logical patch ports in table 37.
</p>

<p>
- Each flow in table 32 matches on a logical output port for unicast or
+ Each flow in table 37 matches on a logical output port for unicast or
multicast logical ports that include a logical port on a remote
hypervisor. Each flow's actions implement sending a packet to the port
it matches. For unicast logical output ports on remote hypervisors,
the actions set the tunnel key to the correct value, then send the
packet on the tunnel port to the correct hypervisor. (When the remote
hypervisor receives the packet, table 0 there will recognize it as a
- tunneled packet and pass it along to table 33.) For multicast logical
+ tunneled packet and pass it along to table 38.) For multicast logical
output ports, the actions send one copy of the packet to each remote
hypervisor, in the same way as for unicast destinations. If a
multicast group includes a logical port or ports on the local
- hypervisor, then its actions also resubmit to table 33. Table 32 also
+ hypervisor, then its actions also resubmit to table 38. Table 37 also
includes:
</p>

<ul>
<li>
A higher-priority rule to match packets received from ramp switch
tunnels, based on flag MLF_RCV_FROM_RAMP, and resubmit these packets
- to table 33 for local delivery. Packets received from ramp switch
+ to table 38 for local delivery. Packets received from ramp switch
tunnels reach here because of a lack of logical output port field in
the tunnel key and thus these packets needed to be submitted to table
8 to determine the output port.
</li>
<li>
A higher-priority rule to match packets received from ports of type
<code>localport</code>, based on the logical input port, and resubmit
- these packets to table 33 for local delivery. Ports of type
+ these packets to table 38 for local delivery. Ports of type
<code>localport</code> exist on every hypervisor and by definition
their traffic should never go out through a tunnel.
</li>
@@ -1492,32 +1492,32 @@
packets, the packets only need to be delivered to local ports.
</li>
<li>
- A fallback flow that resubmits to table 33 if there is no other
+ A fallback flow that resubmits to table 38 if there is no other
match.
</li>
</ul>

<p>
- Flows in table 33 resemble those in table 32 but for logical ports that
+ Flows in table 38 resemble those in table 37 but for logical ports that
reside locally rather than remotely. For unicast logical output ports
- on the local hypervisor, the actions just resubmit to table 34. For
+ on the local hypervisor, the actions just resubmit to table 39. For
multicast output ports that include one or more logical ports on the
local hypervisor, for each such logical port <var>P</var>, the actions
change the logical output port to <var>P</var>, then resubmit to table
- 34.
+ 39.
</p>

<p>
A special case is that when a localnet port exists on the datapath,
remote port is connected by switching to the localnet port. In this
- case, instead of adding a flow in table 32 to reach the remote port, a
- flow is added in table 33 to switch the logical outport to the localnet
- port, and resubmit to table 33 as if it were unicasted to a logical
+ case, instead of adding a flow in table 37 to reach the remote port, a
+ flow is added in table 38 to switch the logical outport to the localnet
+ port, and resubmit to table 38 as if it were unicasted to a logical
port on the local hypervisor.
</p>

<p>
- Table 34 matches and drops packets for which the logical input and
+ Table 39 matches and drops packets for which the logical input and
output ports are the same and the MLF_ALLOW_LOOPBACK flag is not
set. It also drops MLF_LOCAL_ONLY packets directed to a localnet port.
It resubmits other packets to table 40.
@@ -1545,7 +1545,7 @@
<li>
<p>
Table 64 bypasses OpenFlow loopback when MLF_ALLOW_LOOPBACK is set.
- Logical loopback was handled in table 34, but OpenFlow by default also
+ Logical loopback was handled in table 39, but OpenFlow by default also
prevents loopback to the OpenFlow ingress port. Thus, when
MLF_ALLOW_LOOPBACK is set, OpenFlow table 64 saves the OpenFlow ingress
port, sets it to zero, resubmits to table 65 for logical-to-physical
@@ -1583,8 +1583,8 @@
traverse tables 0 to 65 as described in the previous section
<code>Architectural Physical Life Cycle of a Packet</code>, using the
logical datapath representing the logical switch that the sender is
- attached to. At table 32, the packet will use the fallback flow that
- resubmits locally to table 33 on the same hypervisor. In this case,
+ attached to. At table 37, the packet will use the fallback flow that
+ resubmits locally to table 38 on the same hypervisor. In this case,
all of the processing from table 0 to table 65 occurs on the hypervisor
where the sender resides.
</p>
@@ -1615,7 +1615,7 @@
<p>
The packet traverses tables 8 to 65 a third and final time. If the
destination VM or container resides on a remote hypervisor, then table
- 32 will send the packet on a tunnel port from the sender's hypervisor
+ 37 will send the packet on a tunnel port from the sender's hypervisor
to the remote hypervisor. Finally table 65 will output the packet
directly to the destination VM or container.
</p>
@@ -1642,9 +1642,9 @@
When a hypervisor processes a packet on a logical datapath
representing a logical switch, and the logical egress port is a
<code>l3gateway</code> port representing connectivity to a gateway
- router, the packet will match a flow in table 32 that sends the
+ router, the packet will match a flow in table 37 that sends the
packet on a tunnel port to the chassis where the gateway router
- resides. This processing in table 32 is done in the same manner as
+ resides. This processing in table 37 is done in the same manner as
for VIFs.
</p>

@@ -1737,21 +1737,21 @@
chassis, one additional mechanism is required. When a packet
leaves the ingress pipeline and the logical egress port is the
distributed gateway port, one of two different sets of actions is
- required at table 32:
+ required at table 37:
</p>

<ul>
<li>
If the packet can be handled locally on the sender's hypervisor
(e.g. one-to-one NAT traffic), then the packet should just be
- resubmitted locally to table 33, in the normal manner for
+ resubmitted locally to table 38, in the normal manner for
distributed logical patch ports.
</li>

<li>
However, if the packet needs to be handled on the chassis
associated with the distributed gateway port (e.g. one-to-many
- SNAT traffic or non-NAT traffic), then table 32 must send the
+ SNAT traffic or non-NAT traffic), then table 37 must send the
packet on a tunnel port to that chassis.
</li>
</ul>
@@ -1763,11 +1763,11 @@
egress port to the type <code>chassisredirect</code> logical port is
simply a way to indicate that although the packet is destined for
the distributed gateway port, it needs to be redirected to a
- different chassis. At table 32, packets with this logical egress
- port are sent to a specific chassis, in the same way that table 32
+ different chassis. At table 37, packets with this logical egress
+ port are sent to a specific chassis, in the same way that table 37
directs packets whose logical egress port is a VIF or a type
<code>l3gateway</code> port to different chassis. Once the packet
- arrives at that chassis, table 33 resets the logical egress port to
+ arrives at that chassis, table 38 resets the logical egress port to
the value representing the distributed gateway port. For each
distributed gateway port, there is one type
<code>chassisredirect</code> port, in addition to the distributed
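
The loopback check that the corrected documentation assigns to table 39
can be summarized as follows. This is a hedged pseudocode sketch of the
behavior described in the paragraphs above, not the OpenFlow flows that
ovn-controller actually installs; the egress-pipeline table number (40)
is taken from the documentation text, and the helper names are
illustrative only.

    /* Pseudocode sketch of table 39 (OFTABLE_CHECK_LOOPBACK) as described
     * in the documentation above; the real logic is OpenFlow flows, not C. */
    if (log_inport == log_outport && !(flags & MLF_ALLOW_LOOPBACK)) {
        drop();                  /* disallowed logical loopback */
    } else if ((flags & MLF_LOCAL_ONLY) && outport_is_localnet(log_outport)) {
        drop();                  /* local-only packets must not reach localnet */
    } else {
        resubmit_to_table(40);   /* logical egress pipeline */
    }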
