Address proper packet fragmentation across multiple NET_BUFFERs in a NET_BUFFER_LIST.
Additional relevant details from the thread:
We have 3 special cases:
a) We've got one NET_BUFFER_LIST that contains one NET_BUFFER - the best case for us, nothing special to do.
b) We've got a TCP or UDP packet split across multiple NET_BUFFERs in a NET_BUFFER_LIST.
This is not a very good case for us, because the only packet fields guaranteed to be the same for all NET_BUFFERs are:
eth, net layer info (transport protocol only), transport layer (source and dest port only), ipv4 info (source and dest ip), arp info, and ipv6 info.
Whereas the fields for which we have NO guarantee of being the same across NET_BUFFERs are:
net layer: TOS, TTL, fragment type
transport layer: tcp flags
ipv6 info: flow label.
c) We have something other than a TCP/UDP packet (e.g. icmp4, icmp6, etc.):
NONE of the following fields is guaranteed to be the same for each NET_BUFFER in the NET_BUFFER_LIST:
net layer: TOS, TTL, fragment type (unclear where MPLS fits in)
transport layer: icmp type, icmp code (icmp4 and icmp6)
ipv4: src and dest address (they are the same only for UDP and TCP)
arp: perhaps the same problem as above, for ip address
ipv6: src address, dest address, flow label
It can also happen (i.e. we have no guarantee otherwise) that we get a NET_BUFFER_LIST with 2 NET_BUFFERs: the first is some plain icmp6 packet, while the second contains an ICMPv6 Neighbor Discovery message. With the current implementation of OvsExtractFlow, we would fail to match a flow for the Neighbor Discovery packet - instead, the flow extracted for the first icmp6 packet would be used for both packets.
Added support for creating and handling multiple NBLs with only one NB
for ingress data path.
Signed-off-by: Sorin Vinturis <svinturis at cloudbasesolutions.com>
Reported-by: Alessandro Pilotti <apilotti at cloudbasesolutions.com>
Reported-at: openvswitch/ovs-issues#2
Acked-by: Nithin Raju <nithin@vmware.com>
Signed-off-by: Ben Pfaff <blp@nicira.com>
See ML thread: http://openvswitch.org/pipermail/dev/2014-August/043403.html