
Create a NET_BUFFER_LIST for each existing NET_BUFFER #2

Closed
alexpilotti opened this issue Aug 2, 2014 · 1 comment
@alexpilotti

See ML thread: http://openvswitch.org/pipermail/dev/2014-August/043403.html

Address proper packet fragmentation across multiple NET_BUFFERs in a NET_BUFFER_LIST.

Additional relevant details from the thread:

We have 3 cases to consider:

a) We've got one NET_BUFFER_LIST that contains one NET_BUFFER: the best case for us, nothing special to do.

b) We've got a TCP or UDP packet split across multiple NET_BUFFERs in one NET_BUFFER_LIST.
This is not a very good case for us, because the only packet fields guaranteed to be the same for all NET_BUFFERs are:
eth, net layer info (transport protocol only), transport layer (source and dest port only), ipv4 info (source and dest ip), arp info, and ipv6 info.

The fields that are NOT guaranteed to be the same across NET_BUFFERs are:
net layer: TOS, TTL, fragment type
transport layer: TCP flags
ipv6 info: flow label.

c) We have something other than a TCP/UDP packet (e.g. icmp4, icmp6, etc.):
NONE of these fields are guaranteed to be the same for each NET_BUFFER in the NET_BUFFER_LIST:
net layer: TOS, TTL, fragment type (not sure where MPLS fits)
transport layer: icmp type, icmp code (icmp4 and icmp6)
ipv4: src and dest address (they are the same only for UDP and TCP)
arp: perhaps the same problem as above, for the ip address
ipv6: src address, dest address, flow label

It can also happen (i.e. we have no guarantee otherwise) that we get a NET_BUFFER_LIST with 2 NET_BUFFERs: the first is some icmp6 packet and the second contains an ICMP6 Neighbor Discovery. With the current implementation of OvsExtractFlow, we would fail to match a flow for the Neighbor Discovery packet; instead, the flow extracted from the first icmp6 packet would be used for both packets.

@alexpilotti alexpilotti added the bug label Aug 2, 2014
@alexpilotti alexpilotti added this to the P0 milestone Aug 5, 2014
@Samuel-Ghinet Samuel-Ghinet self-assigned this Aug 5, 2014
@svinturis svinturis assigned svinturis and unassigned Samuel-Ghinet Apr 7, 2015
blp pushed a commit to openvswitch/ovs that referenced this issue May 28, 2015
Added support for creating and handling multiple NBLs with only one NB
for ingress data path.

Signed-off-by: Sorin Vinturis <svinturis at cloudbasesolutions.com>
Reported-by: Alessandro Pilotti <apilotti at cloudbasesolutions.com>
Reported-at: openvswitch/ovs-issues#2
Acked-by: Nithin Raju <nithin@vmware.com>
Signed-off-by: Ben Pfaff <blp@nicira.com>
@svinturis

Issue addressed by the above patch. Closed.
