
R2.0 #80

Open

wants to merge 35 commits into base: R2.1
Conversation

liuran2011

No description provided.

vmahuli and others added 30 commits November 17, 2014 17:33
Change-Id: I1c3c2ac99fd15f77c93fddfced110ca2f2b81cde
The function that pulls outer headers assumes that the packet data
points at the layer 2 header. While this is true in the case of Ubuntu, the
commit that made this assumption did not reposition the call for
CentOS, resulting in pull failures.

Change-Id: Iaae280817820feb87a525db8c028bcba4e905bb1
Closes-BUG: #1393730
Change-Id: Ic8c84d076d078a87adaf6d4398373c368db0361c
Closes-bug: #1394147
(cherry picked from commit 9f7b8ca)
layer 2 information needs to be added.

Layer 3 unicast nexthops are shared across ipv4 and ipv6. The layer 2
protocol field of the rewrite information is updated every time a packet hits
the nexthop, based on whether the packet is ipv4 or ipv6. There are a couple
of issues with this way of doing things. One is that it is not MP safe, and the
other is that the protocol field should be updated only if there is any rewrite
information to be added.

There will not be any rewrite information for packets egressing a tunnel interface,
and hence the layer 2 protocol information should not be added.

Closes-BUG: #1394461
Change-Id: If06c41127501ed1f5971228269cfbc8a533518c6
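
A rough C sketch of the intended behaviour (field and helper names such as nh_encap, nh_encap_len and pkt_push() are illustrative, not necessarily the actual vRouter identifiers): the protocol is patched in the per-packet copy of the rewrite, and only when rewrite data exists.

    /* illustrative sketch: patch the L2 protocol per packet, never in the
     * shared nexthop rewrite, and only when rewrite data is present */
    if (nh->nh_encap_len) {
        uint8_t *l2 = pkt_push(pkt, nh->nh_encap_len);
        uint16_t proto = is_v6 ? htons(0x86DD) : htons(0x0800);

        if (!l2)
            return -ENOMEM;
        memcpy(l2, nh->nh_encap, nh->nh_encap_len);
        /* the trailing two bytes of the encapsulation are the ethertype */
        memcpy(l2 + nh->nh_encap_len - 2, &proto, sizeof(proto));
    }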
    In linux_pull_outer_headers(), the ICMP header size was counted twice before skb_pull. This resulted in skb_pull
    failing for the neighbor solicit packets. This change fixes the same.
    Double committing the fix that's already checked in to the mainline

    Closes-Bug: 1392638

Change-Id: Iaa51b6973aac6a11b89e544151fbf41ac3d3109d
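
The shape of the fix, as an illustrative sketch (the real pull-length computation in linux_pull_outer_headers() involves more headers than shown here): count the ICMPv6 header exactly once when computing the length passed to the pull.

    /* illustrative sketch: account for the ICMPv6 header exactly once */
    pull_len = outer_hdr_len + sizeof(struct icmp6hdr);
    if (!pskb_may_pull(skb, pull_len))
        return -1;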
… only if layer 2 information needs to be added." into R2.0
…roducing L2_PAYLOAD flag to identify that the payload carried after the label is an L2 payload. The L2 packets also get subjected to DF bit and MSS adjust processing

Change-Id: I5a30a2b8a7900b5d271eb6de38cbddc2b3d11a48
…est hitting the L3 subnet route

Conflicts:

	dp-core/vr_datapath.c
	dp-core/vr_ip_mtrie.c
	dp-core/vr_nexthop.c
	dp-core/vr_route.c
	include/vr_datapath.h
	include/vr_nexthop.h
	include/vr_packet.h
	utils/rt.c
	utils/vrfstats.c

Change-Id: I378afbd5635ec72cc5da47c07568c85b9208ff66
…ddition

If the napi structure was not initialised during interface addition because
of errors, it should not be touched during the delete either, since doing a
netif_napi_del results in a crash. However, there is no reliable check to find
whether napi was initialised or not. Hence, for now we check for the existence
of the poll function, which would have been set if the napi structure was initialised.

Closes BUG: #1399577

Change-Id: I8cf439dc53805801a5ba301f542dedb2aaa5dee2
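
A minimal sketch of the guard (the vif_napi member name is illustrative; poll is the callback that netif_napi_add() installs on a standard struct napi_struct):

    /* only tear napi down if it was actually initialised; poll is set by
     * netif_napi_add(), so a NULL poll means init never happened */
    if (vif->vif_napi.poll)
        netif_napi_del(&vif->vif_napi);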
The main problem with caching more than one packet is the race between
forwarding paths adding new packets and agent flushing them in a different
core. One way this can be solved is by deferring the flushing till all
cores have seen the change of the flow state. However, this adds to latency.
Adding a spinlock (the only lock that is possible) is not preferable
since it is possible that one of the forwarding threads could be looping on
the lock availability.

The way we fix this is by doing a bit of compromise. We run a two stage
flush. In the first stage, which happens when agent has changed the state,
we take all packets that were queued before agent changed the state (
thus solving the latency) and in the second stage that runs in a defer, we
take care of all those packets that were queued at the same moment the flow
entry was changing state, but did not see the change. It is possible that
we will see some latency for these packets and a possible reorder, but until
we observe that this latency indeed occurs and conclude that we cannot live with
the anomaly, this solution seems to be the best.

This change also simplifies flow queue allocation and freeing by making the queue
an array instead of a list.

Many of the files that show up in the commit have a new header file inclusion
since we had to remove one inclusion from vrouter.h.

Closes Bug:#1387710

Change-Id: I4d08fc96d6d154f1bcdace1860c51982a324cdd9
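
A schematic C sketch of the two-stage flush described above (all names are illustrative; the actual vRouter defer API and flow entry layout differ in detail):

    /* stage one: runs as soon as agent changes the flow state; flushes only
     * the packets that were already queued at that point */
    static void flow_flush_stage_one(struct flow_entry *fe)
    {
        unsigned int queued = fe->fe_hold_count;

        flow_flush_packets(fe, queued);
        /* stage two runs from a defer, after every core has seen the new
         * state, and picks up packets that raced with the state change */
        schedule_defer(flow_flush_stage_two, fe);
    }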
During flow queue flush, if an error occurs, pnode's packet pointer was
not cleaned up. Because of the multi-stage flush logic we have now, a
subsequent flush will find this packet and try to flush it again, resulting
in potential double frees.

Closes BUG: #1401204

Change-Id: I848bbd62a53c129254eae65ab226cfb4e5baf2ff
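
A minimal sketch of the cleanup (names such as pl_packet and vr_pfree() follow the vRouter sources from memory and should be treated as illustrative):

    /* on a flush error, drop the packet and clear the node so a later
     * flush pass cannot free the same packet again */
    if (error) {
        vr_pfree(pnode->pl_packet, VP_DROP_FLOW_ACTION_DROP);
        pnode->pl_packet = NULL;
    }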
If the memory used to store prefixes is not set to zero, routes whose
prefix length is less than the full address length (/32 for ipv4, /128 in
the case of ipv6) can have junk values.

Change-Id: Id2dfe4cff3abf063acc820195c1e1f9fffacf1b6
Closes BUG: #1403746
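
A minimal sketch of the idea (allocation helper and size names are illustrative):

    /* zero the prefix storage so that routes shorter than the full address
     * length (/32 for ipv4, /128 for ipv6) carry no junk in the unused bits */
    prefix = vr_malloc(prefix_size);
    if (!prefix)
        return -ENOMEM;
    memset(prefix, 0, prefix_size);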
- Allow vrouter to coexist with Linux bridge on Centos 6.5.

Change-Id: I268760aecfa47e3baab48149cea3f5c82a92a12f
Closes-Bug:#1423061
…e entry

HOLD is an action that is allowed to be set in a flow entry. Agent can set HOLD
as action and expect the first packet to be trapped. To hold packets in an
entry there needs to be a hold list (an array, rather). While this hold list is
allocated when a new flow is created by kernel, the hold list is freed once all
the cached packets are flushed. A subsequent 'HOLD' set needs to have the hold
list allocated so that packets can be cached.

Closes BUG: #1425992

Change-Id: Ic32a03f402278a351c72cb6a4f72bafdaad2149c
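
Roughly what the fix amounts to, as an illustrative sketch (constant and field names are not necessarily the real ones):

    /* when agent sets HOLD on an existing entry, the hold list may have been
     * freed by an earlier flush; allocate it again so packets can be cached */
    if (action == VR_FLOW_ACTION_HOLD && !fe->fe_hold_list) {
        fe->fe_hold_list = vr_zalloc(VR_MAX_FLOW_QUEUE_ENTRIES *
                sizeof(struct vr_packet_node));
        if (!fe->fe_hold_list)
            return -ENOMEM;
    }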
Change-Id: I18af4b399937faa331af2e396e7e62dc69b937e4
…ain for dump if responses are still pending

Change-Id: Ide100425b8e895cfe3bab1d3d16391e4865bfa3e
Flow metadata, which is passed to the hold queue flusher, is not freed
after the flush.

Change-Id: I7764f39167403532fe01caa2b1285a183420893b
Closes-Bug: #1436798
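
A one-line sketch of the leak fix (the flmd metadata name and the flush helper are illustrative):

    /* the flusher owns the metadata once the flush completes; release it */
    flow_flush_hold_queue(router, fe, flmd);
    vr_free(flmd);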
…ad of

dropping them in vrouter.

Closes-Bug:#1438408

Change-Id: I7bcc1b24f82b9a5754e708c8586cc0273e110d21
It is logically possible that agent and datapath are trying to create
same flow simultaneously. If it so happens that agent gets the entry
that datapath created and tries to update that entry assuming that the
entry was created by it, then the hold count will never be compensated
by a corresponding acted count, and hence vrouter's perception of the
number of active hold entries can go wrong. To fix this, return error
to agent if the flow it tried to create already existed.

Other fixes:

. If agent is changing the flow state to 'hold' from any other state,
update the hold count entry.

. Export the hold count statistics to 'flow' utility

Change-Id: I24087baa5bf853b863f34e1b55882927d9114349
Partial-BUG: #1439069
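
An illustrative sketch of the main fix (helper names are hypothetical):

    /* agent asked to create a flow, but datapath created the same entry in
     * parallel; fail the request so hold-count accounting stays consistent */
    fe = flow_table_find(router, &key, &index);
    if (fe && request_is_create(req))
        return -EEXIST;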
Wrong pointer math made the ipv6 route table base overlap with the ipv4 route
table, resulting in unpredictable tables and memory corruption.

Change-Id: Ia31a555cf3abb108af31c1ee74c4cd7384570de6
Closes-BUG: #1439654
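
The class of bug, sketched in C (structure and variable names are illustrative): mixing typed-pointer arithmetic with byte offsets places the ipv6 base at the wrong spot, so the offset must be applied on a byte pointer (or as an element count, but not a mix).

    /* compute the ipv6 base in bytes on a char pointer and cast back;
     * mixing element-count and byte-count arithmetic put the ipv6 base
     * inside the ipv4 table and corrupted both */
    struct mtrie *v4_base = (struct mtrie *)table_mem;
    struct mtrie *v6_base =
        (struct mtrie *)((char *)table_mem + v4_table_bytes);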
Closes-BUG: #1444953

Change-Id: Ibf124cf46cf4b07b073494707ee4d0c63da2bed3
To make sure that we flush all the packets that are queued in a flow
entry, we run a defer function. If for any reason this defer was not
scheduled (because the function was called with no hold queue), the
defer data has to be freed.

Closes-BUG: #1436798
(cherry picked from commit 8c30ce9)
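
A minimal sketch of the fix (the defer helpers named here follow the vRouter sources from memory and should be read as illustrative):

    /* nothing queued, so the defer will never be scheduled; release the
     * defer data here instead of leaking it */
    if (!fe->fe_hold_list) {
        vr_put_defer_data(defer_data);
        return;
    }
    vr_defer(router, flow_flush_defer_cb, defer_data);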

Fix improper boundary checks and reference count leaks

Boundary checks allow one more label than the maximum, causing
memory corruption. Also, when a label is changed, the reference to the old
nexthop has to be released. Two harmless boundary checks in the nexthop
subsystem are also addressed.

Closes-BUG: #1446550
Change-Id: I9289265b8a843160fdfe6fffc3e52c131d9b2a4a
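
An illustrative sketch of the two fixes (table and limit names are hypothetical):

    /* reject label == max as well, not just label > max */
    if (label >= router->vr_max_labels)
        return -EINVAL;

    /* when a label is rewritten, drop the reference on the old nexthop */
    old_nh = router->vr_ilm[label];
    router->vr_ilm[label] = new_nh;
    if (old_nh)
        vrouter_put_nexthop(old_nh);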
Start accepting the new MPLS over UDP destination ports
allotted by IANA. We continue to accept both the old and the new
destination ports and continue to use a range of source ports.

closes-bug: #1420900

Change-Id: I9bff6fdafbe7e242e0a7aef582a856777879cc17
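
Schematically (the IANA-assigned MPLS-over-UDP destination port is 6635 per RFC 7510; the legacy port value and helper names shown are illustrative):

    #define VR_MPLS_OVER_UDP_OLD_DPORT  51234   /* legacy, illustrative value */
    #define VR_MPLS_OVER_UDP_NEW_DPORT  6635    /* IANA-assigned, RFC 7510 */

    /* accept either destination port; source ports stay a range */
    if (udph->dest == htons(VR_MPLS_OVER_UDP_OLD_DPORT) ||
        udph->dest == htons(VR_MPLS_OVER_UDP_NEW_DPORT))
        handle_mpls_over_udp(pkt);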
vRouter keeps a per-cpu flow hold count statistic. This statistic is
exported to user space processes to aid in debugging. While this
statistic is maintained for all the cpus, vRouter exports statistics
only for the first 64 cpus. The main reason why we limit the export to
only 64 cpus is that the messaging infrastructure within vRouter has a
notion of how much to allocate for each message based on the structure
name. This calculation is not dynamic since for most structures the
calculation remains the same. For flow, we allocate only for 64 cpus.

While making this calculation dynamic is a larger effort, for now we
will extend the memory allocated to accommodate 128 cpus (which is
reasonable). Also, split the regular flow request and the flow table
information requests so that we allocate only what we need.

Part of this commit also fixes the problem where vRouter was setting
the sandesh list size to the size of the memory rather than the number
of elements in the list, which resulted in sandesh encode failures in
the case of a large cpu count.

Change-Id: I3be31c10c86f52457199e5015d85ac2c7d76f5cf
Closes-BUG: #1458795
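
A rough sketch of the sizing and of the list-size fix (names are illustrative):

    #define VR_FLOW_MAX_CPUS_EXPORTED   128

    /* allocate hold-stat space for up to 128 cpus, and report the number of
     * elements, not the number of bytes, as the sandesh list size */
    resp->fr_hold_stat = vr_zalloc(VR_FLOW_MAX_CPUS_EXPORTED *
            sizeof(uint32_t));
    resp->fr_hold_stat_size = num_cpus;    /* element count, not byte count */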
When packets arrive from a vm/fabric, we try to pull all data up to
the first 8 bytes of a transport header into the first buffer so
that linear access to data is possible (flow keys are what we
look for in the transport header). We do this operation
without checking whether the packet is a fragment or not, and such
an unconditional attempt at a pull can result in pull failures for
fragments whose data length is less than 8.

Hence, pull only for packets that have a valid transport header and
a transport protocol we recognize.

Change-Id: Iaf8ec480bef045c774630a7c0cc9afbc867a6062
Closes-BUG: #1460218
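
The guard, roughly, shown here with the standard Linux helpers (vRouter's own header structures and offsets are abstracted away as illustrative names):

    /* pull the first 8 transport bytes only when the packet can actually
     * have them: not a non-head fragment, and a protocol whose header we
     * parse for flow keys */
    if (!ip_is_fragment(iph) &&
        (iph->protocol == IPPROTO_TCP ||
         iph->protocol == IPPROTO_UDP ||
         iph->protocol == IPPROTO_ICMP)) {
        if (!pskb_may_pull(skb, transport_off + 8))
            goto drop;
    }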
While doing the module cleanup in the case of a module init failure, the stats
memory needs to be cleaned up only if it has already been allocated. Accessing it
without validating whether it is allocated leads to a crash.

closes-bug: #1475558

Change-Id: Iaf0d67014174506d51bd9a46671e64d463d0db71
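
A minimal sketch (the stats pointer name is illustrative):

    /* init error path: the stats block may never have been allocated */
    if (vr_stats_mem) {
        vr_free(vr_stats_mem);
        vr_stats_mem = NULL;
    }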
divakardhar and others added 5 commits September 22, 2015 22:18
As of now, vRouter uses a link local port range from 32768 to 65535. This
port range is the default port range in Linux for ephemeral ports. This
range can be modified using sysctl. If it is modified using sysctl to a
different range, vRouter still uses the old range, and this leads to
failure of the link local service if the port allocated by Agent is outside the
above default range.

As a fix, the complete port range of 0 to 65535 is used.

Change-Id: I72a708b288cc6cb36bf3097ab87c11aebe71ca59
closes-bug: #1492408
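
In effect (constant names are illustrative):

    /* treat the whole 16-bit port space as eligible for link local flows,
     * instead of assuming the Linux default ephemeral range */
    #define VR_LINK_LOCAL_PORT_START   0
    #define VR_LINK_LOCAL_PORT_COUNT   65536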
* SET_ETHTOOL_OPS macro is removed in 3.16 kernels
* rxhash of sk_buff has been renamed to hash in 3.15 kernels.

This patch fixes these issues.

Change-Id: Ic0de873d0a5d869624ad9d5b883586222e8119cc
Closes-bug: #1383647
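
The compatibility shims amount to roughly this (the ethtool_ops variable and wrapper macro names are illustrative; SET_ETHTOOL_OPS and the rxhash-to-hash rename are as described above):

    #include <linux/version.h>

    #if LINUX_VERSION_CODE >= KERNEL_VERSION(3, 15, 0)
    #define vr_skb_get_hash(skb)    ((skb)->hash)      /* renamed in 3.15 */
    #else
    #define vr_skb_get_hash(skb)    ((skb)->rxhash)
    #endif

    #if LINUX_VERSION_CODE < KERNEL_VERSION(3, 16, 0)
        SET_ETHTOOL_OPS(dev, &vhost_ethtool_ops);      /* macro gone in 3.16 */
    #else
        dev->ethtool_ops = &vhost_ethtool_ops;
    #endif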
Change-Id: Iafc9f44956f5712b73bf690293ae7806a36780f1
Closes-Bug: 1516103