gnrc_ipv6_ext_frag: fix release on rbuf creation for n-th fragment [backport 2019.10] #12434

Conversation


@miri64 miri64 commented Oct 13, 2019

Backport of #12414

Contribution description

While testing #12375, @kb2ma noticed that in some rare cases, one of the nodes crashes. He informed me offline and I was able to reproduce that. After some more runs, I was able to pinpoint it to the following corner case:

  1. a subsequent fragment is received before the first and
  2. the reassembly buffer is already full when another fragment of a different datagram arrives, so an existing entry needs to be cached out to make room for the new reassembly.

Because of that, I added a test case to the IPv6 reassembly unittests (feee568).

In the end it turned out that when a subsequent fragment is received, the packet parts stored in the reassembly buffer are freed from the packet buffer as soon as the reassembly buffer entry is created. The freed (but still referenced by the reassembly buffer) region can then be reused for another incoming packet, e.g. another fragment. Since the reassembly buffer is full, that packet causes the existing reassembly buffer entry to be cached out, which releases the entry's data a second time. Because that data is no longer valid, the node crashes.
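For illustration, here is a minimal sketch of the faulty sequence described above. It is not the actual RIOT code: `rbuf_add()` is a hypothetical stand-in for the reassembly-buffer internals, and the snip-chain layout is glossed over.

```c
#include "net/gnrc/pktbuf.h"

/* Hypothetical stand-in: adds the fragment's data to the reassembly buffer
 * entry, which keeps references to snips of `pkt`. */
extern void rbuf_add(gnrc_pktsnip_t *pkt);

static void handle_nth_fragment(gnrc_pktsnip_t *pkt)
{
    rbuf_add(pkt);
    /* Faulty behavior: the whole packet is released even though the newly
     * created reassembly buffer entry still references parts of it. The
     * packet buffer may now hand this region to the next incoming packet.
     * If that packet forces the (full) reassembly buffer to cache out this
     * entry, the entry releases the already reused region again -> crash. */
    gnrc_pktbuf_release(pkt);
}
```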

Testing procedure

With a board of your choice (preferably native, since this test skips some cases on non-native boards due to an ethos bug [#12264]):

make -C tests/gnrc_ipv6_ext_frag flash
sudo make -C tests/gnrc_ipv6_ext_frag test

If feee568 is reverted, the test fails.

Issues/PRs references

Follow-up on #12375.

Adds a test case for when the following conditions cause a crash:

- a subsequent fragment is received before the first
- the reassembly buffer is currently filled up when another fragment of
  a different datagram arrives and thus needs to be cached out to make
  room for the new reassembly

(cherry picked from commit 07a6b54)

The IPv6 (extension) headers of the first fragment received are re-used
for the reassembled packet, so when receiving a subsequent packet we
need to distinguish whether we just want to release the payload or all
of the packet after the packet data was added to the reassembly buffer.

(cherry picked from commit 095e966)
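A minimal sketch of the distinction the second commit message describes (again with hypothetical names: `rbuf_add()` and `headers_kept_by_rbuf` stand in for the real reassembly-buffer logic, and the actual snip-chain handling is simplified):

```c
#include <stdbool.h>

#include "net/gnrc/pktbuf.h"

/* Hypothetical stand-in: copies the fragment's payload into the reassembly
 * buffer entry. */
extern void rbuf_add(gnrc_pktsnip_t *pkt);

static void add_fragment(gnrc_pktsnip_t *pkt, gnrc_pktsnip_t *payload,
                         bool headers_kept_by_rbuf)
{
    rbuf_add(pkt);
    if (headers_kept_by_rbuf) {
        /* The IPv6 (extension) headers of the first fragment received are
         * re-used for the reassembled packet, so only the payload snip may
         * go back to the packet buffer. */
        gnrc_pktbuf_release(payload);
    }
    else {
        /* Nothing of this fragment is referenced by the reassembly buffer
         * anymore, so the whole packet can be released. */
        gnrc_pktbuf_release(pkt);
    }
}
```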
@miri64 miri64 added the Area: network, CI: ready for build, Process: release backport, and Type: bug labels Oct 13, 2019
@miri64 miri64 requested review from benpicco and kb2ma October 13, 2019 13:16
@miri64 miri64 added this to the Release 2019.10 milestone Oct 13, 2019

@aabadie aabadie left a comment

git range-diff reports no difference between this PR and the original #12414.

ACK

@aabadie aabadie merged commit f4dd4ce into RIOT-OS:2019.10-branch Oct 14, 2019
@miri64 miri64 deleted the backport/2019.10/gnrc_ipv6_ext_frag/fix/n-th-with-full-rbuf branch January 17, 2020 12:49