Memory leak in jool_siit #166

Closed
toreanderson opened this Issue Aug 13, 2015 · 10 comments


@toreanderson
Contributor

I've noticed that there is a memory leak in Jool (at least in stateless mode). See the below graph:

[graph: memory-day]

From the start of the graph until approx. 13:15 the machine was mostly sitting idle. It was, however, receiving standard internet background radiation for the IPv4 pool routed to it (a /25). Most of those IPv4 addresses had no associated EAMs, so most of that traffic would loop through Jool multiple times until its TTL/HLIM was exceeded. So there was a steady stream of packets undergoing translation.

However, at approx. 13:15 I added an EAM pointing to a speedtest site and started downloading a big file over and over again. As you can see, memory usage skyrocketed extremely fast, and this caused trouble in userspace: fork() started failing because of memory allocation failures and so on.

I'll be happy to provide access to the test machine where this can be reproduced at will. If you're interested, send me your ssh pubkey by e-mail or something and I'll set up a user, or look me up on Jabber.

@ydahhrk
Member
ydahhrk commented Aug 13, 2015

Guess we're releasing 3.3.3 early. It's one of those days; Jabber isn't working for some reason.

I'll send the key shortly. I'm not sure if I'm going to be able to do much since I think a natural starting point would be /proc/slabinfo (all dynamic memory used by Jool is slab cached AFAIK), but it needs to be cat'd with privileges...

You can also or alternatively send me a "before" and "after" version of that file if you want.
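The before/after comparison of /proc/slabinfo suggested above can be scripted. Here is a rough diagnostic sketch (not part of Jool; it assumes the standard slabinfo column layout of name, active_objs, num_objs, objsize, ..., and the function names are made up for illustration) that parses two snapshots and lists the caches whose active object count grew:

```python
"""Compare two /proc/slabinfo snapshots and report caches that grew.

Diagnostic sketch only. Capture the snapshots with root privileges,
e.g. `sudo cat /proc/slabinfo > before.txt`, run the workload, capture
again, and compare the two files with the helpers below.
"""

def parse_slabinfo(text):
    """Map cache name -> active object count, skipping header lines."""
    caches = {}
    for line in text.splitlines():
        if line.startswith(("slabinfo", "#")):
            continue  # version banner and column legend
        fields = line.split()
        if len(fields) >= 3:
            # fields: name <active_objs> <num_objs> <objsize> ...
            caches[fields[0]] = int(fields[1])
    return caches

def grown_caches(before_text, after_text, threshold=0):
    """Return (cache, growth) pairs sorted by growth, largest first."""
    before = parse_slabinfo(before_text)
    after = parse_slabinfo(after_text)
    growth = {name: count - before.get(name, 0)
              for name, count in after.items()
              if count - before.get(name, 0) > threshold}
    return sorted(growth.items(), key=lambda kv: -kv[1])
```

A cache that keeps growing across repeated runs of the same workload is a leak candidate; a generic cache such as kmalloc-* growing would point at plain allocations rather than a dedicated object cache.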

Has it always been like this?
(are you using an official version or some development branch?)

@ydahhrk ydahhrk added this to the 3.3.3 milestone Aug 13, 2015
@toreanderson
Contributor

This is Jool 3.3.2; sorry, I forgot to mention it. The dev machine has gone down twice because of memory exhaustion. The first time I suspected it had to do with Jool but had not set up the graphs to confirm it, and the second time the spike at the end correlated with me sending lots of network traffic through it. Unfortunately, after rebooting it, it seems fine again: I can send lots of traffic through it and there is no apparent leakage. So I suspect there must be some kind of trigger that causes it to start leaking. I've been developing Puppet manifests for Jool on this machine, fiddling with settings and testing various things, so the trigger must be in there, somewhere... I'll let you know if I find out anything more.

@ydahhrk ydahhrk added a commit that referenced this issue Aug 14, 2015
@ydahhrk ydahhrk Issue #166.
Recently allocated packets were being forgotten in certain translation failure conditions.
16c16b5
@ydahhrk
Member
ydahhrk commented Aug 14, 2015

I found a significant memory leak.

It only triggered on certain failures, address translation failures among them, which is consistent with your background radiation hogging memory. It doesn't, on the other hand, explain your massive file transfer spike (I'm assuming it was a successful download).

I'll keep looking. Try using that commit and we'll release next week if everything looks tidy.

@toreanderson
Contributor

16c16b5 definitely improved matters. See the graph below. The machine was started up yesterday at 14:00 (automatically starting Jool v3.3.2). At 20:30 I upgraded to 16c16b5 and restarted Jool (including unloading and then reloading jool_siit.ko). At 02:00 tonight the machine was rebooted. So 16c16b5 successfully plugs the slow and constant leak I was seeing.

[graph: jool-mem]

I still haven't been able to reproduce the fast leaking that occurred during the large file download. I will revert my test machine to v3.3.2 and see if I can find out how to (reliably) reproduce it. If I do, I'll check if it occurs with 16c16b5 too. I'll keep you posted of any findings.

@toreanderson
Contributor

Just for reference, here are zoomed-in graphs of memory usage and network traffic during the fast leak mentioned in my initial comment. They show exactly the same time span, so they align vertically.

[graphs: memory-pinpoint and if_eth0-pinpoint, 1439463653-1439465969]

A couple of observations:

  1. It was leaking significantly more slowly than data was being transferred: it appears ~600MiB of memory leaked per 5 minutes, while the throughput was ~700Mbps (roughly ~26250MiB per 5 minutes).
  2. The Jool node was transmitting 8-9% more bits to the network than it was receiving from the network. I have no good explanation why; I would have expected it to be the other way around, and with a much smaller difference. (If I re-do the test, performing large downloads from an IPv6 site, the amount of data sent is indeed ~0.5% less than the amount received.)
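As a quick sanity check of observation 1 (treating MiB and MB interchangeably for estimation purposes), the figures above imply the leak accounted for only about 2% of the bytes moving through the translator:

```python
# Back-of-the-envelope check of the figures in observation 1 above.
throughput_mbps = 700            # observed throughput, megabits per second
window_s = 5 * 60                # the graphs use 5-minute intervals

# Megabytes moved per 5-minute window (matches the ~26250 figure quoted).
transferred_mb = throughput_mbps / 8 * window_s
leaked_mb = 600                  # observed leak per 5-minute window

leak_fraction = leaked_mb / transferred_mb
print(transferred_mb)                  # 26250.0
print(round(leak_fraction * 100, 1))   # 2.3 (percent of bytes leaked)
```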
@toreanderson
Contributor

I'm now quite certain I can explain what happened during the large file download:

First, I noticed that locally generated traffic from the Jool node was causing memory to leak. So I could easily cause fast leaking by running e.g. ssh joolnode cat /dev/zero > /dev/null from some other node in my network.

I also found evidence in my network graphs of a ~70Mbps data stream going from the data centre where the Jool node is located towards our office networks at the exact same time as the large file download was taking place. This explains the discrepancy I noted in my previous comment. In all likelihood, the cause was a tcpdump process, running in an xterm I had forgotten about, that was displaying all translated traffic. So once the download started, tcpdump started spewing data to stdout, generating locally originated SSH traffic that in turn caused memory to leak.

The good news is that I can easily reproduce this when running v3.3.2. Running 16c16b5, I can't. So the fix seems good. Sorry if I led you on a wild goose chase believing that 16c16b5 was an incomplete fix.

@ydahhrk
Member
ydahhrk commented Aug 17, 2015

:)

Thanks for your hard work!

Why is it leaking locally generated traffic, though? It shouldn't intercept it in the first place.

BRB, going to run some experiments.

@ydahhrk
Member
ydahhrk commented Aug 17, 2015

Oh, I get it. It's the TCP ACKs. I should have seen it coming.

The Jool node sends packets, which are neither intercepted nor leaked. The target answers with ACKs; Jool tries to translate them, fails, and leaks. The ACK is then returned to the kernel, so it is otherwise handled normally.

OK, going to start the release recipe right now.

@toreanderson
Contributor

Ack. It "fails" to translate because the destination address of the ACKs is in the implicit blacklist, correct?

This should actually be considered a security vulnerability, because the fast leaking and the inevitable OOM crash can be trivially triggered remotely by anyone who simply floods packets at a locally assigned address on the Jool node (which is easy to find in a traceroute). I exhausted the memory of my test node running Jool v3.3.2 in just a few minutes simply by running ping -f joolnode from a couple of external nodes.

@ydahhrk
Member
ydahhrk commented Aug 17, 2015

Ack. It "fails" to translate because the destination address of the ACKs is in the implicit blacklist, correct?

Correct.

Merged, released, closing.

@ydahhrk ydahhrk closed this Aug 17, 2015