TCP RST handling for FIN RCV #212
During some tests I've noticed that many TCP connections move to the V4 FIN RCV or V6 FIN RCV states and linger there until the TCP_EST timeout expires.
As far as I can see from some tcpdump captures, TCP RSTs are involved in all of these connections; for example, an RST sent in response to a FIN with no more packets to follow.
I know RFC 6146, section 3.5.2.2, states that RSTs should be handled only for connections in the ESTABLISHED state.
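For reference, the RFC behavior being described can be modeled as a tiny state machine. This is only an illustrative sketch with made-up names, not Jool's actual code:

```c
#include <assert.h>

/* Simplified model of the RFC 6146 section 3.5.2.2 state machine. */
enum tcp_state { ESTABLISHED, V4_FIN_RCV, V6_FIN_RCV, V4V6_FIN_RCV, TRANS };

#define TCP_EST   (2 * 60 * 60) /* established lifetime: 2 hours  */
#define TCP_TRANS (4 * 60)      /* transitory lifetime: 4 minutes */

struct session {
	enum tcp_state state;
	unsigned int lifetime; /* seconds */
};

/*
 * Per the RFC, an RST only matters in ESTABLISHED: the session moves to
 * TRANS and the lifetime drops to TCP_TRANS.  In the FIN RCV states the
 * RST changes nothing, so the session keeps its 2-hour timer -- which is
 * the behavior this issue is about.
 */
void handle_rst_rfc6146(struct session *s)
{
	if (s->state == ESTABLISHED) {
		s->state = TRANS;
		s->lifetime = TCP_TRANS;
	}
	/* else: RST ignored; state and lifetime untouched */
}
```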
What I mean has already been discussed on BEHAVE:
Of course, RST handling needs special attention to sequence numbers to mitigate attacks, and this introduces another topic: firewalling/filtering on a NAT64 box. Many considerations have already been written about this too:
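To illustrate the kind of check involved, here is a plain RFC 793-style in-window test for an RST's sequence number. Jool does not track these values today, so `rcv_nxt` and `rcv_wnd` are hypothetical inputs:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Wrap-safe "a comes before b" comparison for 32-bit sequence numbers. */
static bool seq_before(uint32_t a, uint32_t b)
{
	return (int32_t)(a - b) < 0;
}

/*
 * Accept an RST only if its sequence number falls inside the receive
 * window [rcv_nxt, rcv_nxt + rcv_wnd).  Stricter schemes (RFC 5961)
 * additionally require seq == rcv_nxt and answer everything else with a
 * challenge ACK.
 */
bool rst_seq_acceptable(uint32_t seq, uint32_t rcv_nxt, uint32_t rcv_wnd)
{
	return !seq_before(seq, rcv_nxt) &&
	       seq_before(seq, rcv_nxt + rcv_wnd);
}
```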
I don't know if Jool already implements sequence checks internally, so if you think my proposal could be taken into account, I would also ask about that.
Thank you for considering my request.
Jool does not currently track TCP sequence numbers at all (not even in the upcoming 3.5 release), but this will have to be implemented eventually if we want to support FTP.
I don't remember the details from when I started trying to fix FTP, but I do recall thinking it was going to be a lot of work. In fact, the effort might be better spent merging Jool into the kernel, because that should give us access to iptables's sequence-tracking code. (And also to the FTP NAT44 ALG.)
Before I say anything else, I'd like to point out that Dmitry Anipko did not suggest the state should be moved to TRANS; he suggested that only the session lifetime be lowered to TCP_TRANS.
The distinction is important because the TRANS state discards the knowledge that a FIN has already been seen.
Edit: Oops. I failed to realize you were linking to Murari Sridharan's message, not to Dmitry's. I do not understand why Murari changed the idea, but see below.
It won't work as advertised 100% of the time:
Let's say A and B are exchanging packets. N is in the V4 FIN RCV state because it already saw A's FIN. B sends an RST, so N moves to TRANS; but then a packet arrives before TCP_TRANS expires, which moves N back to ESTABLISHED.
We have a problem now; N is in ESTABLISHED, the FIN it saw has been forgotten, and the session is back to the full TCP_EST lifetime.
Dmitry's solution doesn't have this problem as far as I can tell. Its only drawback is that the RST is going to break the connection if the endpoints do not exchange data during the TCP_TRANS period.
(But if my home's ISP is of any indication, this is normal even without NAT64s in the way :P. Then again, it's pretty brittle in general.)
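To make that failure mode concrete, here is a sketch (hypothetical names again) of the "move to TRANS on RST" variant. Since RFC 6146 sends a TRANS session back to ESTABLISHED on any non-RST packet, the FIN RCV history is erased:

```c
#include <assert.h>
#include <stdbool.h>

enum tcp_state { ESTABLISHED, V4_FIN_RCV, V6_FIN_RCV, V4V6_FIN_RCV, TRANS };

/*
 * The "move to TRANS on any RST" variant.  A stray packet then bounces
 * the session back to ESTABLISHED (as RFC 6146 mandates for the TRANS
 * state), erasing the fact that a FIN was already seen.
 */
enum tcp_state next_state(enum tcp_state st, bool is_rst)
{
	if (is_rst)
		return TRANS;
	if (st == TRANS)
		return ESTABLISHED; /* FIN RCV history is lost here */
	return st;
}
```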
I really don't have any objections to coding this, but it would not be enabled by default.
Noooooope :). Seqnum tracking is a lot of code for just this purpose, and since the filtering topic raised a lot of controversy I'd say it's fairly safe to assume the IETF rejected the idea during one of the meetings. (I cannot say the same for Dmitry's proposal however, as it kind of looks like they forgot about it.)
But if we end up implementing seq checking anyway due to FTP, then improving RST analysis immediately after would be rather inevitable.
I'm in for implementing Dmitry's proposal, which is not about moving to the TRANS state, but about lowering the session lifetime to TCP_TRANS while leaving the state alone.
Filtering bad RSTs may eventually happen, but I'd say it's unlikely in the near future. (unless someone can propose some clever code)
Thanks @ydahhrk for your detailed reply.
Well, that would be perfect; actually, I don't know how I ended up focusing on Murari's message instead of Dmitry's proposal.
I totally agree with your conclusions; I'm ready to test the new code as soon as you release it.
Do you plan to introduce a new (non-RFC6146) state to also handle the case where data keeps flowing after the RST? I mean something that allows reverting the timer to TCP_EST.
See, that's the beauty of Dmitry's solution: I don't have to. Aside from the special RST handling, the state machine stays exactly as RFC 6146 defines it, so any subsequent packet refreshes the session lifetime back to TCP_EST as usual.
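In code terms (again just a sketch with made-up names), Dmitry's variant touches only the timer, so the ordinary RFC 6146 per-packet lifetime refresh already acts as the "revert":

```c
#include <assert.h>

enum tcp_state { ESTABLISHED, V4_FIN_RCV, V6_FIN_RCV, V4V6_FIN_RCV, TRANS };

#define TCP_EST   (2 * 60 * 60) /* 2 hours   */
#define TCP_TRANS (4 * 60)      /* 4 minutes */

struct session {
	enum tcp_state state;
	unsigned int lifetime; /* seconds */
};

/* Dmitry's variant: an RST in a FIN RCV state shortens the timer only;
 * the state is left alone. */
void handle_rst_dmitry(struct session *s)
{
	s->lifetime = TCP_TRANS;
}

/* The ordinary refresh RFC 6146 already requires on traffic; if more
 * data flows after the RST, the timer naturally goes back to TCP_EST. */
void handle_data_packet(struct session *s)
{
	s->lifetime = TCP_EST;
}
```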
These are the two possible outcomes (that I know of) that involve an RST arriving during one of the V4 FIN RCV/V6 FIN RCV states:
1: B is legitimately trying to end the connection using an RST
Background: N is in one of the FIN RCV states. B sends an RST, so N lowers the session lifetime to TCP_TRANS. Nothing else arrives, so the session expires 4 minutes later, which is exactly what we want.
Well, M can keep the session alive by sending more forged packets, but the state machine was already vulnerable to this anyway.
2: M is trying to tamper with the connection by sneaking in an RST
Same background as 1.
We're now back to where we were before the RST; M's attack did nothing.
Assuming, that is, that B has something to send within 4 minutes. But if you're seeing lots of clients thinking they're being clever by ending with RST instead of FIN, breaking the ones that fulfill all of the following requirements to achieve better resource utilization is probably a reasonable tradeoff:
(It can't be the default, though, because it encourages sloppy TCP stacks.)
I'll get to coding.