Issues to consider (Handley) #147

csperkins opened this Issue Mar 21, 2018 · 2 comments



csperkins commented Mar 21, 2018

Input from Mark Handley, distributed with permission:

Hi Aaron, Colin,

As I mentioned, I'm playing with ideas for a new TCP-replacement transport protocol at the moment. It doesn't really have a name right now, but let's call it NeoTCP. Likely it won't go anywhere, but some of the ideas differ a bit from other transport protocols, so it may serve as a useful test for the TAPS API.

  1. Pre-authentication. I want to put the ssh server on my home machine on the Internet without it being constantly attacked with password-guessing attacks. NeoTCP implements very simple pre-authentication, where my laptop and my home machine share a secret. When connecting, the client sends the pair (nonce, Hash(nonce, secret)). The server won't send a syn/ack equivalent unless the secret matches one of the secrets in its cache.
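The (nonce, Hash(nonce, secret)) exchange can be sketched as follows. This is a hypothetical illustration, not NeoTCP's actual wire format: the function names, the choice of HMAC-SHA256 as the hash construction, and the 16-byte nonce are my assumptions.

```python
import hashlib
import hmac
import os

def make_preauth_token(secret: bytes) -> tuple[bytes, bytes]:
    """Client side: pick a fresh nonce and tag it with the shared secret."""
    nonce = os.urandom(16)
    tag = hmac.new(secret, nonce, hashlib.sha256).digest()
    return nonce, tag

def check_preauth_token(nonce: bytes, tag: bytes,
                        secret_cache: list[bytes]) -> bool:
    """Server side: only respond with a syn/ack equivalent if the tag
    matches one of the secrets in the cache."""
    return any(
        hmac.compare_digest(tag, hmac.new(s, nonce, hashlib.sha256).digest())
        for s in secret_cache
    )
```

Using a fresh random nonce per connection attempt keeps the tag from being replayable across attempts, and `hmac.compare_digest` avoids a timing side channel on the comparison.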

  2. Source-spoofing protection. NeoTCP supports the equivalent of SYN cookies, but with a 4-way handshake to avoid the potential deadlock associated with SYN cookies. I'm also playing with the idea of allowing middleboxes to interpose a challenge in response to the SYN, and when the challenge response arrives from the client, only then is the packet passed through to the actual server, and the handshake continues. I'm not clear yet if including middleboxes in some way impacts the API.

  3. Encryption. The goal is to do tcpcrypt-like encryption, but provide hooks to higher layers so they can do whatever full authentication a particular application needs.

  4. Redirect. Redirecting connections is a very common application-layer function, but really you would prefer to redirect before you've set up a connection. Such redirection obviously needs some form of authentication (maybe this contradicts 3 - not clear where I'm going on this yet). What would the API be for a redirect server? It's not a full server; it listens, but simply sends stateless syn/ack redirects, rather than accepting connections.

  5. Acking. NeoTCP supports multi-path using two sequence spaces - subflow packet sequence numbers and data sequence numbers, like MPTCP. Unlike MPTCP, the data sequence number ack indicates that the receiving application received the packet (with MPTCP, it only indicates reception by the receiving stack).

  6. Pulling. NeoTCP takes some lessons from NDP, and is a receiver-driven protocol. When several senders are sending to one receiver, this allows the receiver to choose precisely which senders to pull packets from at any time. This generalizes the QUIC/HTTP2 priorities to support multiple different senders. You can use this to do aggregate congestion control for incoming traffic, avoiding self-congestion. It's up to the receiving application to determine priorities, and to the transport to decide how to use those priorities.

  7. Close is a total mess with TCP. It's even worse with a user-space protocol - your application may quit when close returns; data may not have been received yet and needs retransmitting, but there's no-one left to do it. By default, with NeoTCP close won't return until the receiving application has received the sent data. Obviously you need some way to avoid deadlock when the receiver has died; that timeout is application-specific.
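Those close semantics - block until the peer application has consumed everything (which the data-level acks of item 5 can signal), bounded by an application-chosen timeout to avoid deadlock - could be sketched like this; `conn` and its two methods are hypothetical:

```python
import time

def close_connection(conn, timeout: float) -> bool:
    """Block until the peer *application* has received all sent data,
    retransmitting as needed, or give up after `timeout` seconds on the
    assumption the receiver has died.

    `conn` is a hypothetical connection object exposing
    `all_data_app_acked()` and `retransmit_pending()`."""
    deadline = time.monotonic() + timeout
    while not conn.all_data_app_acked():
        if time.monotonic() >= deadline:
            return False  # receiver presumed dead; data may be lost
        conn.retransmit_pending()
        time.sleep(0.05)  # illustrative poll interval
    return True  # now safe for the application to exit
```

Returning a boolean (rather than raising) lets the caller distinguish a clean close from a timed-out one before exiting.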

  8. Finally, we've done a lot of work over the last couple of years on understanding MPTCP for web traffic. It's ugly. You have a whole load of short objects, and the application has a priority order for them. But the paths may have very different latency. If you send any packet from the highest priority object on the high latency path, you can delay the object significantly, and it can really hurt the overall page load time, as other requests get stalled waiting for that object. The best you can do is to run a per-packet scheduler, and ask the question "if I send the next packet from this object on the higher latency path, will it arrive after the rest of the object sent on the lower latency path?" If the answer is yes, you shouldn't send that packet on the higher latency path. Next you have to consider the second highest priority object, and ask if you should instead send a packet from that object on the higher latency path. You should, if it will arrive before both the highest priority object and all the rest of the second highest priority object. If not, you shouldn't, and you should consider the third highest priority object, and so on. Obviously, this is complicated. At the very least, the transport protocol needs to know which packets are from which object, and what those object priorities are. At the receiver, objects may arrive out of order, but packets within an object must be in-order.

There are probably more things, but that's what comes to mind right now.


@csperkins csperkins added the discuss label Mar 21, 2018



chris-wood commented May 16, 2018

FWIW, I reached out to Mark to inquire about items 1-4, but did not hear back. @csperkins would you be willing and able to follow up?



britram commented May 16, 2018

@chris-wood to split this into multiple issues
