Privacy for connection IDs #598
On Tue, Jun 13, 2017 at 7:53 PM, Victor Vasiliev wrote:
Well, there is a third design, in which the server issues its public RSA
key (or ECDH share), and the client encrypts its connection ID for every
packet. This is obviously impractical due to computational costs, but I
am not sure that either of proposed solutions is practical either. I
suspect there might be a cryptographic construction that will do that
too and is cheaper than that.
Yeah, I think this is pretty clearly impractical.
Is this issue about the ways of doing this specifically, or are we
discussing the threat model and whether it is useful here too?
I raised this issue to determine whether there was interest in doing this
if we could make it practical. I got my answer to that, so I am planning
on coming back with some more specific ideas. Help welcome of course, but
maybe not on this issue.
I think option 1 is problematic: NAT rebinding on UDP ports is more
aggressive than on TCP, so this basically breaks QUIC for NATs that do
rapid rebinding. It causes a connection failure: after a NAT rebinding,
the server will close the connection with a Public Reset packet.
On Mon, Jun 12, 2017 at 5:12 PM, ianswett wrote:
As I stated at the interim, #2 terrifies me for the following reasons
(mostly above):
1. It's complex.
2. It's practically impossible to prevent connection hangs,
particularly in the presence of bursty loss.
3. It has substantial byte overhead.
4. I think it creates a form of memory attack, since you need to
buffer as many connection ids as supplied.
5. It requires a load balancer to do crypto, which adds complexity and
CPU, as well as introducing another deployment challenge.
Option 1 is not ideal either, but I believe it's vastly more deployable.
I wonder whether we can separate the NAT rebinding case. We assume that these bindings change in unpredictable ways, but they are mostly driven by timers. A very simple change to the current spec would be to burn a new connection ID not just on a "connection event", but also when the client sends a packet after some N seconds of silence -- maybe 10 seconds. Pretty much the same as a keep-alive timer.
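The keep-alive-style rotation proposed above could look roughly like the following sketch. All names here are illustrative (nothing in the spec defines this helper), and the 10-second threshold is just the value suggested in the comment:

```python
IDLE_THRESHOLD = 10.0  # seconds of silence before burning a new conn_id (illustrative)

class ConnIdRotator:
    """Toy client-side helper: switch to a fresh connection ID after idle gaps."""

    def __init__(self, id_pool):
        self.pool = list(id_pool)       # connection IDs previously issued by the server
        self.current = self.pool.pop(0)
        self.last_send = None

    def conn_id_for_next_packet(self, now):
        # Burn a new connection ID if we were silent long enough that a
        # NAT rebinding could plausibly have happened in the meantime.
        if (self.last_send is not None
                and now - self.last_send > IDLE_THRESHOLD
                and self.pool):
            self.current = self.pool.pop(0)
        self.last_send = now
        return self.current
```

For example, a client sending at t=0 and t=5 keeps the same ID, while a send after a 15-second gap rotates to the next one.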
This is unlikely to move in v1. I'm marking this v2. New information will be needed for us to move this back into consideration. Note that we do have considerable flexibility in the values that we choose to put in the connection ID slot, so it's not impossible to consider a scheme, but the multi-party nature of the communication involved (client-load balancer-server) and the constraints we set ourselves (size, computation) mean that this is currently in the too-hard basket.
I and some others have been looking at how to protect the connection
ID from correlation between any two packets. The basic threat model
here is that it's possible to have unknown network transitions and so
if you just have the client initiate non-linkage events, then you
might have unknown linkage. Thus it should be, to the extent
practical, impossible to link any two packets. We can -- and I'm sure
will -- debate whether this is the right threat model, but the topic
of this email is just techniques.
There are two main sources of potential inter-packet linkage: the
connection ID and the packet number.
The current design, which allows you to occasionally change the
conn_id and then has a random PN increment, doesn't scale well to this
scenario.
I'm aware of two primary overall designs here, either of which will
work but neither of which is a complete no-brainer.
1. Omit the connection ID and then directly encrypt the packet number
with a per-connection key. This basically forfeits the automatic
connection mobility feature that the conn_id provides, because the
server/LB needs to use the 5-tuple to recover the key.
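As a rough sketch of the first design's data flow: the wire carries no connection ID, only a hidden packet number that the server decrypts with a key it looked up by 5-tuple. This toy uses an HMAC-derived pad purely for illustration; a real design would use a proper cipher, and all function names are hypothetical:

```python
import hashlib
import hmac
import os
import struct

def encrypt_pn(conn_key: bytes, pn: int) -> bytes:
    """Hide a 32-bit packet number under a per-connection key (toy construction)."""
    nonce = os.urandom(4)                                    # fresh randomness per packet
    pad = hmac.new(conn_key, nonce, hashlib.sha256).digest()[:4]
    ct = bytes(a ^ b for a, b in zip(struct.pack(">I", pn), pad))
    return nonce + ct                                        # 8 bytes on the wire

def decrypt_pn(conn_key: bytes, wire: bytes) -> int:
    # The server/LB recovered conn_key from the 5-tuple, since the
    # packet carries no connection ID at all.
    nonce, ct = wire[:4], wire[4:]
    pad = hmac.new(conn_key, nonce, hashlib.sha256).digest()[:4]
    return struct.unpack(">I", bytes(a ^ b for a, b in zip(ct, pad)))[0]
```

Note how the per-packet randomness is what prevents linking two packets, and also where the extra bytes the email mentions come from.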
2. Have the server provide the client with a pool of connection
tokens, each of which is actually a wrapped version of the
(conn_id, PN) pair. The client uses one token per packet, and the
server/LB then unwraps the pair upon packet receipt. In order for
this to work, the server has to periodically replenish the pool,
though it's not a disaster if the client runs out, because we can
probably invent some way to reuse a token with a PN delta, though
at some privacy cost.
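The second design's wrap/unwrap step could be sketched like this. Again this is only illustrative: the masking uses an HMAC pad where a real deployment would use AES (as the email suggests), the key sharing between server and LB is assumed, and the function names are made up:

```python
import hashlib
import hmac
import os
import struct

KEY = os.urandom(16)  # assumed to be shared between the server and the LB

def wrap_token(conn_id: int, pn: int) -> bytes:
    """Server side: wrap a (conn_id, PN) pair into an opaque 20-byte token."""
    nonce = os.urandom(8)
    payload = struct.pack(">IQ", conn_id, pn)                # 12-byte plaintext pair
    pad = hmac.new(KEY, nonce, hashlib.sha256).digest()[:12]
    return nonce + bytes(a ^ b for a, b in zip(payload, pad))

def unwrap_token(token: bytes):
    """LB/server side: recover (conn_id, PN) from a token on packet receipt."""
    nonce, ct = token[:8], token[8:]
    pad = hmac.new(KEY, nonce, hashlib.sha256).digest()[:12]
    return struct.unpack(">IQ", bytes(a ^ b for a, b in zip(ct, pad)))
```

Each token is single-use from a server-issued pool; the unwrap is the one symmetric-crypto operation per packet that the LB has to perform in this design.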
Neither of these designs is 100% ideal. The first isn't at all
consistent with automatic mobility or recovering from NAT rebinding,
which was the motivation for connection ID in the first place
(basically, you have no connection ID). It also may come at some
additional bandwidth cost because the PN is used to compute the
per-packet nonce, but at absolute worst it's 16 extra bytes per
packet.
The second doesn't require giving up mobility/rebinding resistance,
but comes at some additional costs, specifically (1) some protocol
complexity, (2) bandwidth overhead, because you need to send the tokens
in the reverse direction [best estimate is 16-24 overall additional
bytes on the wire in both directions], and (3) the LB will have to do
some crypto to recover the connection ID (most likely one AES operation).
We have detailed designs for both of these, though it's possible they
can be optimized further. Happy to go into these, but I think it's
more useful to discuss this at a high level rather than going into
the detailed mechanisms at this time.