Mitigate hash flooding attacks #171
I feel like this is not a concern. The correct way to prevent any DoS attack is to distribute computational time fairly between clients, so that any client can only slow down service for itself.
About your claim of "pure" functions: you can keep the nondeterminism contained inside each hash table. Basically, the hash of any object in Roc is deterministic, but each hash table (in the default implementation) has a secret key that even the language user cannot access. Inside a hash table, the actual key used to decide which bin to put an object in is derived from the object's hash combined with that secret key. One important point: preserve the order of objects inserted, so that iteration over the hash table is deterministic.
What are the ordering requirements for a standard hashmap going to be, or is the only guarantee a per-type determinism (for referential transparency)?
Another mitigation in the HTTP case is using a record internally for standard or ubiquitous headers (Host, User-Agent, Authentication, Content-Type, etc.), leaving a hashmap only for lesser-used headers. At the protocol level, IIRC, HTTP/2 has a standard compression dictionary for such headers, so the server's compressed stream doesn't even need to communicate that "this bit sequence means Host" - any client already has a "codebook" for that (my term; I don't recall the actual one). So in other words, unrolling header maps is already literally standard practice. Such a record could be exposed, but in any case header lookup by name could use a …

Finally, there would be some value in a hypothetical Roc HTTP server platform in requiring that endpoint handlers declare the headers (and query params and body format, and the same for the response) that they will, or even can, look at - primarily to make the contract very visible, and convertible and statically verifiable to and from OpenAPI, but also to finally let us stop writing endpoint handlers that deal with low-level IO concerns, not to mention allowing some optimizations, like skipping decoding or storing headers that neither the platform nor the app's handler will ever use.
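As an illustration of the record-plus-map idea, here's a minimal sketch in Rust rather than Roc (all names here are hypothetical, not any real platform's API): common headers get dedicated record fields and never touch the hash map at all.

```rust
use std::collections::HashMap;

// Hypothetical sketch: ubiquitous headers become fixed record fields,
// and only lesser-used headers go into the hash map.
#[derive(Default, Debug)]
struct Headers {
    host: Option<String>,
    user_agent: Option<String>,
    content_type: Option<String>,
    other: HashMap<String, String>, // lesser-used headers only
}

fn insert_header(h: &mut Headers, name: &str, value: &str) {
    // Common headers skip the hash map entirely.
    match name.to_ascii_lowercase().as_str() {
        "host" => h.host = Some(value.to_string()),
        "user-agent" => h.user_agent = Some(value.to_string()),
        "content-type" => h.content_type = Some(value.to_string()),
        rare => {
            h.other.insert(rare.to_string(), value.to_string());
        }
    }
}

fn main() {
    let mut h = Headers::default();
    insert_header(&mut h, "Host", "example.com");
    insert_header(&mut h, "X-Custom", "hi");
    assert_eq!(h.host.as_deref(), Some("example.com"));
    assert_eq!(h.other.len(), 1);
}
```

In this sketch, only attacker-supplied or unusual header names ever reach the hash map, which is why the follow-up comment below notes that flooding via made-up headers is still possible.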
Cool, I hadn't heard of that technique - makes sense! That should improve performance in general, although unfortunately I don't think it would help with hash flooding in particular. Essentially all the collisions would necessarily be made-up headers anyway (since if common headers collided, this wouldn't need to be an attack; it would just happen all the time naturally!) and since the uncommon headers would all end up in the "lesser-used headers" hashmap, we'd be in the same situation. (Of note, just creating the "lesser-used headers" hashmap would still result in lots of collisions and lots of linked lists being created, if linked lists were used for collision resolution... so the concern still stands in that scenario!) That said, I think the idea of parsing common headers directly into a record instead of into a hashmap makes a ton of sense, and I want to use it in an HTTP server platform! 😃
This is also a cool idea! In case it ends up ruling out some important but rare use cases, maybe the platform could offer something like "here's the raw un-parsed header, feel free to parse it into something else."
Latest idea: apparently what Java 8 does is to mitigate hash flooding at the data structure level (rather than the hash function level): once a bucket accumulates more than a threshold number of entries (8, in Java 8's HashMap), its linked list gets converted into a balanced tree, so lookups in a flooded bucket degrade to O(log n) instead of O(n).
This seems like a strategy that could work well for Roc.
Of note, Java 8 uses linked lists for collision resolution normally, so falling back on a different data structure is straightforward there. However, we'd likely want to use more efficient hash maps which don't use linked lists, so "fall back on a tree" wouldn't necessarily be as straightforward. The tree could be a pointer stored on the heap (after the refcount, for example) which is initially null; we'd make an allocation for it the first time we hit 4 collisions. Alternatively, it could be a Vec of pointers to ordered maps, where each colliding key allocates its own map and pushes it onto the Vec for later deallocation. If we don't have a natural way to access collision counts, I assume there has to be a way to calculate them without storing them in the data structure. For example, maybe we could pass the "resolve a collision" function an argument for "number of collisions encountered so far", and each time we call it to find another slot, we increment that argument.
Note that non-determinism doesn't have to be observable: you probably still want ASLR for Roc. Similarly, a hash map can maintain well-defined iteration order but still use randomization internally. That's what new versions of Python and JS do: store two arrays inside, one holding the values in insertion order, the other holding indices in hash-map order.
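For illustration, here's a minimal Rust sketch of that two-array layout (a toy, not CPython's or any JS engine's actual implementation): iteration always follows insertion order, while the internal index can use any hash layout, including a randomized one.

```rust
use std::collections::HashMap;

// Toy sketch: values live in a Vec in insertion order; the HashMap only
// maps keys to positions, so its internal (possibly randomized) layout
// is never observable through iteration.
struct OrderedMap<K, V> {
    entries: Vec<(K, V)>,      // values in insertion order
    index: HashMap<K, usize>,  // key -> position in `entries`
}

impl<K: std::hash::Hash + Eq + Clone, V> OrderedMap<K, V> {
    fn new() -> Self {
        OrderedMap { entries: Vec::new(), index: HashMap::new() }
    }

    fn insert(&mut self, key: K, value: V) {
        if let Some(&i) = self.index.get(&key) {
            self.entries[i].1 = value; // overwrite in place, order unchanged
        } else {
            self.index.insert(key.clone(), self.entries.len());
            self.entries.push((key, value));
        }
    }

    fn get(&self, key: &K) -> Option<&V> {
        self.index.get(key).map(|&i| &self.entries[i].1)
    }

    // Iteration walks the Vec, so order is deterministic insertion order.
    fn keys(&self) -> impl Iterator<Item = &K> + '_ {
        self.entries.iter().map(|(k, _)| k)
    }
}

fn main() {
    let mut m = OrderedMap::new();
    m.insert("b", 2);
    m.insert("a", 1);
    m.insert("c", 3);
    let keys: Vec<&&str> = m.keys().collect();
    assert_eq!(keys, vec![&"b", &"a", &"c"]);
    assert_eq!(m.get(&"a"), Some(&1));
}
```

The design cost is one extra indirection per lookup; the benefit is that a randomly seeded hasher no longer leaks into observable iteration order.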
My gut feeling is that Rust is quite suboptimal here, and that basically no one has bothered to replace SipHash with something faster and still secure enough. Which is surprising, but I don't really remember any substantial effort to find something faster. I'd look at Go. Well, I have looked at Go (https://go.dev/src/runtime/asm_arm64.s - search for memhash), and they seem to use AES hardware instructions, which is what I'd expect a fast hash to use. Structurally, Go should care a lot about both DoS resistance and the performance of its default hash function (in contrast, Rust structurally cares about the performance of a non-DoS-resistant hash, because that's what both rustc and Firefox use: https://nnethercote.github.io/2021/12/08/a-brutally-effective-hash-function-in-rust.html).
This is actually what we ended up going with, and in fact it uses ASLR under the hood to seed the randomness! 😃 Full credit to @bhansconnect for that idea. 🎉 Speaking of which, I guess we can close this issue now? |
The main goal of this issue is to make sure we don't forget to take this into account when implementing Roc's standard hash maps! It can be closed when we have a hash flooding mitigation strategy in place in the standard library.
Hash flooding mitigation is a complicated issue, but I did a bunch of investigation and thinking about it in 2019, and I want to record some thoughts here to get them out of my head. This can all be revisited when we get to adding an actual hashmap implementation to Roc.
Hash Flooding Attacks
So hash flooding attacks are a thing. The basic idea is to DoS a web server by sending it HTTP payloads which are malicious in a very specific way: the attacker knows the server will parse those payloads into hash maps, and the payloads are designed to produce so many hash collisions that the hash map becomes effectively a linked list.
Lookups into linked lists are super slow compared to hash lookups, and there's apparently been a proof-of-concept of being able to lock up an entire server (Rails or PHP or something - I forget) with a single POST. This suggests an individual could theoretically DoS a vulnerable server without needing a whole botnet to turn DoS into DDoS.
Hash flooding attacks have been demonstrated in proof-of-concept laboratory attacks against real-world systems (e.g. Perl, Rails), but I have not been able to find any documented cases of a real production system actually being hit by one. Nevertheless, I'm convinced the concern is worth taking seriously.
Hash Flooding Mitigation: SipHash
In this talk the people behind some of the proofs-of-concept (and also someone who writes software for DNS servers, which are by default vulnerable to these attacks) talk about defending against hash flooding attacks.
The mitigation strategy they recommend is to use the SipHash hashing algorithm, which was published by researchers working on these hash flooding proofs-of-concept. SipHash has these characteristics:
I really want to avoid having random hashing in Roc. We can get a lot of benefits from everything in the language being a pure function - in terms of caching, reproducibility of test runs, etc. - and using a randomly seeded hasher would violate some of those assumptions. For example, if a different seed were chosen before every test run, then we'd have tests that are both "pure" and also potentially flaky.
Rust uses SipHash by default.
Hash Flooding Mitigation: Tree Conflict Resolution
The arguably more robust defense against hash flooding attacks is to do collision resolution using a tree rather than a linked list. This way, instead of the collisions causing the hash table to degrade to be effectively a linked list, they cause it to degrade to be effectively a tree - which of course does not take nearly as long to search.
In the talk, this strategy is acknowledged extremely briefly but dismissed as something that isn't really used in practice, because hash map implementors don't want to bother implementing tree-based collision resolution because trees are more complicated than linked lists.
Tree-based resolution sounds worth the effort for Roc. It sacrifices neither referential transparency nor performance - because we can put it behind a conditional based on number of collisions so far (e.g. don't switch from linear probing to a tree until after like 16 collisions, which suggests something fishy is happening) that can be correctly branch predicted basically every time outside of an actual hash flood attack.
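As a rough illustration of the fall-back-to-a-tree idea, here's a toy Rust sketch (not a proposal for Roc's actual representation; the threshold, names, and separate-chaining layout are all made up for clarity): each bucket starts as a flat list, and only converts to an ordered tree once its length crosses a threshold that honest traffic essentially never reaches.

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::BTreeMap;
use std::hash::{Hash, Hasher};

// Hypothetical cutoff; honest traffic essentially never exceeds it,
// so the treeify branch is correctly predicted in the common case.
const TREEIFY_THRESHOLD: usize = 8;

enum Bucket<K: Ord, V> {
    List(Vec<(K, V)>),       // fast path for normal traffic
    Tree(BTreeMap<K, V>),    // flood fallback: O(log n) lookups
}

struct FloodResistantMap<K: Ord + Hash, V> {
    buckets: Vec<Bucket<K, V>>,
}

impl<K: Ord + Hash, V> FloodResistantMap<K, V> {
    fn new(num_buckets: usize) -> Self {
        FloodResistantMap {
            buckets: (0..num_buckets).map(|_| Bucket::List(Vec::new())).collect(),
        }
    }

    fn bucket_of(&self, key: &K) -> usize {
        let mut h = DefaultHasher::new();
        key.hash(&mut h);
        (h.finish() as usize) % self.buckets.len()
    }

    fn insert(&mut self, key: K, value: V) {
        let i = self.bucket_of(&key);
        let bucket = std::mem::replace(&mut self.buckets[i], Bucket::List(Vec::new()));
        self.buckets[i] = match bucket {
            Bucket::List(mut list) => {
                if let Some(entry) = list.iter_mut().find(|(k, _)| *k == key) {
                    entry.1 = value;
                    Bucket::List(list)
                } else {
                    list.push((key, value));
                    if list.len() > TREEIFY_THRESHOLD {
                        // Suspiciously many collisions: degrade to a tree,
                        // which requires Ord on the keys.
                        Bucket::Tree(list.into_iter().collect())
                    } else {
                        Bucket::List(list)
                    }
                }
            }
            Bucket::Tree(mut tree) => {
                tree.insert(key, value);
                Bucket::Tree(tree)
            }
        };
    }

    fn get(&self, key: &K) -> Option<&V> {
        match &self.buckets[self.bucket_of(key)] {
            Bucket::List(list) => list.iter().find(|(k, _)| k == key).map(|(_, v)| v),
            Bucket::Tree(tree) => tree.get(key),
        }
    }
}

fn main() {
    // With a single bucket, every key "collides", simulating a flood.
    let mut m = FloodResistantMap::new(1);
    for i in 0..20 {
        m.insert(i, i * 10);
    }
    assert_eq!(m.get(&7), Some(&70));
    assert!(matches!(m.buckets[0], Bucket::Tree(_)));
}
```

Note how the `Ord` bound shows up exactly where the next paragraph says it must: the tree fallback needs totally ordered keys, which is the restriction Rust avoids imposing but Roc could derive automatically.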
One reason Rust might not do hash flooding defense this way is that in order to have an efficient tree, you need an `Ord` as well as a `Hash` implementation for the keys. Rust might not want to impose that restriction on all `HashMap` users, but in Roc we can automatically derive one with no extra work on the part of end users anyway.
Hash Flooding Mitigation: Re-Hashing
I looked into this a bit at one point, and it seemed that re-hashing could also be a way to mitigate hash flooding attacks. That wouldn't even require an ordering constraint. I haven't investigated it deeply though!
Hash Flooding Mitigation: Outside the Hash Map
There are various server-specific ways to defend against hash flooding. For example, to defend against hash flooding attacks using HTTP headers specifically, you could cap the number and length of HTTP headers, such that after encountering excessively long or excessively many headers, the request gets rejected immediately. This sort of thing should probably be done regardless (along with timeouts), as other DoS attacks rely only on sending lots of data.
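A minimal sketch of such a cap, with made-up limits (real servers would tune these, and would also enforce timeouts and a total request-size limit):

```rust
// Hypothetical limits, for illustration only.
const MAX_HEADERS: usize = 100;
const MAX_HEADER_BYTES: usize = 8 * 1024;

// Reject oversized requests before building any hash map at all,
// so a flood payload never reaches the data structure.
fn accept_headers(headers: &[(&str, &str)]) -> Result<(), &'static str> {
    if headers.len() > MAX_HEADERS {
        return Err("too many headers");
    }
    for (name, value) in headers {
        if name.len() + value.len() > MAX_HEADER_BYTES {
            return Err("header too long");
        }
    }
    Ok(())
}

fn main() {
    assert!(accept_headers(&[("Host", "example.com")]).is_ok());
    let flood: Vec<(&str, &str)> = (0..101).map(|_| ("x", "y")).collect();
    assert!(accept_headers(&flood).is_err());
}
```

A cap like this bounds the attacker's work factor per request, but as the next paragraph notes, data-structure-level defenses are still worth having as a default.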
However, assuming tree-based conflict resolution can be done at the insignificant runtime cost of one always-correctly-predicted branch (plus some code size), that would benefit platform and application authors alike in that they would be "default safe" in case they overlooked a particular attack vector.
Design Idea
It seems like a reasonable idea to start with `hashbrown` - which is what Rust used to speed up its standard `HashMap`, and which is based on Google's highly optimized SwissTable implementation. From there, we'd add the `Ord` requirement and the extra logic to fall back on a tree after a certain number of collisions.
Alternatively, we could avoid the `Ord` requirement by using rehashing. The basic idea would be that after a suspiciously high number of collisions (perhaps eight, for example) we redo the hash calculation, passing a tuple of `(val, salt)` instead of just `val`, in hopes that this will no longer collide. (The salt could be - for example - the total number of collisions we'd encountered so far, or in a linear probing scheme, the most recently tried bucket index.) The idea is that it would be infeasible to find a payload which not only triggered the initial collisions, but which also somehow continued to collide after salted rehashing.
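The salted-rehash idea can be sketched like this in Rust, with `DefaultHasher` standing in for whatever hash function Roc would actually use (the function name and parameters are hypothetical):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical sketch of salted rehashing: the normal slot comes from
// hash(val); after a suspicious number of collisions we instead hash the
// tuple (val, salt), sending colliding keys to new, hard-to-predict slots.
fn slot<T: Hash>(val: &T, salt: Option<u64>, num_slots: u64) -> u64 {
    let mut hasher = DefaultHasher::new();
    match salt {
        None => val.hash(&mut hasher),
        // Rehash with the salt mixed in; a payload crafted to collide
        // under the unsalted hash has no reason to still collide here.
        Some(s) => (val, s).hash(&mut hasher),
    }
    hasher.finish() % num_slots
}

fn main() {
    // Both paths are fully deterministic: the same input and salt always
    // land in the same slot, so referential transparency is preserved.
    assert_eq!(slot(&"header", None, 1024), slot(&"header", None, 1024));
    assert_eq!(slot(&"header", Some(3), 1024), slot(&"header", Some(3), 1024));
}
```

Because the salt is derived from observable state (e.g. the collision count) rather than a random seed, this keeps hashing deterministic, which is exactly the property the issue wants to preserve.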