
Vulnerability to collision attacks #319

Open
sjakobi opened this issue Sep 13, 2021 · 37 comments

Comments

@sjakobi
Member

sjakobi commented Sep 13, 2021

@NorfairKing's blog post describes a HashDoS attack on aeson which is enabled by unordered-containers' (u-c's) O(n) handling of hash collisions and its use of the same default salt for all hashing operations.

While possible mitigation measures for aeson are discussed in haskell/aeson#864, u-c should also prepare a proper response and possibly implement security features for users who are affected via aeson or in other ways.

In particular, I hope to find answers for the following questions in this thread (and possibly in separate sub-threads):

  1. What mitigation measures can affected u-c users enable in the short term?
  2. Is security against collision attacks a design goal for u-c?
    2a. If yes, to what extent should we trade performance and API bloat for security features?
  3. What mitigation measures should be implemented in u-c?

I'd also like to point out that I have very limited knowledge and experience with security issues, so I'd be very grateful if more experienced people could chime in and share their advice. :)

@sjakobi
Member Author

sjakobi commented Sep 13, 2021

  1. What mitigation measures can affected u-c users enable in the short term?

A few ideas – maybe some more experienced people can comment whether these are any good:

  • Switch to a collision-resistant hash function. I'm aware of SipHash, but there may be more suitable hash functions. This should make it harder for an attacker to produce colliding keys.
  • Build hashable with -frandom-init-seed, to make it slightly harder to produce colliding keys. The hashable maintainer doesn't recommend this, but it should be somewhat useful anyway. See this discussion on r/haskell.
  • Limit the size of any HashMaps and HashSets. When n is small, O(n) operations are less problematic.
  • Use Data.Map and Data.Set from containers instead of this package. These use the Ord methods for performing lookups and insertions, and therefore aren't vulnerable to collision attacks.
  • Experimental: Use the hashmap package which relies on Data.Map for storing any collisions. Note that this package doesn't offer a Strict API that ensures that map values are evaluated to WHNF. Maybe @RyanGlScott, @augustss or other users can comment in which cases this package is a suitable replacement for u-c.
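As a small illustration of the containers-based mitigation, here is a hedged sketch (the `countWords` function is hypothetical, not from any library) of keeping untrusted keys in a `Data.Map.Strict` instead of a `HashMap`:

```haskell
import qualified Data.Map.Strict as M

-- Hypothetical example: counting untrusted keys with Data.Map.Strict.
-- Lookups and inserts are O(log n) via Ord comparisons, so adversarial
-- hash collisions have no effect on performance.
countWords :: [String] -> M.Map String Int
countWords = foldr (\w -> M.insertWith (+) w 1) M.empty

main :: IO ()
main = print (M.lookup "a" (countWords ["a", "b", "a"]))  -- prints Just 2
```

The trade-off is the `Ord` constraint on keys and the loss of the hash table's expected O(1) lookups.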

EDIT 2021-10-03:

  • Build hashable with -frandom-init-seed, to make it slightly harder to produce colliding keys. The hashable maintainer doesn't recommend this, but it should be somewhat useful anyway. See this discussion on r/haskell.

Note that a random hash salt has very limited security benefits as long as a weak hash function like FNV is used. For FNV, it is possible to construct multi-collisions that collide no matter what salt they are hashed with: https://medium.com/@robertgrosse/generating-64-bit-hash-collisions-to-dos-python-5b21404a5306
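For context, here is a minimal sketch of 64-bit FNV-1a (illustrative only, not hashable's actual code). The salt enters only as the initial state; every subsequent step is the same fixed xor-and-multiply, which is the structural weakness that salt-independent multi-collision constructions exploit.

```haskell
import Data.Bits (xor)
import Data.Char (ord)
import Data.Word (Word64)

-- Minimal 64-bit FNV-1a, for illustration. The salt (normally the fixed
-- offset basis 0xcbf29ce484222325) only sets the initial state; the
-- per-byte step is identical for every salt.
fnv1a :: Word64 -> String -> Word64
fnv1a salt = foldl step salt
  where
    fnvPrime = 0x100000001b3
    step h c = (h `xor` fromIntegral (ord c)) * fnvPrime
```

With the standard offset basis as the salt, `fnv1a 0xcbf29ce484222325 "a"` yields the well-known test vector `0xaf63dc4c8601ec8c`.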

@dhess

dhess commented Sep 13, 2021

Thanks for looking into this!

My only bit of feedback is this: because there are existing, deployed services that are vulnerable to this attack, and because shutting those services down or replacing them with something that doesn't use aeson is not feasible, a timely short-term fix is just as important as whatever long-term, proper fix the various parties come up with.

In other words, please let's not let perfect be the enemy of good-enough-for-now here. Even something that makes producing collisions slightly more difficult is helpful at this stage, IMO.

I'm encouraged that you've jumped right into mitigations in your first 2 posts, so it looks like this discussion is off to a great start!

@NorfairKing

NorfairKing commented Sep 13, 2021

In other words, please let's not let perfect be the enemy of good-enough-for-now here.

  • -frandom-init-seed is good enough for the very short term (AFAICT)

  • The collisionless containers approach solves the specific exploit that I've built, but doesn't guarantee that there's no other cheap way to produce an exploit.

@treeowl
Collaborator

treeowl commented Sep 13, 2021

Is there a way to mitigate the performance degradation of a random seed? How bad does it measure out?

@brandon-leapyear

brandon-leapyear commented Sep 13, 2021

This is an old work account. Please reference @brandonchinn178 for all future communication


I also opened this issue for another possible mitigation: haskell-unordered-containers/hashable#218

Is there a way to mitigate the performance degradation of a random seed? How bad does it measure out?

IIUC it's not that a random seed will reduce performance, it's that one would get different hash values every time one restarts the application

@NorfairKing

More ideas:

  • IMHO we should also seriously consider the collisionless containers idea (in the long term)
  • A createHashmap :: IO (HashMap k v) api where the hashmap stores its own (randomly generated) salt

@brandon-leapyear

brandon-leapyear commented Sep 13, 2021

This is an old work account. Please reference @brandonchinn178 for all future communication


+1 to the createHashMap idea, in addition to createHashMapWith :: Int -> HashMap k v to get deterministic seeds (e.g. for getting deterministic results for tests)

@ysangkok

Why should the function be in IO? We already have a class that captures the need for random state: RandomGen. A PRNG would be good enough to solve this problem; real randomness is not needed.

@treeowl
Collaborator

treeowl commented Sep 13, 2021

I also opened this issue for another possible mitigation: haskell-unordered-containers/hashable#218

Is there a way to mitigate the performance degradation of a random seed? How bad does it measure out?

IIUC it's not that a random seed will reduce performance, it's that one would get different hash values every time one restarts the application

A random seed will most likely be implemented something like this:

the_random_seed :: Seed
the_random_seed = unsafePerformIO ...
{-# NOINLINE the_random_seed #-}

This means that every (initial) seed access has to check a tag and follow a pointer. The seed will almost always be in L1 cache when heavy HashMap use is happening, but we should check that it's not too bad to access it.
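A runnable sketch of that pattern (the seed source here, getCPUTime, is purely a stand-in for illustration; a real implementation would read the OS entropy source):

```haskell
import System.CPUTime (getCPUTime)
import System.IO.Unsafe (unsafePerformIO)

-- Global-seed pattern sketch. getCPUTime is only a stand-in for a real
-- entropy source. NOINLINE ensures the unsafePerformIO action runs at
-- most once, so every access observes the same value; each access still
-- has to check the thunk's tag and follow a pointer to the shared value.
theRandomSeed :: Integer
theRandomSeed = unsafePerformIO getCPUTime
{-# NOINLINE theRandomSeed #-}
```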

@sjakobi
Member Author

sjakobi commented Sep 13, 2021

Here's some earlier discussion with @tibbe and @infinity0 on mitigating collision attacks: #265 (comment)

The gist is that in order to make it sufficiently hard for an attacker to produce hash collisions, you need both a strong hash function like SipHash and a random seed.

The problem with SipHash is that apparently it's so slow that you might as well switch to Data.Map – at least that's what was mentioned in our internal discussions. Nevertheless, a hashable patch that uses SipHash for the Text instance seems like a reasonable short-term mitigation measure when combined with -frandom-init-seed.


Regarding the proposed fix in #217, @tibbe's assessment was that it would still require a strong hash function and a random seed to be reasonably secure. With many weaker hash functions, it is possible to generate seed-independent collisions, see e.g. this blog post. By this assessment, #217 "adds" little security of its own.

@sjakobi
Member Author

sjakobi commented Sep 13, 2021

  • A createHashmap :: IO (HashMap k v) api where the hashmap stores its own (randomly generated) salt

Storing the salt within the HashMap was proposed in #45. Maybe we can use that issue to discuss the details of this idea.

@treeowl
Collaborator

treeowl commented Sep 13, 2021

Storing salt in a map leads to big problems with intersection, union, and difference.

@Vlix

Vlix commented Sep 13, 2021

If you'd store the salt in the map, it would also be a good idea to have a hashMapToSalt :: HashMap k v -> Int, so that you could create new maps with the same salt for doing intersection, union, and difference.
Though I guess you might then also need a fromListWithSalt :: Int -> [(k, v)] -> HashMap k v, or a fromListWith/foldInto :: HashMap k v -> [(k, v)] -> HashMap k v
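The shape of that API could look something like this; all names here are hypothetical, and a plain association list stands in for the real HAMT:

```haskell
-- Hypothetical API shape only: a map that carries its own salt.
-- An association list stands in for the real HAMT structure.
data SaltedMap k v = SaltedMap
  { smSalt    :: Int       -- the salt this map's keys were hashed with
  , smEntries :: [(k, v)]
  }

fromListWithSalt :: Int -> [(k, v)] -> SaltedMap k v
fromListWithSalt = SaltedMap

hashMapToSalt :: SaltedMap k v -> Int
hashMapToSalt = smSalt
```

Binary operations like intersection could then check that both arguments report the same salt before merging.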

@tibbe
Collaborator

tibbe commented Sep 13, 2021

Hit-and-run comment: you often need to have a secure hash function even in libraries that you don't control (e.g. in some http library) so explicitly creating hash maps using a Text newtype or an explicitly-passed seed isn't always an option.

@sjakobi
Member Author

sjakobi commented Sep 15, 2021

2. Is security against collision attacks a design goal for u-c?
2a. If yes, to what extent should we trade performance and API bloat for security features?

Here are my current thoughts on these questions, much inspired by the internal discussion with @tibbe:

u-c is primarily designed for high performance. The most popular alternatives to u-c's HashMap and HashSet are Data.Map and Data.Set from containers, which are safe against collision attacks by design, and also more ergonomic: toList reliably returns its elements in ascending key order.

Therefore most users will only choose u-c when it offers better performance than containers, or when their key type has no Ord instance.

So, while I think that making u-c safer to use with untrusted inputs is desirable, work towards that goal only makes sense as long as the performance advantage over containers can be retained.

Therefore, for any security schemes that require a strong hash function, we'll first need to find a hash function that is fast enough. The SipHash versions/implementations that have been tried so far were reported to be too slow.

For the choice of hash function I'll need to rely on expert opinions: Are, say, Blake3 or HighwayHash strong enough? For the performance testing we'll need benchmarks.

@sjakobi
Member Author

sjakobi commented Sep 15, 2021

What about mitigation measures that do not require a strong hash function?!

The main idea that I'm aware of is to store collisions in a Data.Map, so lookups are O(log n) even in the worst case. The main downside to adopting this approach in u-c is that we'd have to add Ord constraints for the keys. u-c would become unusable with key types that don't have an Ord instance.

Therefore I think that, rather than adopting this approach in u-c, interested people should try to revamp the hashmap package, which already uses this approach.
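The collision-bucket idea can be sketched as follows (illustrative only, not hashmap's actual internals):

```haskell
import qualified Data.Map.Strict as M

-- Sketch of the collision-bucket idea: distinct keys that share a hash go
-- into a Data.Map, so a worst-case lookup is O(log n) instead of O(n).
-- Note the Ord constraint this forces onto the key type.
data Bucket k v
  = Single !k v
  | Collisions !(M.Map k v)

bucketLookup :: Ord k => k -> Bucket k v -> Maybe v
bucketLookup k (Single k' v)
  | k == k'   = Just v
  | otherwise = Nothing
bucketLookup k (Collisions m) = M.lookup k m
```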

@sjakobi
Member Author

sjakobi commented Sep 15, 2021

I've opened #320 to add a security advisory to the package description. We can update the recommendation in it once we have a better story for mitigating collision attacks.


@PaulJohnson

How about using two hash functions?

The top level hash table uses a fast but vulnerable hash. Normally you expect a small number of collisions and you can resolve these by linear search in the list of colliding values.

If the number of entries in a collision bucket exceeds some threshold, you start storing them in a new hash table which employs a slow but collision-resistant hash. The threshold is chosen so that the time to perform a linear search equals the time needed to compute the slow hash value.

At this point the upper bound on the CPU time for processing a new value is the time required by the slow hash function, but non-pathological inputs will still be fast.

@treeowl
Collaborator

treeowl commented Sep 15, 2021

@tibbe, I think @PaulJohnson's comment sounds extremely reasonable. How do you think that'll affect Hashable integration?

@sjakobi
Member Author

sjakobi commented Sep 15, 2021

@PaulJohnson I think this sounds interesting!

I guess this would involve Hashable gaining a hashStrong method or similar. (CC @phadej)

I wonder what the implementation in u-c would look like. You'd need to distinguish the outer and inner HashMap types, since I assume only the outer map would have the special collision handling.

I believe you'd still need to use a random seed, since with a known seed, an attacker could still pre-compute colliding inputs.

@phadej
Contributor

phadej commented Sep 15, 2021

I don't see a need for u-c to become usable in hostile environments. That would force trade-offs. I hope that u-c concentrates on being fast(er than Map); otherwise it becomes a useless package. The same applies to hashable.

aeson will allow users to switch to an ordered Map representation, which probably won't even cause a considerable performance hit.


@PaulJohnson's idea doesn't work IMO. An attacker will simply use FNV-colliding inputs, forcing the hash map to degenerate into an ordinary hash map with the stronger hash function. This probably won't DDoS your system, but it will burn more CPU than needed.

Someone who has time (there seem to be plenty of people) could benchmark what would happen if, say, SHA-256 or BLAKE were used in HashMap versus Map, to see at which point there is a payoff.

The last time I did benchmarks myself, for ordNub vs hashNub (https://oleg.fi/gists/posts/2021-01-07-discrimination-benchmarks.html), n had to be very big before hashNub won (both comparison and hashing of e.g. a UUID are fast). Note how the linear-time discrimination nub is slower than the O(n log n) ordNub: constant factors matter.

Recall, the adversarial input used in the blog post was 5M; that's easy to limit too (an API request should fit in, say, 100k; bulk data uploads can be handled separately).

@PaulJohnson

PaulJohnson commented Sep 16, 2021

@sjakobi points out that an attacker could pre-compute colliding inputs.

That looks like a major issue, which makes the whole "collision resistant hash" thing moot. Suppose we used a cryptographic hash function for maximum resistance. That hash still has to be reduced to N buckets. If you know N and you know the reduction algorithm then you can find collisions in N just by exhaustive search without needing to find collisions in the hash function as a whole. If N=100 then 1% of your trials will produce a desired collision, so producing a few million is not going to be difficult. In fact given the O(n^2) collision algorithm in u-c an attacker with a graphics card could probably do it faster than the target machine can process the data.
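The bucket-reduction point is easy to demonstrate: with N buckets, roughly 1/N of arbitrary inputs land in any chosen bucket, so colliding inputs can be enumerated without breaking the hash function itself. A sketch, where `weakHash` is a made-up stand-in for any hash, even a cryptographic one:

```haskell
-- Demonstration of bucket collisions by exhaustive search. weakHash is a
-- stand-in for an arbitrary hash; what matters is only the final
-- reduction to 100 buckets via `mod`.
weakHash :: Int -> Int
weakHash x = x * 31 + 7

bucketOf :: Int -> Int -> Int
bucketOf nBuckets h = h `mod` nBuckets

-- Enumerate inputs that all land in bucket 0 of a 100-bucket table.
colliders :: Int -> [Int]
colliders count =
  take count [ x | x <- [0 ..], bucketOf 100 (weakHash x) == 0 ]
```

For example, `colliders 3` finds `[3, 103, 203]`: one input per hundred candidates, exactly the 1% hit rate described above.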

Let's suppose that my two-level hash idea is implemented, with only the strong hash having a random salt. Now the result of toList will be almost deterministic, but not quite; sometimes the strong hash gets used with a salt, and at that point the output becomes non-deterministic. That sounds like a recipe for heisenbugs in applications, which would be a Bad Thing. Better just to make the whole hash table non-deterministic and be done with it. That way, if someone depends on a consistent order from toList, they will notice the problem immediately.

I therefore withdraw my two-level hash proposal.

This also means that we need to think about how easy it would be to recover the salt by observing the ordering in toList. If an attacker can recover a random salt then they can perform an attack.

@PaulJohnson

@phadej suggests that vulnerable machines can limit their vulnerability by limiting the size of the input.

Unfortunately that isn't really a solution. It is true that limiting the input to 100k down from 5M would make the problem 50^2 = 2,500 times smaller, but if you can send lots of 100k JSON items then you can still perform an effective DoS attack. And on the other hand, arbitrary size limits on inputs tend to create other headaches for software that automatically generates data to send. So at best it's just a workaround.

@phadej
Contributor

phadej commented Sep 16, 2021

It's a pragmatic workaround. Unbounded input is always a security concern. Rate-limiting is a good idea as well (all APIs I have used recently are either rate-limited, charge per request, or both).

Here we are trying to solve some very specific problems (that not everyone has) in the wrong place, IMO. The need for a hardened O(1)-lookup associative table feels contrived. What's wrong with Map in these environments?

@tibbe
Collaborator

tibbe commented Sep 16, 2021

I don't actually have much time for this, so I'll weigh in one more time but leave the discussion and the decisions to the current maintainer(s):

  • As others have pointed out, u-c needs to be significantly faster (and ideally use less memory) than containers, otherwise it serves no purpose whatsoever (with some tiny caveat around Ord instances).

  • A guiding principle, especially when dealing with security topics where it's easy to have a solution that looks secure but turns out not to be, is to look at prior art. What did other language communities do here? Why? I guess this is kinda like "don't write your own crypto".

    • In particular, why didn't all hash tables just add some red-black trees for their collisions years ago? Why didn't they do it when various attacks surfaced (around the time of SipHash)?
  • Input size limitation is always a good thing but, if I recall correctly, the original SipHash paper mentioned that you can create a large collision with not a whole lot of data (e.g. the amount of data you could fit in a normal JSON payload size of an HTTP POST).

  • Adding Ord to the API is harder than it looks. It's not just that it could break direct users of u-c. Other libraries that rely on it might have to add Ord to their APIs and so on.

    • In general, when making breaking API changes, assume you have underestimated the consequences of doing so.
  • I wish the "just use Map" advice would work in all cases (and it does in some), but unfortunately you might end up with a situation where some library, whose author didn't consider the security implications or just didn't think the library would be used in a setting where it might receive untrusted inputs, uses a HashMap internally.

    For example, you might have some MIME types library that internally uses a HashMap and then some web server uses that library and thus becomes vulnerable. Maybe not the best example as MIME types are probably often used in web servers but you get the idea.

@tibbe
Collaborator

tibbe commented Sep 16, 2021

To add to the last point: one of the real challenges here is to fix the problem of DoS attacks due to hash collisions in code that didn't know it would have to care about such things. This is why the solution that @bos and I attempted tried to

  • use a global random salt, instead of having users try to pass one in the API,
  • always use a strong hash function (SipHash).

Unfortunately it didn't work out (we had performance issues, and segfaults when trying to deal with them via low-level coding).

@jappeace

jappeace commented Sep 20, 2021

I had an idea for doing a per HashMap salt which I've not seen in this discussion yet:

-- same constants as hashable.
#if WORD_SIZE_IN_BITS == 64
type DefaultSalt = -2578643520546668380  -- 0xdc36d1615b7400a4
#else
type DefaultSalt = 0x087fc72c
#endif

data HashMapT (salt :: Nat) k v
     = Empty
     | BitmapIndexed !Bitmap !(A.Array (HashMapT salt k v))
     | Leaf !Hash !(Leaf k v)
     | Full !(A.Array (HashMapT salt k v))
     | Collision !Hash !(A.Array (Leaf k v))
       deriving (Typeable)
 
 -- backwards compatibility
type HashMap = HashMapT DefaultSalt

Then modify the functions to be free of salt if they can, for example insert:

insert :: forall k v salt . (Eq k, Hashable k, KnownNat salt) => k -> v -> HashMapT salt k v -> HashMapT salt k v
insert k v m = insert' (hashWithSalt salt k) k v m
  where
    salt = fromIntegral (natVal (Proxy :: Proxy salt))

This allows the default HashMap with backwards compatibility, but also any other HashMapT
I think this solves the issue with having different salts in an intersect:

intersection :: (Eq k, Hashable k) => HashMapT salt k v -> HashMapT salt k w -> HashMapT salt k v

Because salt is the same type variable for all arguments, it's enforced to be the same.
Then you can also provide a function to resalt if the user ever ends up with different salts and still wants to do an intersect. (which results in a reconstruction of the hashmap).
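A self-contained sketch of the type discipline behind this proposal (an association list stands in for the real HAMT; only the salt-indexing is illustrated):

```haskell
{-# LANGUAGE DataKinds, KindSignatures, ScopedTypeVariables #-}

import Data.Proxy (Proxy (..))
import GHC.TypeLits (KnownNat, Nat, natVal)

-- Sketch of the salt-indexed map idea. An association list stands in for
-- the real HAMT; only the type-level salt discipline is illustrated.
newtype HashMapT (salt :: Nat) k v = HashMapT [(k, v)]

-- Recover the salt from the type index.
saltOf :: forall salt k v. KnownNat salt => HashMapT salt k v -> Integer
saltOf _ = natVal (Proxy :: Proxy salt)

-- Changing the salt means rebuilding: a real HAMT would rehash every key.
resalt :: HashMapT s1 k v -> HashMapT s2 k v
resalt (HashMapT kvs) = HashMapT kvs
```

Because intersection-style operations would share one `salt` variable across all arguments, mixing maps with different salts is a type error unless you go through `resalt` first.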


@phadej
Contributor

phadej commented Sep 20, 2021

... and then every use site would have to propagate the salt, e.g. aeson would have data Value salt = Object (HashMap salt Text (Value salt))? Looks like ST s to me. Doable, but not very convenient.

@jappeace

@treeowl
I'll try it out. I think I can at least constrain the size of the Nat to be at most maxBound :: Word, and then (maybe) prove I get only the unboxed branch (not a BigNat). But even if that wouldn't work and it's boxed, I kind of want to see how bad it gets.

@phadej
Yes, if salts are generated at runtime they need to be captured in a closure, because of the pattern matching on SomeNat from someNatVal.
But a client could also generate them at compile time and put them at the type level with TH (so usage becomes like DefaultSalt).

jappeace added a commit to jappeace/unordered-containers that referenced this issue Sep 22, 2021
This allows clients to create custom salted hashmaps.
For backwards compatibility we use

```haskell
 -- backwards compatibility
type HashMap = HashMapT DefaultSalt
```

Then modify the functions to be free of salt if they can, for example insert:

```haskell
insert :: forall k v salt . (Eq k, Hashable k, KnownNat salt) => k -> v -> HashMapT salt k v -> HashMapT salt k v
insert k v m = insert' (hashWithSalt salt k) k v m
  where
    salt = fromIntegral (natVal (Proxy :: Proxy salt))
```

This allows the default HashMap with backwards compatibility,
but also any other HashMapT
I think this solves the issue with having different salts in
an intersect:

```haskell
intersection :: (Eq k, Hashable k) => HashMapT salt k v -> HashMapT salt k w -> HashMapT salt k v
```

Because salt is the same type variable for all arguments,
it's enforced to be the same.
Then you can also provide a function to resalt if the user ever ends
up with different salts and still wants to do an intersect.
(which results in a reconstruction of the hashmap).

See thread: haskell-unordered-containers#319

I know these libraries will at least fail with the changes because they have instances on `HashMap`,
which now need to be `HashMapT salt`:

quickcheck-instances-0.3.25.2.drv
semirings-0.6.drv
relude-0.7.0.0.drv
semigroupoids-5.3.5.drv

I did this by stubbing out unordered containers in a 100k loc codebase
to see what issues would come up in CI.
@jappeace

For lack of better ideas I implemented this: #321

@sjakobi
Member Author

sjakobi commented Sep 27, 2021

I have rekindled #265 to discuss approaches for making the hash salt less predictable to attackers.

I'm reluctant to invest much time in that debate while I'm not aware of a hash function that would make the whole fuss worthwhile (see #319 (comment)).

If anyone's interested, I think finding an appropriate hash function would be the best way to make progress on this issue.

I noticed that rust currently uses SipHash 1-3:

The default hashing algorithm is currently SipHash 1-3, though this is subject to change at any point in the future. While its performance is very competitive for medium sized keys, other hashing algorithms will outperform it for small keys such as integers as well as large keys such as long strings, though those algorithms will typically not protect against attacks such as HashDoS.

@tibbe, is that the same SipHash variant that you tried? https://en.wikipedia.org/wiki/SipHash#Overview indicates that SipHash 2-4 might be more common, but also slower.

@jappeace

jappeace commented Sep 27, 2021

I did some digging: SipHash was still available in hashable in 2012: https://github.com/haskell-unordered-containers/hashable/tree/fea8260b9e0c0596fc7ef0c608364b3960649f26/cbits

It's quite easy to change that code to SipHash-1-3 (the numbers just indicate the number of compression/finalization rounds).

I don't know how recently that SipHash implementation was tested for performance, but it looks relatively easy to add it back into hashable and run the benchmarks once more. Perhaps it would be possible to speed it up? Maybe we can try?


The SipHash reference implementation (the paper explaining it is linked there as well)

@sjakobi
Member Author

sjakobi commented Sep 29, 2021

@jappeace Good find! Yeah, it would be interesting to set up benchmarks with that code.

The HighwayHash project also contains a supposedly faster SipHash implementation that we could give a spin.

@jberryman
Contributor

A little OT, but in case it comes up and since I haven't written about this anywhere: I started a project a few years ago named hashabler that aimed to do a few things:

  1. make hashable modular by separating choice of hash function from the byte-stream-supplying code (i.e. the Hashable instance)
  2. Document what makes the latter instances principled, and supply such instances
  3. Define some efficient implementations of alternative hash functions (SipHash among them)
  4. Offer a stable hash for certain types, suitable for serializing and storing and comparing across platforms

Unfortunately I realized I didn't really understand (2) until after I'd made a couple releases and towards the end of a big rewrite, etc. I managed to get the library about 2/3 of the way fixed up but lost steam and haven't had the energy to pick it up since.

But (2) is interesting and, I think, not very well understood. The short version is that a composable hashing library like hashable essentially needs to do the same work as a serialization library: the byte-supplying function (the Hashable instance) needs to represent a uniquely decodable code.

The point being: you can get collisions from your choice of hash function, but also from the Hashable instance itself, even in the presence of a perfect hash function (well, in theory; the obvious bad instances in hashable itself, like (IIRC) ([a], [a]), were fixed a long time ago, but perhaps in an ad hoc way, and without documenting how users should avoid the same mistake).
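A tiny illustration of the unique-decodability point, using strings in place of byte streams (the encoder functions are hypothetical, not hashable's actual instances):

```haskell
-- Unique decodability illustrated with strings in place of byte streams.
-- Naively concatenating the encodings of a pair of lists loses the
-- boundary, so distinct values feed identical bytes to the hash function;
-- no hash, however strong, can then tell them apart.
naiveEncode :: (String, String) -> String
naiveEncode (xs, ys) = xs ++ ys

-- Prefixing the first component's length restores unique decodability.
taggedEncode :: (String, String) -> String
taggedEncode (xs, ys) = show (length xs) ++ ":" ++ xs ++ ys
```

Here `naiveEncode ("ab", "c")` and `naiveEncode ("a", "bc")` both produce `"abc"`, a guaranteed collision, while the tagged versions stay distinct.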

FWIW I'd like to brush that work off some day. In the interim I've had the thought that I could use Backpack to allow the choice of hash function to be configurable, without it needing to be part of the regular public API (so that e.g. containers could use it transparently).

You're welcome to steal (with attribution) my SipHash implementation here if helpful: https://github.com/jberryman/hashabler/blob/master/src/Data/Hashabler/SipHash.hs . In my local version I have a somewhat faster implementation that uses handwritten asm (only because GHC doesn't have bitwise-rotate primops).

@sjakobi
Member Author

sjakobi commented Oct 9, 2021

For reference: aeson users can now avoid this vulnerability by enabling aeson's new ordered-keymap flag which makes aeson use Data.Map.Strict for storing JSON objects: https://hackage.haskell.org/package/aeson-2.0.1.0/changelog
