Typo in get_impl? #6
I believe there is a typo in `get_impl` here: https://github.com/boundary/high-scale-lib/blob/master/src/main/java/org/cliffc/high_scale_lib/NonBlockingHashMap.java#L540

The line should instead read `K == TOMBSTONE`. You'll note that `key` is what the user passed in, and users should never try to retrieve a `TOMBSTONE`. In fact, I think Java's type safety prevents them from even getting a reference to the `TOMBSTONE`.

This typo can affect the safety and efficiency of the `get` operation, as the hash table is no longer linearizable. A write that is copied to the new table will be marked with a `TOMBSTONE` in the old table. If the copying and the get race, the get could see a `null` and return the `null`, even though it should instead begin looking in the next table. It's a small race, but it's there.

It's also less efficient to reprobe up to `reprobe_limit` on larger tables, but what's a few extra cycles among friends ;-).
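To make the reported race easier to see, here is a minimal, single-threaded sketch of a probe loop with the suggested `K == TOMBSTONE` check in place. This is not the library's code: the class, the field names, and the `reprobeLimit` helper are invented for illustration, and the concurrent copying that actually triggers the race is not modeled.

```java
// Toy open-addressing table illustrating the dead-slot check discussed above.
// Hypothetical sketch only: single-threaded, no resize logic, invented names.
final class ToyProbeTable {
  static final Object TOMBSTONE = new Object(); // sentinel marking a dead slot

  private final Object[] keys;
  private final Object[] vals;
  ToyProbeTable newTable;                       // next table in a resize chain, if any

  ToyProbeTable(int size) {                     // size is assumed to be a power of two
    keys = new Object[size];
    vals = new Object[size];
  }

  private static int reprobeLimit(int len) { return 10 + (len >> 2); }

  // Minimal insert, just enough to exercise get(); no delete or resize here.
  void put(Object key, Object val) {
    int len = keys.length;
    int idx = key.hashCode() & (len - 1);
    while (keys[idx] != null && !key.equals(keys[idx]))
      idx = (idx + 1) & (len - 1);              // linear probe to a free or matching slot
    keys[idx] = key;
    vals[idx] = val;
  }

  Object get(Object key) {
    int len = keys.length;
    int idx = key.hashCode() & (len - 1);
    int reprobes = 0;
    while (true) {
      Object K = keys[idx];                     // the key stored in this slot
      if (K == null) return null;               // never-used slot: a clean miss in this table
      if (key.equals(K)) return vals[idx];      // hit

      // The dead-slot test must look at K, the slot's key. Callers can never
      // pass TOMBSTONE, so testing `key == TOMBSTONE` (the reported typo) is
      // always false: dead slots are treated as ordinary mismatches and the
      // lookup can miss in this table instead of continuing in newTable.
      if (++reprobes >= reprobeLimit(len) || K == TOMBSTONE)
        return newTable == null ? null : newTable.get(key);

      idx = (idx + 1) & (len - 1);              // linear reprobe
    }
  }
}
```

With the typo, a probe that passes a dead `TOMBSTONE` slot just keeps going; if it later reaches a never-used `null` slot it reports a miss from the old table, even though the key may by then live only in the next table. That is roughly the race described above, and the same reasoning would apply wherever the check is repeated, such as the `putIfMatch` case mentioned below.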
Comments

Ditto for `putIfMatch`.

There are a couple other race conditions as well. If this lib is actively used, I'm happy to report them, but I'd like to avoid typing them up if the effort would be wasted.

Yes, please do; it's in active use in a number of different places.

Here are the other major "gotcha" cases I found. For reference, my C++ implementation is here and is what we're using in HyperDex now. The resize method makes a chain of inner tables. Although it's extremely unlikely, it's possible for the recursive […]

I also thought the counter implementation was racy during a resize, but it looks like it's doing the right thing.

The other issue I forgot about and didn't include was the "clear" call. It doesn't behave well with resizes, especially stacked resizes. I opted to remove it completely.