Replace HashMap implementation with SwissTable #56241
Conversation
r? @sfackler (rust_highfive has picked a reviewer for you, use r? to override)
@bors try cc @rust-lang/infra, this should get a perf run as soon as the try build finishes (it's needed for the second day of impl days at rustfest rome)
⌛ Trying commit cfd3225 with merge 5aaebb98d2620392ef4b74147a89b4ec1b024455...
@bors try delegate+
✌️ @Amanieu can now approve this pull request
⌛ Trying commit 03db6f4 with merge 73919a1b0a20e544eac9e8b30869b34170590d8e...
💔 Test failed - status-travis
It seems that the new
@Amanieu The workaround is to remove
Oh, wait a second, this PR doesn't actually modify
Did the previous
@petrochenkov There are no ordering guarantees in
I am currently trying to narrow down the exact hash map that is causing the issue. I can confirm that switching only
Note that it is also possible that there is a bug in the new
Fix: order of elements with a hash collision, not the general order.
Yes, it's better to bisect all uses of
Ping from triage: @sfackler, have you had time to review this PR?
@alexcrichton suggested that instead of copying hashbrown into libstd, we could have libstd import hashbrown directly as an external crate.
Ping from triage @Amanieu / @alexcrichton: What are the plans for this PR?
(still on medical leave) I do not think we should extern hashbrown without the review we would require to merge this code
I would prefer to:
@gankro get well soon and don't stress yourself about the review!
I'm going to leave the PR as it is until the review is finished. I will be integrating any feedback back into the hashbrown repo.
Really excited for this commit. Great work!
Thanks for the update!
Couldn't agree more, get better soon! I'm marking this as blocked for the moment, so it doesn't show up during the regular PR triage.
It is exciting to see SwissTable making it into the Rust standard library! Thanks @Amanieu for doing all the work.
}

fn make_hash<K: Hash + ?Sized>(hash_builder: &impl BuildHasher, val: &K) -> u64 {
    let mut state = hash_builder.build_hasher();
    val.hash(&mut state);
    state.finish()
}
One potential mitigation to the quadratic insertion bug is to always hash the pointer to the buckets as well as the key. This completely eliminates the quadratic behavior on the merge_dos benchmark, but it slows down resizing the buckets array because it effectively reshuffles all elements on every resize, which has worse cache performance than the current approach.
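A rough sketch of what that mitigation could look like (hypothetical helper, not code from this PR): the address of the current bucket allocation is folded into every hash, so the distribution changes on every resize.

```rust
use std::collections::hash_map::RandomState;
use std::hash::{BuildHasher, Hash, Hasher};

// Hypothetical variant of a make_hash helper that also mixes in the address
// of the current bucket allocation, so all hashes change whenever the table
// reallocates. This defeats the quadratic merge pattern, at the cost of
// reshuffling (and touching) every element on each resize.
fn make_hash_mixed<K: Hash + ?Sized>(
    hash_builder: &impl BuildHasher,
    buckets_addr: usize,
    val: &K,
) -> u64 {
    let mut state = hash_builder.build_hasher();
    buckets_addr.hash(&mut state);
    val.hash(&mut state);
    state.finish()
}

fn main() {
    let hb = RandomState::new();
    // Stand-ins for the allocation address before and after a resize.
    let (before, after) = (0x1000usize, 0x2000usize);
    // The same key almost certainly maps to a different slot after a resize.
    println!("{:x} vs {:x}",
             make_hash_mixed(&hb, before, "key"),
             make_hash_mixed(&hb, after, "key"));
}
```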
I considered that but I don't feel comfortable mixing address bits into the hash key. I think that this could allow an attacker to defeat address randomization by inferring address bits from looking at the iteration order of a hash table.
src/libstd/collections/hash/map.rs
// Gotta resize now.
// Ideally we would put this in VacantEntry::insert, but Entry is not
// generic over the BuildHasher and adding a generic parameter would be
// a breaking change.
Why can't you move the `reserve(1)` inside the `else` block?
Because the reserve could invalidate the result of the lookup if it rehashes the table.
It is ok if it does. In most cases there is no need to rehash. If a rehash happens, the cost of an additional probe round is insignificant. Right now this `reserve(1)` is breaking the API guarantees: `reserve(N)` followed by N `entry(K).insert(V)` calls will cause a rehash even though it is specified that it will not.

Edit: also, the way this is coded, you can defer the `reserve(1)` until the entry is inserted, right?
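The guarantee being referred to can be illustrated with a small sketch against today's public `HashMap` API (an expectation check written for this discussion, not a test from the PR):

```rust
use std::collections::HashMap;

fn main() {
    let n = 100;
    let mut map: HashMap<u32, u32> = HashMap::new();

    // reserve(n) documents that n further insertions will not reallocate.
    map.reserve(n);
    let cap_after_reserve = map.capacity();

    for i in 0..n as u32 {
        // If entry() eagerly called reserve(1), this loop could rehash/grow
        // the table even though the reserve(n) above already made room.
        *map.entry(i).or_insert(0) += 1;
    }

    // Expected: no reallocation happened, so the capacity is unchanged.
    assert_eq!(map.capacity(), cap_after_reserve);
}
```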
Sorry, you were right, this reserve call should indeed be in the `else` block.

We can't defer the reserve call until the entry is inserted because `Entry` would need to hold a reference to the hasher, which is not possible since in libstd `Entry` is `Entry<K, V>` instead of `Entry<K, V, S>`, and changing that would be a breaking change.
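A minimal sketch of the signature issue (simplified types, not the actual libstd definitions): to defer the grow until `insert()`, the vacant entry would have to borrow the build hasher, which forces the extra `S` type parameter onto the public type.

```rust
use std::hash::BuildHasher;
use std::marker::PhantomData;

// Roughly the shape of today's public type: no hasher parameter, so a
// deferred insert() would have no way to rehash the table if it is full.
#[allow(dead_code)]
pub struct VacantEntryToday<'a, K, V> {
    key: K,
    // pointer/index into the table omitted in this sketch
    _table: PhantomData<&'a mut (K, V)>,
}

// What deferring the reserve would require: the entry also borrows the
// BuildHasher, i.e. the public type grows an S parameter, which is a
// breaking change for std.
#[allow(dead_code)]
pub struct VacantEntryDeferred<'a, K, V, S: BuildHasher> {
    key: K,
    hash_builder: &'a S,
    _table: PhantomData<&'a mut (K, V)>,
}

fn main() {}
```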
key: K,
elem: VacantEntryState<K, V, &'a mut RawTable<K, V>>,
table: &'a mut RawTable<(K, V)>,
To get the `VacantEntry` we already searched the table and we know exactly in which `Bucket` we will insert. I think this struct needs to hold a `Bucket`, otherwise `insert()` will need to probe again.
There are different constraints when searching for an element vs searching for an insertion slot. In particular, we ignore (skip over) deleted buckets in the former case while in the latter case we can insert on top of a deleted bucket.
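To make the distinction concrete, here is a toy open-addressing sketch (not SwissTable's control-byte scheme, just an illustration written for this note) showing why an element search must skip tombstones while an insertion-slot search can stop at one:

```rust
// Toy open-addressing table, used only to illustrate how probing for a lookup
// differs from probing for an insertion slot.
#[derive(Clone)]
enum Slot {
    Empty,
    Deleted, // tombstone left behind by a removal
    Full(u64 /* key */),
}

// A lookup must skip tombstones: the key may live further along the probe chain.
fn find(slots: &[Slot], hash: u64, key: u64) -> Option<usize> {
    let mask = slots.len() - 1;
    let mut i = (hash as usize) & mask;
    loop {
        match slots[i] {
            Slot::Full(k) if k == key => return Some(i),
            Slot::Empty => return None, // end of the probe chain
            _ => i = (i + 1) & mask,    // skip other keys and tombstones
        }
    }
}

// An insertion can reuse the first empty slot *or* tombstone it sees, so it
// may stop earlier than a lookup with the same hash.
fn find_insert_slot(slots: &[Slot], hash: u64) -> usize {
    let mask = slots.len() - 1;
    let mut i = (hash as usize) & mask;
    loop {
        match slots[i] {
            Slot::Empty | Slot::Deleted => return i,
            _ => i = (i + 1) & mask,
        }
    }
}

fn main() {
    // Size-8 table: pretend key 5 was deleted from its home slot 5, while
    // key 13 (probing from the same home slot) still lives one probe further.
    let mut slots = vec![Slot::Empty; 8];
    slots[5] = Slot::Deleted;
    slots[6] = Slot::Full(13);
    assert_eq!(find(&slots, 5, 13), Some(6));   // lookup skips the tombstone
    assert_eq!(find_insert_slot(&slots, 5), 5); // insertion reuses slot 5
}
```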
Apologies for the confusing comment. I think the `Entry` API allows two implementations here. On return:

- `VacantEntry` already points to the actual bucket and `insert()` is trivial.
- `VacantEntry` caches the key and hash and `insert()` does the actual insert/grow dance.

In either case the optimal way to implement the find_or_insert semantics needed by `entry() -> insert()` is (a rough sketch of this flow follows the list):

- find probe: if found, return it (`insert()` won't be called)
- insert probe: if the bucket is deleted or if `growth_left > 0`, return it (this probe costs very little because we are touching at most the same cachelines we touched in the previous probe - a deleted bucket can only make us return earlier)
- rehash and do another insert probe to find where to insert it (the cost of an additional probe here is insignificant - we already did N probes to rehash the container and copied N key, value tuples)
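Here is that sketch: a toy linear-probing table, written from scratch for this note, that follows the three steps above. The names (`find_probe`, `insert_probe`, `rehash_grow`) and the identity hash are illustrative; only `growth_left` is taken from the discussion itself.

```rust
// Toy linear-probing table following the three-step find-or-insert flow.
#[derive(Clone, PartialEq)]
enum Slot {
    Empty,
    Deleted, // tombstone
    Full(u64),
}

struct Table {
    slots: Vec<Slot>,
    growth_left: usize,
}

impl Table {
    fn new(cap: usize) -> Table {
        Table { slots: vec![Slot::Empty; cap], growth_left: cap * 7 / 8 }
    }

    fn mask(&self) -> usize {
        self.slots.len() - 1
    }

    // Step 1: find probe -- locate an existing key, skipping tombstones.
    fn find_probe(&self, key: u64) -> Option<usize> {
        let mut i = (key as usize) & self.mask();
        loop {
            match self.slots[i] {
                Slot::Full(k) if k == key => return Some(i),
                Slot::Empty => return None,
                _ => i = (i + 1) & self.mask(),
            }
        }
    }

    // Steps 2/3: insert probe -- first empty or deleted slot.
    fn insert_probe(&self, key: u64) -> usize {
        let mut i = (key as usize) & self.mask();
        while let Slot::Full(_) = self.slots[i] {
            i = (i + 1) & self.mask();
        }
        i
    }

    // Rebuild into a table twice the size.
    fn rehash_grow(&mut self) {
        let mut bigger = Table::new(self.slots.len() * 2);
        for slot in &self.slots {
            if let Slot::Full(k) = *slot {
                let i = bigger.insert_probe(k);
                bigger.slots[i] = Slot::Full(k);
                bigger.growth_left -= 1;
            }
        }
        *self = bigger;
    }

    fn find_or_insert(&mut self, key: u64) -> usize {
        // 1. find probe: if present, insert() would never be called.
        if let Some(i) = self.find_probe(key) {
            return i;
        }
        // 2. insert probe: cheap (same cachelines as step 1); usable if we
        //    can reuse a tombstone or still have growth headroom.
        let i = self.insert_probe(key);
        if self.slots[i] == Slot::Deleted || self.growth_left > 0 {
            if self.slots[i] == Slot::Empty {
                self.growth_left -= 1;
            }
            self.slots[i] = Slot::Full(key);
            return i;
        }
        // 3. rehash, then one more insert probe (cheap next to the rehash).
        self.rehash_grow();
        let i = self.insert_probe(key);
        self.slots[i] = Slot::Full(key);
        self.growth_left -= 1;
        i
    }
}

fn main() {
    let mut t = Table::new(8);
    let a = t.find_or_insert(3);
    assert_eq!(t.find_or_insert(3), a); // second call hits the find probe
    for k in 0..20 {
        t.find_or_insert(k); // forces at least one rehash along the way
    }
    assert!(t.slots.len() > 8);
}
```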
// This may panic.
let hash = hasher(item.as_ref());

// We can use a simpler version of insert() here since there are no
The reason `find_insert_slot()` can be used is because 1) we know there is enough space for all entries and 2) we know all items are unique.
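A small sketch of why those two conditions are enough when rebuilding a table (toy linear-probing layout with an identity hash, written for illustration; not the PR's code):

```rust
// When rebuilding a table of known size from unique items, a plain "first
// free slot" probe is enough: no duplicate check and no grow check needed.
fn rehash_into(new_slots: &mut Vec<Option<u64>>, items: &[u64], hash: impl Fn(&u64) -> u64) {
    debug_assert!(new_slots.len().is_power_of_two());
    debug_assert!(items.len() <= new_slots.len()); // 1) enough space reserved
    let mask = new_slots.len() - 1;
    for &item in items {            // 2) items are unique, so no equality probe
        let mut i = (hash(&item) as usize) & mask;
        while new_slots[i].is_some() {
            i = (i + 1) & mask;     // linear probe to the first empty slot
        }
        new_slots[i] = Some(item);
    }
}

fn main() {
    let mut new_slots = vec![None; 8];
    rehash_into(&mut new_slots, &[1, 2, 3, 10], |&k| k);
    assert_eq!(new_slots.iter().filter(|s| s.is_some()).count(), 4);
}
```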
I opened a new PR (#58623) which makes libstd depend on hashbrown from crates.io.
I am back from leave and am resuming the review. This batch represents my review of RawTable, which contains a few serious correctness concerns.
#[inline]
pub unsafe fn drop(&self) {
    self.ptr.as_ptr().drop_in_place();
}
Unless you need some kind of interior mutability, you should make the mutating methods take `&mut self`, just so `&SomethingContainingABucket` can't accidentally do interior mutability things. It also makes it clear to the reader that there isn't anything weird going on here.
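A sketch of the suggested signature, on a simplified stand-in for `Bucket` (not the PR's actual type):

```rust
use std::ptr::NonNull;

// Simplified stand-in for the PR's Bucket type.
#[allow(dead_code)]
struct Bucket<T> {
    ptr: NonNull<T>,
}

#[allow(dead_code)]
impl<T> Bucket<T> {
    // Taking &mut self (rather than &self) makes it clear at the call site
    // that this mutates the pointed-to element, and prevents invoking it
    // through a shared reference to something containing a Bucket.
    #[inline]
    unsafe fn drop(&mut self) {
        self.ptr.as_ptr().drop_in_place();
    }
}

fn main() {}
```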
}

// Branch prediction hint. This is currently only available on nightly but it
// consistently improves performance by 10-15%.
reminder that this is a thing (status change?)
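For context, the nightly-only hint referred to in that comment is typically wrapped behind a helper along these lines (a from-scratch sketch; the `nightly` cargo feature name and the `lookup` example are assumptions, not the PR's exact code):

```rust
// On a nightly compiler (behind an assumed `nightly` cargo feature) the helper
// forwards to the unstable intrinsic; on stable it degrades to the identity.
#![cfg_attr(feature = "nightly", feature(core_intrinsics))]

#[cfg(feature = "nightly")]
#[inline]
fn likely(b: bool) -> bool {
    // On older nightlies the intrinsic is unsafe to call; the block is
    // harmless (at worst an unused_unsafe warning) where it is already safe.
    unsafe { core::intrinsics::likely(b) }
}

#[cfg(not(feature = "nightly"))]
#[inline]
fn likely(b: bool) -> bool {
    b
}

fn lookup(slots: &[Option<u64>], idx: usize, key: u64) -> bool {
    // Hint that the "hit on this probe" path is the common one.
    likely(slots[idx] == Some(key))
}

fn main() {
    let slots = vec![Some(7u64), None, Some(3)];
    assert!(lookup(&slots, 0, 7));
}
```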
I have addressed most of the feedback in this commit: Amanieu/hashbrown@5d1d4e9

Note that I'm not updating this PR any more; all future development towards integrating hashbrown into libstd is in these two PRs:
Since this has gotten pretty out-of-sync with the actual code we should probably drop this review, and I'll just do an "offline" review of map.rs (since I expect it to be pretty straightforward, especially since you moved most logic into the raw_table).

Also, I'm not sure how much handling `size_of::<(K, V)>() == 0` particularly gracefully matters. There's no way to feed in state for hashing to be anything other than random, right? Like, sure, we shouldn't do UB, but otherwise I think we can spin our wheels or maybe even just panic?
The handling of ZSTs should be fixed now (see my changes to

I'm going to close this PR; any further reviews should be on one of the two PRs linked above.
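For illustration, the zero-sized case under discussion is simply a map whose key and value types are both zero-sized (a usage example of the public API, exercising `size_of::<((), ())>() == 0`):

```rust
use std::collections::HashMap;
use std::mem::size_of;

fn main() {
    // (K, V) = ((), ()) is zero-sized: the table stores no element data,
    // only control metadata, so the implementation must handle an element
    // allocation of size zero without UB.
    assert_eq!(size_of::<((), ())>(), 0);

    let mut map: HashMap<(), ()> = HashMap::new();
    map.insert((), ());
    map.insert((), ()); // same key: still a single entry
    assert_eq!(map.len(), 1);
    assert!(map.contains_key(&()));
}
```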
Replace HashMap implementation with SwissTable (as an external crate)

This is the same as #56241 except that it imports `hashbrown` as an external crate instead of copying the implementation into libstd. This includes a few API changes (all unstable):

- `try_reserve` is added to `HashSet` (see the sketch below).
- Some trait bounds have been changed in the `raw_entry` API.
- `search_bucket` has been removed from the `raw_entry` API (it doesn't work with SwissTable).
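The `try_reserve` addition mentioned above can be exercised like this (it was unstable at the time of this PR; the snippet uses the API as later stabilized, and the `prepare` helper is just for illustration):

```rust
use std::collections::HashSet;
use std::collections::TryReserveError;

fn prepare(n: usize) -> Result<HashSet<u64>, TryReserveError> {
    let mut set = HashSet::new();
    // Unlike reserve(), try_reserve() reports allocation failure instead of
    // aborting the process, so callers can degrade gracefully.
    set.try_reserve(n)?;
    Ok(set)
}

fn main() {
    let set = prepare(1024).expect("allocation failed");
    assert!(set.capacity() >= 1024);
}
```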
The implementation is from the hashbrown crate.

This is mostly complete; however, it is missing 2 features:

- `try_reserve` -- DONE

cc @pietroalbini @gankro