Question about the hashtable.Map growThreshold #79
Yes, we can't compare them the way we would with, say, slices. This isn't really growth of the hash table itself, but growth in the number of buckets. Why is this necessary, given that items could keep being added to a chain indefinitely? Because in that case the search time complexity would degrade to O(n), while we want to support search and insertion in amortized O(1). This check detects that there are too many items in the hash table for the current number of buckets. When that happens, the number of buckets is increased and the items' hashes are recalculated, which maintains the O(1) time complexity. Also, during rehashing a new seed is selected and the items' hashes are recalculated, which means an attacker would have to do something nearly impossible to carry out a hash-collision attack.
I see, so this is a strategy to keep the time complexity low by increasing the number of buckets and decreasing the chain length when there are many nodes.
Here we define a `growThreshold`: if the total number of nodes in the map table exceeds this value, the map table needs to be resized. `growThreshold` is defined as `growThreshold := float64(tableLen) * bucketSize * loadFactor`. Here `tableLen` (`len(t.buckets)`) is the number of bucket chains (`t.buckets[i]` is a chain), not the total number of nodes, so we can't compare the chain count with the node count like this: `if t.sumSize() > int64(growThreshold)`.

Am I right?