Bigcache short description #122
Comments
LGTM, but pinging original authors to clarify this @druminski @janisz @mxplusb
LGTM 👍
LGTM 👍
I would add more information regarding collisions. I had to dig into the code to understand how collisions are handled, and I think I understand the trade-offs. I am not sure every application can ignore hash function collisions; probably most can. It is not hard to tolerate collisions (different keys, same hash) of the hash function. For example, if a call to Set() detects a collision, I can increment the hash of the key and try to insert again, retrying a few times before returning a failure from Set() or resizing the map. In Get() I run a similar probing process: compare the requested and stored keys until there is a match.
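A minimal sketch of the bounded linear-probing idea described above, assuming a standalone cache keyed by a 64-bit FNV hash. The names (`probingCache`, `maxProbes`, etc.) are made up for illustration and do not reflect bigcache's internals:

```go
package main

import (
	"errors"
	"fmt"
	"hash/fnv"
)

// Hypothetical sketch of bounded linear probing as suggested in the comment
// above; this is NOT how bigcache works (bigcache ignores collisions).

const maxProbes = 4 // how many alternative hashes to try before giving up

var (
	ErrFull     = errors.New("too many colliding hashes")
	ErrNotFound = errors.New("entry not found")
)

type entry struct {
	key   string
	value []byte
}

// probingCache stores the original key next to the value so Get can detect
// that two different keys ended up on the same (possibly incremented) hash.
type probingCache struct {
	entries map[uint64]entry
}

func newProbingCache() *probingCache {
	return &probingCache{entries: make(map[uint64]entry)}
}

func hashKey(key string) uint64 {
	h := fnv.New64a()
	h.Write([]byte(key))
	return h.Sum64()
}

// Set probes up to maxProbes consecutive hash values; a slot is reused only
// when it already holds the same key, otherwise the next hash is tried.
func (c *probingCache) Set(key string, value []byte) error {
	h := hashKey(key)
	for i := uint64(0); i < maxProbes; i++ {
		existing, ok := c.entries[h+i]
		if !ok || existing.key == key {
			c.entries[h+i] = entry{key: key, value: value}
			return nil
		}
	}
	return ErrFull // caller could resize or report the failure
}

// Get runs the same probe sequence and compares the stored key with the
// requested one, so a collision never returns another key's value.
func (c *probingCache) Get(key string) ([]byte, error) {
	h := hashKey(key)
	for i := uint64(0); i < maxProbes; i++ {
		existing, ok := c.entries[h+i]
		if ok && existing.key == key {
			return existing.value, nil
		}
	}
	return nil, ErrNotFound
}

func main() {
	c := newProbingCache()
	_ = c.Set("a", []byte("1"))
	v, _ := c.Get("a")
	fmt.Println(string(v))
}
```

Storing the original key alongside the value is what lets Get() reject a colliding entry, and the fixed probe bound keeps both operations O(1) at the cost of occasional Set() failures.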
@janisz ? :)
@larytet what is the question? I don't get it. We do not handle collisions at all, see: Line 605 in c00e2be
Your approach is valid and popular in hash map implementations (see wiki). Let's imagine we get the same hash for every entry (the worst case scenario); then our complexity is O(n) instead of O(1). Does it answer your question?
My comment was about the "Bigcache short description" (the README?). I suggested mentioning that the implementation ignores hash collisions. I find the lack of any collision handling surprising. It is not hard to provide a mechanism that significantly reduces the probability of collisions, and the complexity remains O(1) if the linear probing is limited to a small number of slots. In my naivety I expected Set() to fail if a different key with the same hash is already stored in the cache. This is not the case. For some applications this will be a showstopper.
We are working on an article about the current state of concurrent cache implementations in Go. We are planning to include bigcache with the following short description. Please let me know if my understanding is correct or if you would like to modify it.