dict viewkeys intersection slow for large dicts #71762
Comments
In `dictobject.c`, the function `dictviews_and` performs a very expensive set-creation operation on the keys of the `self` dict object. From Python 2.7:

```c
static PyObject*
dictviews_and(PyObject* self, PyObject *other)
{
    PyObject *result = PySet_New(self);
    ...
    tmp = PyObject_CallMethod(result, "intersection_update", "(O)", other);
    ...
```

Similar logic can be found in the Python 3 code as well.

For large dicts (~10,000,000 keys), it takes a significant amount of time to copy the keys into a new set, and it also wastes a lot of memory. The problem is amplified when the intersection operation is performed many times. This implementation uses O(len(self)) time and space, while it is expected to use O(min(len(self), len(other))) time and space.

Solution 1: Manually copy the keys of the dict into a set, and perform all intersection operations on that set. This still wastes a lot of memory, however.

Solution 2 (recommended): Port the intersection logic from the set class to the dict view class. |
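In Python terms, the C code above does roughly the following (a simplified sketch of the observable behavior, not the actual CPython logic):

```python
def dictviews_and(self_dict, other):
    # PySet_New(self): copies every key into a new set, which costs
    # O(len(self_dict)) time and space no matter how small `other` is.
    result = set(self_dict)
    # intersection_update() then discards everything not in `other`.
    result.intersection_update(other)
    return result

d = dict.fromkeys(range(100), 0)
print(dictviews_and(d, {0, 5, 200}))  # {0, 5}
```

The expense the report describes is the `set(self_dict)` step: it is paid in full even when `other` has a single element.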
FWIW, sets and dicts convert back and forth very efficiently (the hash values are reused and the collection is presized with just a single memory allocation). Also, sets and dicts share pointers to the underlying data (which is often much bigger than the container that refers to it). And the use case of having very large dicts with repeated intersection operations isn't common enough to warrant any significant code expansion (your solution 2).

I don't understand your solution 1. `PySet_New(self)` does copy references to the keys from the dict to a set, and `intersection_update()` does perform operations on that set.

Lastly, since there was no clear example given, there isn't an opportunity to look at it and see whether the code would have been better designed to simply use sets for the set operations and keep the dict around for the key-to-value transformations (in other words, the current toolset may already afford some way to accomplish the goal). |
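The pattern suggested in the last paragraph can be sketched in Python as follows (an illustrative pattern, not code from the thread): convert the dict to a set once, do all set algebra on the set, and consult the dict only for key-to-value lookups:

```python
d = dict.fromkeys(range(1_000_000), 0)

keys = set(d)  # one-time O(len(d)) conversion; hash values are reused

# Repeated set operations now run at native set speed,
# O(min(len(keys), len(probe))) each.
for probe in ({0, 1}, {2, 999_999}, {-1}):
    hits = keys & probe
    # The dict is only consulted for the key-to-value transformation.
    values = {k: d[k] for k in hits}
    print(sorted(hits), values)
```

The trade-off is the extra memory for the cached set, which is exactly the objection raised in the original report.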
Here are some tests that I ran.

Using the current naive viewkeys intersection:

```
$ python -m timeit -n 100 -s "d = dict.fromkeys(xrange(10000000), 0)" "d.viewkeys() & {0}"
100 loops, best of 3: 447 msec per loop
```

Nearly half a second per iteration is unacceptably slow.

Solution 1 from my previous comment:

```
$ python -m timeit -n 100 -s "d = dict.fromkeys(xrange(10000000), 0); s = set(d)" "s & {0}"
100 loops, best of 3: 0.391 usec per loop
```

This caches the keys of the dict in a set, then performs all intersections on that set.

The workaround that I ended up using:

```
$ python -m timeit -n 100 -s "d = dict.fromkeys(xrange(10000000), 0)" "{item for item in {0} if item in d}"
100 loops, best of 3: 0.629 usec per loop
```

This looks up each value in the set to see whether it is also in the dict, using a set comprehension. Of course, the set comprehension should iterate over the smaller of the dict/set to satisfy the O(min(len(d), len(s))) bound specified in https://wiki.python.org/moin/TimeComplexity:

```python
def efficient_intersection(smaller_dict_or_set, bigger_dict_or_set):
    if len(bigger_dict_or_set) < len(smaller_dict_or_set):
        bigger_dict_or_set, smaller_dict_or_set = smaller_dict_or_set, bigger_dict_or_set
    return {item for item in smaller_dict_or_set if item in bigger_dict_or_set}
```

In conclusion, porting the set intersection logic over to viewkeys would be the ideal solution. It avoids wasting memory like the set-cache solution, and I am sure a C implementation would be much faster than my workaround of manually performing an intersection with a set comprehension. My view is that `dict.viewkeys() & set(...)` should be as performant as the syntax suggests. |
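For reference, the same comparison can be scripted in Python 3 (where `viewkeys()` became `keys()`). Note that on interpreters that include the fix this thread eventually produced, the first timing may no longer be slow; the absolute numbers will differ by machine. A sketch:

```python
import timeit

setup = "d = dict.fromkeys(range(1_000_000), 0); s = set(d)"

naive = timeit.timeit("d.keys() & {0}", setup=setup, number=100)
cached = timeit.timeit("s & {0}", setup=setup, number=100)
workaround = timeit.timeit("{item for item in {0} if item in d}",
                           setup=setup, number=100)

print(f"keys() & {{0}}     : {naive:.6f}s")
print(f"cached set & {{0}} : {cached:.6f}s")
print(f"set comprehension : {workaround:.6f}s")
```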
Did you encounter an actual real-world use case, or is this just a theoretical issue? ISTM the performance is already good enough for most practical applications. |
I am trying to implement the following algorithm: http://blog.faroo.com/2012/06/07/improved-edit-distance-based-spelling-correction/

This algorithm is used for fast nearest-neighbor queries for spell correction. Given a corpus of strings, for each string s I generate all subsequences (note: subsequences, not substrings) of s of length len(s) - k, calling each subsequence t, where k is a tunable parameter. Then I create a dict mapping the subsequences t to the original string s. This process creates a very large dict from a relatively small corpus. For example, my corpus of ~500k strings blew up to a dict of ~10M keys with k=3.

Then, for a query string q, you generate all subsequences of length len(q) - k and perform an intersection with the keys of the dict. The values of the dict corresponding to the key intersection are possible misspellings of the query string q. In pseudo-Python:

```python
def get_subseq(s: str, k: int):
    # returns all subsequences of s of length len(s) - k
    ...

def build_dict(corpus: List[str], k: int):
    d = defaultdict(list)
    for orig_str in corpus:
        for subseq in get_subseq(orig_str, k):
            d[subseq].append(orig_str)
    return d

def query(d: dict, q: str, k: int):
    q_subseq = set(get_subseq(q, k))
    intersection = d.viewkeys() & q_subseq  # this is slow when len(d) is large
    return [orig_str for key in intersection for orig_str in d[key]]
```

By exposing an intersection interface, a certain degree of performance is expected by the user. It took me a considerable amount of debugging to realize that the dict.viewkeys() intersection was the performance bottleneck, and I finally had to dig into the CPython source code to discover the root cause. While I am familiar with C and found the issue relatively quickly, for those who aren't, it should not be necessary to dig into the source code of a mainstream programming language like Python.

I am currently looking at how to fix this in the CPython repository. I may submit a patch in the near future if I find a good solution. |
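To make the pseudo-code above concrete, here is a runnable sketch that substitutes the set-comprehension workaround for the slow `viewkeys()` intersection. The `get_subseq` implementation via `itertools.combinations` is my own filling-in of the elided function, not the original code:

```python
from collections import defaultdict
from itertools import combinations
from typing import Dict, List

def get_subseq(s: str, k: int) -> set:
    # All subsequences of s of length len(s) - k (subsequences, not substrings).
    n = len(s)
    if n - k <= 0:
        return {s}
    return {"".join(c) for c in combinations(s, n - k)}

def build_dict(corpus: List[str], k: int) -> Dict[str, List[str]]:
    d = defaultdict(list)
    for orig_str in corpus:
        for subseq in get_subseq(orig_str, k):
            d[subseq].append(orig_str)
    return d

def query(d: Dict[str, List[str]], q: str, k: int) -> List[str]:
    q_subseq = get_subseq(q, k)
    # Workaround: iterate the small query set, probe the large dict,
    # giving O(min(len(d), len(q_subseq))) instead of O(len(d)).
    hits = {t for t in q_subseq if t in d}
    return [orig_str for t in hits for orig_str in d[t]]

index = build_dict(["hello", "helps", "world"], k=1)
print(query(index, "hxllo", 1))  # ['hello']
```

"hxllo" shares the length-4 subsequence "hllo" with "hello", so the misspelled query still finds its neighbor.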
Attached is a patch that I had been working on. Could you please review and provide me with some feedback? Thanks! |
ping |
Is there anything helpful I can do to help close this issue? Maybe convert it into a GitHub PR?

Since Raymond asked for cases where this issue is a problem, I'll add the case that brought me here. I have an application where I need to do thousands of intersections of multisets. I started with the `collections.Counter` object, but its current intersection method is too slow. Since `Counter` is a subclass of `dict`, I thought I could significantly speed it up by taking advantage of key intersections. I've been able to verify that if key intersection were roughly similar in speed to set intersection, that would be very helpful. However, the current speed of key intersection does not make that practicable. I can, of course, cast the keys to sets before intersecting, but as David points out, that cast is what takes significant time. Slow dictionary intersection for large dicts is becoming a problem for me. |
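The multiset use case can be sketched like this (an illustrative reconstruction, not the commenter's actual code): iterate the smaller Counter's keys, probe the larger one, and take element-wise minima, mirroring the O(min(len(a), len(b))) set-intersection strategy:

```python
from collections import Counter

def fast_counter_and(a: Counter, b: Counter) -> Counter:
    # Swap so we iterate the smaller key set and probe the larger one.
    if len(b) < len(a):
        a, b = b, a
    # Element-wise minimum over the shared keys, like Counter.__and__.
    return Counter({k: min(a[k], b[k]) for k in a if k in b})

x = Counter("aabbbcc")
y = Counter("abbd")
print(fast_counter_and(x, y))  # Counter({'b': 2, 'a': 1})
assert fast_counter_and(x, y) == (x & y)
```

The point of the comment is that `Counter.__and__` itself could be this cheap if the underlying keys intersection were fast.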
Yes, that would be nice. Also, please verify that the test cases cover all the code paths. |
Hi Raymond, I've created a PR here: #7696, and I reached out to Daniel Hsu and he has given his blessing to help push this forward. I think this is ready for your review. Please let me know if there's anything else I can do. Thanks, Forest |
Thanks David and Forest. |
I am puzzled: why do we still call `set.intersection_update` with an empty set? |