set(range(100000)).difference(set()) is slow #52931
set.difference(s), when s is also a set, basically does:

    res = set()
    for elem in self:
        if elem not in other:
            res.add(elem)

This is wasteful when len(self) is much greater than len(other):

    $ python -m timeit -s "s = set(range(100000)); sd = s.difference; empty = set()" "sd(empty)"
    100 loops, best of 3: 12.8 msec per loop
    $ python -m timeit -s "s = set(range(10)); sd = s.difference; empty = set()" "sd(empty)"
    1000000 loops, best of 3: 1.18 usec per loop

Here's a patch that compares the lengths of self and other before that loop and, if len(self) is greater, swaps them. The new timeit results are:

    $ python -m timeit -s "s = set(range(100000)); sd = s.difference; empty = set()" "sd(empty)"
    1000000 loops, best of 3: 0.289 usec per loop
    $ python -m timeit -s "s = set(range(10)); sd = s.difference; empty = set()" "sd(empty)"
    1000000 loops, best of 3: 0.294 usec per loop |
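A minimal pure-Python rendering of the loop above plus the first-attempt swap may help (the function names are hypothetical; the real change is in the C code of Objects/setobject.c, and, as the next comment points out, the bare swap is incorrect because difference is not symmetric):

```python
def naive_difference(self, other):
    # Model of the current C loop: len(self) membership probes in other.
    res = set()
    for elem in self:
        if elem not in other:
            res.add(elem)
    return res

def swapped_difference(self, other):
    # First-attempt patch: iterate over the smaller operand so the loop
    # runs fewer times when self is huge and other is tiny...
    if len(self) > len(other):
        self, other = other, self
    # ...but this computes other - self, not self - other (the bug
    # acknowledged in the next comment).
    return naive_difference(self, other)
```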
Oops, obvious bug in this patch: set('abc') - set('bcd') != set('bcd') - set('abc'). I'll see if I can make a more sensible improvement. See also http://bugs.python.org/issue8425. Thanks dickinsm on #python-dev. |
Ok, this time test_set* passes :)

Currently, if you have a large set and a small set, the code does len(large) lookups in the small set. When large is much bigger than small, it is cheaper to copy large and do len(small) lookups in large. On my laptop a size difference of 4 times is a clear winner for copy+difference_update over the status quo, even for sets of millions of entries. For more similarly sized sets (even only a factor-of-2 size difference), the cost of allocating a large set that is likely to be shrunk significantly outweighs the benefit. So my patch only switches behaviour when len(x)/4 > len(y); see the sketch below.

This patch is complementary to the patch in bpo-8425, I think. |
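In pure-Python terms, the strategy this second patch describes looks roughly like the following (illustrative only; the actual change is inside the C implementation of set.difference):

```python
def difference(self, other):
    if len(self) > 4 * len(other):
        # self is much larger: copy it in one correctly pre-sized
        # allocation, then do only len(other) lookups/deletions.
        res = set(self)
        res.difference_update(other)
        return res
    # Otherwise keep the status quo: len(self) lookups in other.
    res = set()
    for elem in self:
        if elem not in other:
            res.add(elem)
    return res
```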
I have two problems with this proposal:
Programs that suffer from poor large_set.difference(small_set) performance can be rewritten as large_set_copy = large_set.copy(); large_set_copy.difference_update(small_set), or even simply as large_set.difference_update(small_set) if program logic allows it. |
Regarding memory, good question... but this patch turns out to be an improvement there too.

This optimisation only applies when len(x) > len(y) * 4, so the minimum size of the result is a set with 3/4 of the elements of x (and it may well be a full copy of x anyway). So, if you like, this optimisation is simply taking advantage of the fact that we're going to be copying almost all of these elements anyway. We could make it less aggressive, but large sets are tuned to be between 1/2 and 1/3 empty internally anyway, so 1/4 overhead seems reasonable.

Also, because this code immediately makes the result set about the right size, rather than growing it one element at a time, memory consumption is actually *better*. I'll attach a script that demonstrates this; for me it shows that large_set.difference(small_set) [where large_set has 4M elems and small_set has 100] peaks at 50MB memory consumption without my patch, but only 18MB with it (after discounting the memory required for large_set itself, etc.). |
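The attached script itself isn't reproduced here, but a modern equivalent of that measurement (using tracemalloc, which postdates this thread) might look like:

```python
import tracemalloc

large_set = set(range(4_000_000))
small_set = set(range(100))

# Start tracing only after large_set is built, so its own storage
# is excluded from the measurement, as in the comment above.
tracemalloc.start()
result = large_set.difference(small_set)
_, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

print(f"peak allocation during difference: {peak / 1e6:.1f} MB")
```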
It's a space/time tradeoff. There's nothing wrong about that.
So what? It's just a matter of choosing reasonable settings. There are other optimization heuristics in the interpreter. The optimization here looks ok to me. |
The current patch gives much smaller benefits than the originally posted benchmarks, although they are still substantial:

    $ ./python -m timeit -s "a = set(range(100000)); sd = a.difference; b = set(range(1000))" "sd(b)"

- before: 5.56 msec per loop
- after: 3.18 msec per loop

    $ ./python -m timeit -s "a = set(range(1000000)); sd = a.difference; b = set(range(10))" "sd(b)"

- before: 67.9 msec per loop
- after: 41.8 msec per loop |
Antoine: Thanks for the updated benchmark results! I should have done that myself earlier. |
Will look at this when I get back to the U.S. |
On 2010-05-17 rhettinger wrote:

> Will look at this when I get back to the U.S.

Ping! This patch (set-difference-speedup-2.diff) has been sitting around for a fair few weeks now. It's a small patch, so it should be relatively easy to review. It makes a significant improvement to speed and memory in one case (which we have encountered and worked around in bzr), and has no significant downside in any other case. Thanks :) |
Andrew, This issue is somewhat similar to bpo-8425. I may be reading too much into the "priority" field, but it looks like Raymond would like to review bpo-8425 first. You can help by commenting on how the two issues relate to each other. I believe the two are complementary, but I did not attempt to apply both patches. (The patch still applies with little fuzz.) |
I'll be looking at it shortly. Py3.2 is still a ways from release, so there is no hurry. |
Alexander: yes, they are complementary. My patch improves set.difference, which always creates a new set. bpo-8425 on the other hand improves in-place difference (via the -= operator or set.difference_update). Looking at the two patches, my patch will not improve in-place difference, and bpo-8425's patch will not improve set.difference. So they are complementary. |
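For anyone skimming the thread, the distinction between the two operations (and hence the two patches) is just:

```python
a = set(range(10))
b = {3, 4, 5}

c = a.difference(b)     # same as a - b: builds and returns a new set
a.difference_update(b)  # same as a -= b: removes b's members from a in place
```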
I would consider reviewing and possibly applying this change, but I don't want to invade anyone's territory. |
I don't think there would be any invasion. I think the patch is simple enough, and seems to provide a nice benefit. |
Please leave this for me. |
Raymond, unless you object, I'd like to commit this before beta1. |
Thx. |
Modified patch committed in r86905. Thanks! |