Make statistics.median run in linear time #65791
The statistics module currently contains the following comment: "FIXME: investigate ways to calculate medians without sorting? Quickselect?" This matters because users expect standard library functions to use state-of-the-art implementations, and median-by-sorting has never been that.

There are many good linear-time alternatives. The classical median-of-k algorithm was posted to the mailing list in a nice version by David Eppstein in 2002 [1]. The fastest method in practice is probably quickselect in the Floyd-Rivest version [2], which is similar to quicksort. These algorithms also let us select any k-th order statistic, not just the median, which seems conceptually a lot simpler than the current median/median_low/median_high split.

However, sticking with the current API, a new implementation would have to support calculating the median as the average of the n//2'th and (n+1)//2'th order statistics. This could be implemented either by calling the select function twice, or by implementing a multi-select algorithm, which is also a well-studied problem [3].

I'll be happy to contribute code, or to help out in any other way.
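For reference, the two-call approach described above can be sketched in pure Python; `quickselect` and `median_via_select` are illustrative names of mine, not code from any linked post or patch:

```python
import random

def quickselect(data, k):
    """Return the k-th smallest element (0-based) of data in expected O(n) time."""
    data = list(data)  # work on a copy
    lo, hi = 0, len(data) - 1
    while True:
        if lo == hi:
            return data[lo]
        # a random pivot keeps the *expected* running time linear
        p = random.randint(lo, hi)
        data[p], data[hi] = data[hi], data[p]
        pivot, i = data[hi], lo
        for j in range(lo, hi):
            if data[j] < pivot:
                data[i], data[j] = data[j], data[i]
                i += 1
        data[i], data[hi] = data[hi], data[i]
        if k == i:
            return data[i]
        elif k < i:
            hi = i - 1
        else:
            lo = i + 1

def median_via_select(data):
    """Median as the average of the two middle order statistics."""
    n = len(data)
    if n % 2:
        return quickselect(data, n // 2)
    return (quickselect(data, n // 2 - 1) + quickselect(data, n // 2)) / 2
```

For even n this makes two independent select passes; a multi-select would share the partitioning work between them.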
I have written some proof-of-concept code here [1]; I would appreciate comments on it before I turn it into a patch, as I haven't contributed code to Python before. I have tried to write it as efficiently as possible, but it is of course possible that the C-implemented …
On Wed, May 28, 2014 at 02:43:29PM +0000, Thomas Dybdahl Ahle wrote:
Thanks Thomas! I will try to look at this over the weekend (today is …
My quick-and-dirty tests suggest that you need at least 10 million items …
Yeah, I remember I tried to improve the performance of median in the past using a median-of-k algorithm, but unfortunately I could not beat the efficiency of the C-implemented sort. Maybe you'll have better luck!
If you have a good, realistic test set, we can try testing quick-select vs. sorting. If it's still not good, I can also reimplement it in C.
To accept contributions, we need a signed contribution form (it may be signed electronically).
I suggest this needs a measurable goal, like "minimize expected-case time" or "minimize worst-case time". "Use a state of the art implementation" isn't a measurable goal - it's an abstract wish with no measurable merit on its own ;-)

Note that I wrote the "median-of-k" code referenced in the first post here (it was in reply to David Eppstein). Even then it was hard to beat sort() + index. It's harder now, and for an important reason: Python's _current_ sort can exploit many kinds of partial order, and can often complete in linear time.

This makes picking "typical" input for timing crucial. If you want to skew it to put sort() + index at maximum disadvantage, use only shuffled (random order) pairwise-unequal inputs. But most streams of data do have some kinds of partial order (which is why the newer sort() implementation has been so successful), and "typical" is a much tougher thing to capture than "shuffled".
I think "minimize expected-case time" is a good goal. If we wanted "minimize worst-case time" we would have to use median-of-medians rather than quickselect. My trials on random data, where sort arguably has a disadvantage, suggest sorting is about twice as fast for most input sizes. With PyPy, quick-select is easily 5-10 times faster, which I take as a suggestion that a C implementation might be worth a try. For designing a realistic test suite, I suppose we need to look at what tasks medians are commonly used for. I'm thinking median filters from image processing, k-medians clustering, robust regression - anything else?
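A harness for such comparisons might look like the sketch below. The dataset shapes (shuffled, sorted, few-unique) are my own examples of the partial order Tim mentions, not the test suite attached later in this thread:

```python
import random
import time

def median_by_sort(data):
    """The stdlib-style approach: sort in place, then index the middle."""
    data.sort()
    n = len(data)
    return (data[(n - 1) // 2] + data[n // 2]) / 2

def time_once(fn, data):
    """Time a single call of fn on a fresh copy of data."""
    work = list(data)
    t0 = time.perf_counter()
    fn(work)
    return time.perf_counter() - t0

datasets = {
    "shuffled": random.sample(range(100_000), 100_000),
    "sorted": list(range(100_000)),            # timsort finishes in linear time
    "few_unique": [random.randrange(10) for _ in range(100_000)],
}

for name, data in datasets.items():
    print(f"{name:>10}: {time_once(median_by_sort, data):.4f}s")
```

The point of the non-shuffled datasets is that timsort detects existing runs, so "shuffled only" benchmarks overstate the advantage of selection algorithms.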
I would encourage you to consult the devguide, prepare a patch, and upload it here so that we can use Rietveld to review it and add inline comments.
I've run some performance tests on six variations of the O(N) select algorithm, based on Tim Peters' and Thomas Ahle's code, comparing them to the naive O(N log N) "sort first" algorithm, and sorting is consistently faster up to the limit I tested. About the tests I ran:
My test suite is attached if anyone wants to critique it or run it themselves.

== Single call mode ==

My conclusions from single calls:
- Thomas' select() and Tim's select7() as pure Python functions are too slow for serious contention. [Aside: I wonder how PyPy would go with them?]
- There's not much difference in performance between the various median-of-median-of-k functions for larger k, but it seems to me that overall k=47 is marginally faster than either k=23 or k=97.
- Overall, sorting is as good as or better than (and usually *much better* than) any of the pure-Python functions for the values of N tested, at least on my computer. C versions may be worth testing, but I'm afraid that is beyond me.
- Thomas' select2 using dual pivots seems like the most promising. There are a couple of anomalous results where select2 unexpectedly (to me!) does much, much better than sorting, e.g. for N=9 million. Pure chance perhaps?

The overall trend seems to me to suggest that a pure-Python version of select2 may become reliably faster than sorting from N=10 million or so, at least with random data on my computer. YMMV, and I would expect that with non-random, partially sorted data, the results may be considerably different.

== Average of three calls mode ==

In this set of tests, each function is called three times on the same set of data. As expected, once the list is sorted on the first call, sorting it again on the second call is very fast, and so the "sort" column is quite similar to the previous set of tests.

What I didn't expect is just how badly the various other selection functions cope with being called three times on the same list with different ranks. The extreme case is Thomas' select() function: the total time to call it three times on a list of 11 million items is 342 seconds (3*114), compared to 10 seconds to call it once. I expected that having partially ordered the data on the first call, the second and third calls would take less time rather than more. Was I ever wrong. Unless my analysis is wrong, something bad is happening here, and I don't know what it is.

[Aside: this suggests that, unlike sort(), which can take advantage of partially ordered data to be more efficient, the other selection functions are hurt by partially ordered data. Is this analogous to simple versions of quicksort which degrade to O(N**2) if the data is already sorted?]

What is abundantly clear is that if you want to make more than one selection from a list, you ought to sort it first. Given these results, I do not believe that a pure-Python implementation of any of these selection algorithms can be justified on performance grounds for CPython.

Thanks to Tim Peters and Thomas Ahle for their valuable assistance in writing the selection functions in the first place.
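The sort-first advice for multiple selections can be illustrated with a short sketch (mine, not part of the attached test suite): one O(n log n) sort up front turns every further rank lookup into O(1) indexing, whereas a selection algorithm re-scans the list per rank.

```python
def order_statistics(data, ranks):
    """Fetch several order statistics with a single sort.

    After the one-time sort, each additional rank costs only an index
    lookup, instead of another O(n) selection pass over the data.
    """
    s = sorted(data)
    return [s[r] for r in ranks]

# e.g. minimum, median, and maximum in one pass over the sorted copy
print(order_statistics([9, 1, 5, 3, 7], [0, 2, 4]))  # [1, 5, 9]
```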
I ran the script (modified very slightly for Python 2 support) under PyPy 2.3:

$ pypy select.py
== Single call mode ==
N sort select7 select23 select47 select97 select select2
-------- -------- -------- -------- -------- -------- -------- --------
5000 0.000 0.010 0.000 0.000 0.000 0.003 0.003
10000 0.000 0.001 0.001 0.001 0.001 0.000 0.000
50000 0.002 0.007 0.004 0.002 0.002 0.000 0.000
100000 0.004 0.010 0.004 0.004 0.005 0.000 0.001
500000 0.026 0.030 0.019 0.020 0.024 0.004 0.004
1000000 0.057 0.052 0.037 0.039 0.044 0.007 0.004
2000000 0.113 0.092 0.069 0.078 0.087 0.017 0.014
3000000 0.176 0.135 0.109 0.119 0.136 0.024 0.013
4000000 0.243 0.180 0.137 0.162 0.177 0.024 0.022
5000000 0.298 0.225 0.176 0.196 0.221 0.035 0.024
6000000 0.373 0.266 0.207 0.236 0.266 0.051 0.038
7000000 0.439 0.313 0.248 0.277 0.311 0.054 0.038
8000000 0.506 0.360 0.282 0.317 0.356 0.039 0.039
9000000 0.566 0.404 0.315 0.352 0.406 0.055 0.068
10000000 0.626 0.445 0.349 0.395 0.444 0.065 0.046
11000000 0.697 0.492 0.387 0.439 0.490 0.059 0.086
Total elapsed time: 0.96 minutes
$ pypy select.py
== Average of three calls mode ==
N sort select7 select23 select47 select97 select select2
-------- -------- -------- -------- -------- -------- -------- --------
5000 0.000 0.010 0.001 0.001 0.004 0.003 0.002
10000 0.000 0.005 0.001 0.001 0.002 0.000 0.000
50000 0.002 0.014 0.006 0.006 0.008 0.002 0.001
100000 0.005 0.018 0.012 0.011 0.016 0.002 0.001
500000 0.026 0.071 0.051 0.060 0.076 0.019 0.007
1000000 0.055 0.135 0.102 0.124 0.148 0.046 0.013
2000000 0.115 0.257 0.208 0.244 0.291 0.092 0.027
3000000 0.181 0.396 0.301 0.347 0.383 0.097 0.044
4000000 0.243 0.530 0.417 0.485 0.559 0.127 0.050
5000000 0.312 0.656 0.522 0.570 0.688 0.172 0.072
6000000 0.377 0.789 0.610 0.688 0.772 0.215 0.072
7000000 0.448 0.927 0.715 0.850 0.978 0.315 0.087
8000000 0.510 1.049 0.812 0.967 1.193 0.403 0.108
9000000 0.575 1.191 0.929 1.088 1.241 0.462 0.107
10000000 0.641 1.298 1.070 1.217 1.310 0.470 0.128
11000000 0.716 1.464 1.121 1.343 1.517 0.401 0.147
Total elapsed time: 2.21 minutes
I tried the same with a Cython-compiled version of select.py on the latest CPython 3.5 build. It pretty clearly shows that select2 is pretty much always faster than sorting, by a factor of 2-5x or so. I'll also attach the annotated source file that Cython generates.

*** CPython 3.5 (de01f7c37b53)
== Single call mode ==

*** Cython / CPython 3.5
== Single call mode ==
Here's also the pathological "average of three calls" case. As Steven suggests, it shows that select() suffers quite heavily (algorithmically), but select2() also suffers enough to jump up to about 2/3 of the runtime of sorting (so it's still 1/3 faster even here). This suggests select2 is much faster on random data (run 1) and about as fast as sorting on already-sorted data (runs 2+3). Not unexpected, given the algorithmic characteristics of timsort.

== Average of three calls mode ==
Updating the type declaration file to remove the dependency on the list builtin and allow arbitrary containers. The test code has this dependency (calls a.sort()), but the current statistics module in the stdlib does not (calls sorted()). Timings do not change, at least not more than you would expect by randomisation (i.e. repeated runs go up and down within the same bounds). Note that the timings are still specific to lists and would be higher for other containers.
In the case of the median, you can achieve similar performance to a multiselect by simply calling min() on the upper partition to get the second order statistic, which you need for the averaging with an even number of elements (e.g. min(data[len(data) // 2:]) after an in-place select of the lower median). Maybe an interesting datapoint would be to compare with NumPy's selection algorithm, which is an intro-multiselect (implemented in C for native datatypes).
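A sketch of that minimum trick, assuming any in-place select that leaves everything right of the selected index greater than or equal to it (the quickselect below is my stand-in for such a routine, not NumPy's):

```python
import random

def select_inplace(data, k):
    """Partition data in place so data[k] is the k-th smallest and
    every element to its right is >= data[k]."""
    lo, hi = 0, len(data) - 1
    while lo < hi:
        p = random.randint(lo, hi)
        data[p], data[hi] = data[hi], data[p]
        pivot, i = data[hi], lo
        for j in range(lo, hi):
            if data[j] < pivot:
                data[i], data[j] = data[j], data[i]
                i += 1
        data[i], data[hi] = data[hi], data[i]
        if k < i:
            hi = i - 1
        elif k > i:
            lo = i + 1
        else:
            break
    return data[k]

def median(data):
    data = list(data)
    n = len(data)
    lower = select_inplace(data, (n - 1) // 2)
    if n % 2:
        return lower
    # Everything right of the lower median is already >= it, so a single
    # min() over that slice yields the upper median -- no second select pass.
    return (lower + min(data[n // 2:])) / 2
```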
I don't know if it's worth the overhead to implement a multiselect, given that we only expose a median function. I've rewritten select2 to be "intro", simply falling back on sorting; this doesn't appear to degrade the performance. I also added np.median to the test suite, and it is indeed pretty snappy, though not more so than select2 under PyPy. There is a discussion at numpy/numpy#1811.

== Single call mode ==
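The "intro" fallback can be as simple as capping the number of partitioning rounds and sorting the remaining window when the budget runs out, which bounds the worst case at O(n log n). This is my sketch of the idea, not the actual select2 code:

```python
import random

def introselect(data, k, max_rounds=None):
    """k-th smallest (0-based); falls back to sorting the remaining
    window if partitioning takes too many rounds, guarding against
    the quadratic worst case of plain quickselect."""
    data = list(data)
    lo, hi = 0, len(data) - 1
    # budget of partitioning rounds before giving up and sorting
    rounds = max_rounds if max_rounds is not None else 2 * len(data).bit_length()
    while lo < hi:
        if rounds <= 0:
            # fallback: timsort handles the (partially ordered) window well
            data[lo:hi + 1] = sorted(data[lo:hi + 1])
            break
        rounds -= 1
        p = random.randint(lo, hi)
        data[p], data[hi] = data[hi], data[p]
        pivot, i = data[hi], lo
        for j in range(lo, hi):
            if data[j] < pivot:
                data[i], data[j] = data[j], data[i]
                i += 1
        data[i], data[hi] = data[hi], data[i]
        if k < i:
            hi = i - 1
        elif k > i:
            lo = i + 1
        else:
            break
    return data[k]
```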
For the median alone a multiselect is probably overkill (that's why I mentioned the minimum trick), but a selection algorithm is useful on its own for all of Python, and then a multiselect should be considered. Also, just to save NumPy's honor: in your script you are benchmarking the Python list -> NumPy array conversion, not the actual selection. The conversion is significantly slower than the selection itself. Also, select2b is in-place while np.partition is out-of-place. Repeated in-place selection typically gets faster as the number of required swaps goes down, and can even take constant time if the requested value does not change.
On Sat, Jun 07, 2014 at 01:02:52PM +0000, Julian Taylor wrote:
I like the idea of a select and/or multiselect for 3.5. As a …
This is my first attempt to contribute. I have used median of medians, with n/5 groups of 5 elements each. It has linear worst-case time complexity, but I am still not sure about it, because the constants may be high. Could someone please benchmark it against the other methods, or check whether it is a viable way to find the median more efficiently? I would like it to be checked before I attempt to convert it into a patch.
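For comparison, a compact pure-Python median-of-medians selection with groups of 5 (guaranteed O(n) worst case) might look like this; it is my sketch, not the submitted code:

```python
def mom_select(data, k):
    """k-th smallest element (0-based) in guaranteed O(n) worst-case time."""
    data = list(data)
    while True:
        if len(data) <= 5:
            return sorted(data)[k]
        # median of each group of 5, then recurse to pick a good pivot
        medians = [sorted(group)[len(group) // 2]
                   for group in (data[i:i + 5] for i in range(0, len(data), 5))]
        pivot = mom_select(medians, len(medians) // 2)
        # three-way partition around the pivot
        lows = [x for x in data if x < pivot]
        highs = [x for x in data if x > pivot]
        n_pivots = len(data) - len(lows) - len(highs)
        if k < len(lows):
            data = lows
        elif k < len(lows) + n_pivots:
            return pivot
        else:
            k -= len(lows) + n_pivots
            data = highs
```

The pivot is guaranteed to be larger than roughly 30% of the elements and smaller than another 30%, which is what bounds the worst case, but the per-element constants (group sorts plus a recursive call) are exactly the concern raised above.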
This method can also be used to implement multiselect functionality, with slight changes to the nth-order function.
The median-of-medians-of-5 is quite significantly slower than a quickselect.
Everyone here should heed Tim's comments. The statistics module already has a history of suboptimal decisions made in the service of theoretical perfection (e.g. mean(seq) is over 100x slower than fsum(seq)/len(seq)). While variants of quick-select have a nice O(n) theoretical time, the variability is very high and the worst cases are really bad. The existing sort() is unbelievably fast, has a reasonable worst case, exploits existing order to great advantage, has nice cache performance, and has become faster still with the recently added type-specialized comparisons. This sets a very high bar for any proposed patches.
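The aside about mean() refers to statistics.mean doing exact rational arithmetic for correctness, while the fast alternative mentioned is just C-level float summation:

```python
from math import fsum

def fast_mean(seq):
    # fsum() is a C-implemented, correctly rounded float summation;
    # it trades statistics.mean's exact-fraction bookkeeping for speed
    return fsum(seq) / len(seq)
```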
How does the performance change with this patch? Quick-select is a nice idea in theory, but unless it is written in C, it is unlikely to beat sorting the list unless you have HUGE data sets. It's been nearly four years since I last did some benchmarks, but at the time there was no comparison: sorting was clearly much better (although Stefan found that a compiled select was faster than sorting). In particular, all the quickselect versions I tested suffered catastrophic performance slowdowns if the data was already sorted: anything from double the time to ten times as much.
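That catastrophic slowdown is easy to reproduce with a naive first-element-pivot quickselect: on already-sorted input every partition peels off only one element, so the comparison count grows quadratically. This is a demonstration of the failure mode, not one of the benchmarked variants:

```python
def naive_select_comparisons(data, k):
    """Count comparisons made by quickselect with a first-element pivot."""
    data = list(data)
    comparisons = 0
    lo, hi = 0, len(data) - 1
    while lo < hi:
        pivot = data[lo]                         # worst choice on sorted input
        data[lo], data[hi] = data[hi], data[lo]  # move pivot out of the way
        i = lo
        for j in range(lo, hi):
            comparisons += 1
            if data[j] < pivot:
                data[i], data[j] = data[j], data[i]
                i += 1
        data[i], data[hi] = data[hi], data[i]
        if k < i:
            hi = i - 1
        elif k > i:
            lo = i + 1
        else:
            break
    return comparisons

n = 2000
# on sorted input each round shrinks the window by just one element,
# so finding the median costs on the order of n**2/2 comparisons
print(naive_select_comparisons(list(range(n)), n // 2))
```

A random or median-of-medians pivot avoids this, which is exactly why the tested variants differ in how they choose pivots.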
For CPython, I agree with the opinion that finding a median value with timsort (the C version) is faster than a quickselect algorithm in pure Python. The advantage of implementing quickselect in pure Python would be for PyPy or any other compatible Python implementation. Considering that CPython's standard library also serves those implementations, their performance gain is worth weighing; for CPython itself, however, we would take a performance hit. Here is the benchmark result with pypy3 on macOS:

N sort select7 select23 select47 select97 select select2 PR6715
From the discussion it seems that there is consensus that only a C implementation of select has a chance of beating timsort. So it's a matter of producing a PR with select in C and some convincing benchmark results. That hasn't happened in a few years. If nobody objects I will close this issue. That doesn't mean the problem can't be revisited in the future, just that this effort has been abandoned and we don't consider there to be an open performance bug.
I'm currently looking at different algorithms we might use for a C implementation and will be doing some tests in the coming weeks and should have a PR in the coming months, so keeping it open would be nice. |
@SuperZooper3 Yours is a whole new research project. I would start fresh with a new issue (you could mention this one and add a link) but in my view there is no need to make anyone reviewing your patch read through this discussion as well. |