
Implement high performance rolling_rank #9481

Open
PH82 opened this issue Feb 13, 2015 · 17 comments

@PH82

commented Feb 13, 2015

xref SO issue here

I'm looking to compute a rolling rank on a DataFrame. Having posted, discussed, and analysed the code, the suggested approach seems to be to pass the pandas Series.rank function to rolling_apply. However, on large datasets the performance is particularly poor. I have tried different implementations: using bottleneck's rankdata is orders of magnitude faster, but it only offers the average option for ties, and it is still some way off the performance of rolling_mean. I have previously implemented a rolling rank function which tracks changes as the window moves (in a similar way to algos.roll_mean, I believe) rather than recalculating the rank from scratch for each window. Below is an example to highlight the performance; it should be possible to implement a rolling rank with performance comparable to rolling_mean.

python: 2.7.3
pandas: 0.15.2
scipy: 0.10.1
bottleneck: 0.7.0

```python
import numpy as np
import pandas as pd
import scipy.stats as sc
import bottleneck as bd

def rollingRankOnSeries(array):
    s = pd.Series(array)
    return s.rank(method='min', ascending=False)[len(s) - 1]

def rollingRankSciPy(array):
    return array.size + 1 - sc.rankdata(array)[-1]

def rollingRankBottleneck(array):
    return array.size + 1 - bd.rankdata(array)[-1]

def rollingRankArgSort(array):
    return array.size - array.argsort().argsort()[-1]

rollWindow = 240
df = pd.DataFrame(np.random.randn(100000, 4), columns=list('ABCD'),
                  index=pd.date_range('1/1/2000', periods=100000, freq='1H'))
df.iloc[-3:-1, df.columns.get_loc('A')] = 7.5
df.iloc[-1, df.columns.get_loc('A')] = 5.5

df["SER_RK"] = pd.rolling_apply(df["A"], rollWindow, rollingRankOnSeries)
# 28.9 secs (allows competition/min ranking for ties)

df["SCIPY_RK"] = pd.rolling_apply(df["A"], rollWindow, rollingRankSciPy)
# 70.89 secs (allows competition/min ranking for ties)

df["BNECK_RK"] = pd.rolling_apply(df["A"], rollWindow, rollingRankBottleneck)
# 3.64 secs (only provides average ranking for ties)

df["ASRT_RK"] = pd.rolling_apply(df["A"], rollWindow, rollingRankArgSort)
# 3.56 secs (competition/min ranking for ties, but not necessarily a correct result)

df["MEAN"] = pd.rolling_mean(df['A'], window=rollWindow)
# 0.008 secs
```

I think this is likely to be a common request for users looking to use pandas for analysis on large datasets, and it would be a useful addition to the pandas moving statistics/moments suite.
@jreback

Contributor

commented Feb 13, 2015

I have added it to the master issue. It's actually pretty straightforward to do: all that is needed is a new method, rolling_idxmax, which is essentially rolling_max but also tracks the index; so the change amounts to modifying algos.roll_max2 to record the index as well.

want to do a pull-request here?
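For illustration, a minimal pure-Python sketch of that idea (a rolling max that also reports the position of the maximum), using a monotonic deque rather than the Cython internals in algos; the function name, the NaN-free input assumption, and the -1 sentinel for incomplete windows are my own choices, not the proposed pandas API:

```python
from collections import deque

import numpy as np

def rolling_idxmax(values, window):
    """Index of the max over each trailing window; -1 until the window is full."""
    out = np.full(len(values), -1, dtype=np.int64)
    dq = deque()  # candidate indices; values[dq] is kept in decreasing order
    for i, v in enumerate(values):
        while dq and values[dq[-1]] <= v:
            dq.pop()                 # drop candidates dominated by the new value
        dq.append(i)
        if dq[0] <= i - window:
            dq.popleft()             # drop the candidate that just left the window
        if i >= window - 1:
            out[i] = dq[0]           # position of the current window's maximum
    return out
```

A Cython version inside roll_max2 would only need to carry this extra index array alongside the existing maximum bookkeeping.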

@jreback jreback added this to the 0.17.0 milestone Feb 13, 2015

@PH82

Author

commented Feb 16, 2015

Yes, that sounds like what I was thinking: pandas.stats.moments.rolling_rank with the same arguments as rank, allowing the user to select a tie method (min, avg, ...) and whether ranks should be ascending. I'm not sure I'm familiar enough with the pandas code yet to implement the best solution for this myself, but I'm happy to test.

@cing


commented Apr 14, 2015

I got rolling_idxmax and rolling_idxmin working just as @jreback said, recording the index. No extra memory allocation or computation on top of the normal algorithm, just an additional return array from roll_max2/roll_min2 holding the indices. At the PyCon sprint @jreback suggested I return an int64 from the Cython code to store the indices, but I'm pretty sure it has to be a float64 because of possible NaNs, right?

Also, @PH82 requested rolling_idxaverage but is this really necessary?

@shoyer

Member

commented Apr 14, 2015

@cing you can use -1 to mark missing values as long as you only use positive indices otherwise (that's pretty standard in pandas).

@PH82

Author

commented Jul 21, 2015

@cing, the average I mention determines how ties are ranked. Common tie methods are:

| Values | Rank (Tie Method=Min) | Rank (Tie Method=Max) | Rank (Tie Method=Avg) |
|--------|-----------------------|-----------------------|-----------------------|
| 3      | 4                     | 4                     | 4                     |
| 1      | 1                     | 1                     | 1                     |
| 2      | 2                     | 3                     | 2.5                   |
| 4      | 5                     | 5                     | 5                     |
| 2      | 2                     | 3                     | 2.5                   |
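For reference, these are the same method options that the existing (non-rolling) Series.rank already exposes; a quick check that reproduces the table above:

```python
import pandas as pd

s = pd.Series([3, 1, 2, 4, 2])
print(pd.DataFrame({
    'Values': s,
    'Min': s.rank(method='min'),
    'Max': s.rank(method='max'),
    'Avg': s.rank(method='average'),
}))
```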

@PH82 PH82 closed this Jul 21, 2015

@PH82 PH82 reopened this Jul 21, 2015

@cing


commented Jul 21, 2015

I'll have to take a look at this again; some of the test cases didn't pass for rolling_idxmax and rolling_idxmin. I didn't implement a rolling_idxavg though, is that truly useful?

@PH82

Author

commented Jul 21, 2015

I would say so. For my immediate purposes, no, but it is a valid tie method and quite likely to be used in the future; in fact, I can think of scenarios where I would use it. I think rolling_rank should implement as much of the non-rolling rank method as possible.

@Dmoonleo


commented May 26, 2016

I am glad to see there is already an issue for this rolling rank. May I know the status of this issue? How could I help with it?

@jreback

Contributor

commented May 26, 2016

@Dmoonleo the labels indicate the status, meaning that it's open and unclaimed. You are welcome to submit a pull request.

@mcherkassky


commented Aug 23, 2017

Is this going to be implemented?

@TomAugspurger

Contributor

commented Aug 23, 2017

@mcherkassky you're welcome to take a crack at it. Let us know if you need help getting started.

@cing


commented Aug 23, 2017

I still have the code I wrote back in the good ol' days, but it's worth taking a fresh look because window.pyx has been touched a few times since. As a novice, I'd just comment that the windowed rolling algorithm is a bit more complicated than the "novice" tag might suggest! Nothing teamwork can't crack, though.

@mcherkassky


commented Aug 23, 2017

Does anyone have an algorithm reference for implementing the rolling rank? It doesn't seem trivial.

I'm happy to give it a stab, but I don't really follow the discussion above about rolling_idxmax/rolling_idxmin and how that helps get the rolling rank.

@sluo1989


commented Dec 7, 2017

Is there anyone still working on the efficient implementation of rolling rank?

@jreback

Contributor

commented Dec 7, 2017

No, but we would love to have this!

@VanyTang


commented Dec 11, 2018

I propose an algorithm to calculate rolling_rank efficiently.

Suppose the window size is fixed, and that rank is defined over the window values sorted in increasing order.

We can use a balanced tree to store the window data, since insert, delete, and search each take only O(log M), where M is the window size.

During insert and delete operations we maintain a size field on each node holding the number of nodes in its subtree, which is used to calculate the rank. A balanced tree that maintains the size field natively (such as a Size-Balanced Tree or a Weight-Balanced Tree) is even better.

Because the numbers are kept in sorted order in the balanced tree, getting the rank of a number in the window amounts to searching for the corresponding node from the root and summing the sizes of the left subtrees passed over along the search path. That sum gives the rank, and the operation is also O(log M).

Putting this together, calculating a rolling rank over a sequence of length N with window size M takes O(N log M) in total, which is much better than the naive algorithm (sort and look up the rank in each window) at O(N M log M). This algorithm could be roughly M times faster than the naive one.
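A minimal Python sketch of this approach, using sortedcontainers.SortedList as a stand-in for a size-augmented balanced tree (the container choice, the function name, and the 'min' tie handling are my assumptions, not part of the proposal above):

```python
import numpy as np
from sortedcontainers import SortedList  # third-party, not a pandas dependency

def rolling_rank_min(values, window):
    """Rank of the newest value within each trailing window ('min' tie method).

    One insert, one delete and one rank query per step, each roughly
    O(log M), giving O(N log M) overall as described above.
    NaNs are not handled in this sketch.
    """
    out = np.full(len(values), np.nan)
    sl = SortedList()
    for i, v in enumerate(values):
        sl.add(v)
        if len(sl) > window:
            sl.remove(values[i - window])   # value falling out of the window
        if len(sl) == window:
            out[i] = sl.bisect_left(v) + 1  # values strictly less than v, plus one
    return out
```

An equivalent structure in C/Cython (e.g. a skip list or an order-statistic tree) would be needed inside pandas itself, but the per-step work is the same.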

@bmpalatiello


commented Jan 23, 2019

If there is still interest, my workaround for this is:

```python
import bottleneck as bk

# x is a DataFrame/Series of values and n is the window length
# (see the benchmark setup below, where they are called x and window)
norm_rank = bk.move_rank(x.values, n, axis=0)
denorm = (((norm_rank + 1) / 2) * (n - 1)) + 1
descend = (n - denorm) + 1
```

The bk.move_rank function returns a rank normalized to between -1 and 1, so this takes the normalized rank and reverse-engineers it back to the actual rank, then reverses it to give the equivalent of ascending=False. The only potential downside is that it only provides average ranking for ties.
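A quick sanity check of the denormalization on a single full window, comparing against Series.rank (my own check, not part of the comment above):

```python
import bottleneck as bk
import pandas as pd

n = 5
s = pd.Series([3.0, 1.0, 2.0, 4.0, 2.0])
norm_rank = bk.move_rank(s.values, n)
denorm = (((norm_rank + 1) / 2) * (n - 1)) + 1
# Both should print 2.5: the average-method rank of the last value in the window
print(denorm[-1])
print(s.rank(method='average').iloc[-1])
```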

Running it on my small laptop:

```python
window = 240
x = pd.DataFrame(np.random.randn(100000, 4), columns=list('ABCD'),
                 index=pd.date_range('1/1/2000', periods=100000, freq='1H'))

# Original rollingRankBottleneck above
# 6.04 s ± 302 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

# This version
# 411 ms ± 24.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
```