Bug: Save original index and remap after function completes #61116

Open · wants to merge 10 commits into base: main

Conversation

@Jeffrharr Jeffrharr commented Mar 13, 2025

Note: I'm new to this project, so this is my first PR.

Saves the original index at the start of the SeriesNLargest algorithm, runs the selection on a default index, and remaps the result back to the original index before returning. This fixes the performance degradation seen when the index has many duplicate values.
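
For readers unfamiliar with the internals, the pattern is roughly the following (a minimal sketch using only public pandas APIs; nlargest_with_default_index is a hypothetical stand-in for the internal routine, not the actual diff):

import pandas as pd


def nlargest_with_default_index(ser: pd.Series, n: int = 5) -> pd.Series:
    # Save the original labels, then run the selection on a default
    # RangeIndex so duplicate labels don't slow the algorithm down.
    original_index = ser.index
    no_index = ser.reset_index(drop=True)

    result = no_index.nlargest(n)

    # The result's index now holds integer positions; remap them back
    # to the original labels before returning.
    result.index = original_index.take(result.index)
    return result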

Results:

  • The original statistics can be viewed in the linked ticket, where slow_df took several ms per loop. Note that the ticket's results were not gathered on my development machine, so the specific timings differ.
In [4]: import pandas as pd
   ...: import numpy as np
   ...: 
   ...: N = 1500
   ...: N_HALF = 750
   ...: 
   ...: slow_df = pd.DataFrame({'a':  np.random.rand(N)}, index=np.concatenate([[1] * N_HALF, np.arange(N_HALF)]))
   ...: print("slow_df")
   ...: %timeit slow_df['a'].nlargest()
   ...: 
   ...: fast_df = pd.DataFrame({'a': np.random.rand(N)})
   ...: print("fast_df")
   ...: %timeit fast_df['a'].nlargest()

slow_df
427 μs ± 11.4 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
fast_df
420 μs ± 5.4 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)

Tests

The existing tests should cover this, unless we also want to add targeted benchmarks under asv_bench (a rough sketch follows).
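
A dedicated benchmark could look roughly like this (a sketch only; the class name NSortDuplicateIndex is hypothetical, modeled on the existing series_methods.NSort benchmark):

import numpy as np
import pandas as pd


class NSortDuplicateIndex:
    def setup(self):
        n = 100_000
        # Half of the labels are identical, mirroring the slow_df case above.
        index = np.concatenate([[1] * (n // 2), np.arange(n // 2)])
        self.ser = pd.Series(np.random.rand(n), index=index)

    def time_nlargest(self):
        self.ser.nlargest(10)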

Addendum

I also changed the sort call to sort(kind="stable") to get consistent ordering, matching what the equivalent DataFrame method already does (it previously used kind="mergesort", which NumPy treats as equivalent to kind="stable" and retains only for backwards compatibility). I can remove this change if it's better suited to a separate PR.
https://numpy.org/doc/stable/reference/generated/numpy.sort.html#numpy.sort
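
As a quick illustration of the equivalence noted above (my own check, not part of the PR): per the linked NumPy docs, kind="stable" and kind="mergesort" select the same stable algorithm, so they produce identical orderings.

import numpy as np

values = np.array([3, 1, 3, 2, 1])
# Both keywords use the same stable sort, so tie-breaking is identical.
assert np.array_equal(
    np.argsort(values, kind="stable"),
    np.argsort(values, kind="mergesort"),
)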

@Jeffrharr Jeffrharr marked this pull request as ready for review March 13, 2025 22:13
@Jeffrharr Jeffrharr changed the title from "Save original index and remap after function completes." to "Bug: Save original index and remap after function completes" Mar 14, 2025
@rhshadrach rhshadrach added the Performance and Filters labels Mar 26, 2025
@rhshadrach rhshadrach (Member) left a comment

Thanks for the PR! I'm seeing a performance regression when the index does not contain duplicates. Can we do this conditionally?

# Save index and reset to default index to avoid performance impact
# from when index contains duplicates
original_index: Index = self.obj.index
cur_series = self.obj.reset_index(drop=True)
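
One way the conditional check might look (a self-contained sketch, assuming Index.has_duplicates is the right guard; the helper name _maybe_reset_index is hypothetical and this is not the PR's actual code):

import pandas as pd


def _maybe_reset_index(ser: pd.Series) -> tuple[pd.Index, pd.Series]:
    # Only pay for the reset/remap round-trip when the index actually
    # contains duplicate labels.
    original_index = ser.index
    if original_index.has_duplicates:
        return original_index, ser.reset_index(drop=True)
    return original_index, ser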
Member

What do you think of renaming cur_series -> noindex and final_series -> result?

@Jeffrharr Jeffrharr (Author) Mar 26, 2025

How about default_index, as it more accurately reflects that the index has been reset to the default 0..n-1?

@Jeffrharr Jeffrharr (Author) commented Mar 26, 2025

Thanks for the PR! I'm seeing a performance regression when the index does not contain duplicates. Can we do this conditionally?

I appreciate the review!

Can you confirm the performance regression numbers that you are seeing? Note that the timings in the original ticket were from a different machine than the one I'm working on.

The ASV benchmarks appear close enough to be within noise in the standard (unique-index) case.

-- Original Function --

[37.66%] ··· series_methods.NSort.time_nlargest                              ok
[37.66%] ··· ======= ==========
               keep            
             ------- ----------
              first   3.59±0ms 
               last   3.70±0ms 
               all    3.84±0ms 
             ======= ==========

[37.71%] ··· series_methods.NSort.time_nsmallest                             ok
[37.71%] ··· ======= ==========
               keep            
             ------- ----------
              first   3.26±0ms 
               last   2.94±0ms 
               all    3.31±0ms 
             ======= ==========

-- Function with my changes --

[37.66%] ··· series_methods.NSort.time_nlargest                              ok
[37.66%] ··· ======= ==========
               keep            
             ------- ----------
              first   3.71±0ms 
               last   3.41±0ms 
               all    3.60±0ms 
             ======= ==========

[37.71%] ··· series_methods.NSort.time_nsmallest                             ok
[37.71%] ··· ======= ==========
               keep            
             ------- ----------
              first   2.86±0ms 
               last   2.58±0ms 
               all    3.20±0ms 
             ======= ==========

@Jeffrharr Jeffrharr requested a review from rhshadrach March 26, 2025 21:56
Labels: Performance (Memory or execution speed performance), Filters (e.g. head, tail, nth)
Development

Successfully merging this pull request may close these issues.

PERF: Surprisingly slow nlargest with duplicates in the index