
PERF: Allow groupby transform with numba engine to be fully parallelizable #36240

Merged — 13 commits into pandas-dev:master on Sep 13, 2020

Conversation

@mroeschke (Member) commented on Sep 9, 2020:

  • tests added / passed
  • passes black pandas
  • passes git diff upstream/master -u -- "*.py" | flake8 --diff
  • whatsnew entry

New performance comparison with 10k groups

In [1]: df_g = pd.DataFrame({'a': range(10**4), 'b': range(10**4), 'c': range(10**4)})

In [2]: df_g.groupby('a').transform(lambda x: x + 1)
Out[2]:
          b      c
0         1      1
1         2      2
2         3      3
3         4      4
4         5      5
...     ...    ...
9995   9996   9996
9996   9997   9997
9997   9998   9998
9998   9999   9999
9999  10000  10000

[10000 rows x 2 columns]

In [3]: %timeit df_g.groupby('a').transform(lambda x: x + 1)
9.48 s ± 885 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

In [4]: def f(values, index):
   ...:     return values + 1
   ...:

In [5]: df_g.groupby('a').transform(f, engine='numba', engine_kwargs={'parallel': True})
Out[5]:
            b        c
0         1.0      1.0
1         2.0      2.0
2         3.0      3.0
3         4.0      4.0
4         5.0      5.0
...       ...      ...
9995   9996.0   9996.0
9996   9997.0   9997.0
9997   9998.0   9998.0
9998   9999.0   9999.0
9999  10000.0  10000.0

[10000 rows x 2 columns]

In [6]: %timeit df_g.groupby('a').transform(f, engine='numba', engine_kwargs={'parallel': True})
4.41 ms ± 34.4 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
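For context on why `f` is written as `f(values, index)` rather than as a lambda over a Series: the numba engine hands the UDF each group's values as a NumPy array plus the group's index, never a pandas object, which is what makes the per-group loop jit-compilable and parallelizable. The sketch below simulates that per-group application in plain NumPy (it is an illustration of the calling convention, not pandas' actual implementation, and `transform_by_group` is a hypothetical name):

```python
import numpy as np

def f(values, index):
    # Same UDF shape the numba engine expects: NumPy array in, array out.
    return values + 1

def transform_by_group(values, labels, udf):
    """Apply udf to each group's rows; labels assigns a group id per row.

    Illustrative stand-in for the engine's per-group loop: with numba,
    this loop is what gets jit-compiled (and, with parallel=True, run
    across threads), which is why the UDF cannot touch pandas objects.
    """
    out = np.empty(values.shape, dtype=np.float64)
    for g in np.unique(labels):
        mask = labels == g
        idx = np.flatnonzero(mask)  # stands in for the group's index
        out[mask] = udf(values[mask], idx)
    return out

vals = np.arange(6, dtype=np.float64).reshape(6, 1)
labels = np.array([0, 0, 1, 1, 2, 2])
result = transform_by_group(vals, labels, f)  # each row's value + 1
```

Since group results are independent, the loop over groups has no cross-iteration dependency, which is the property this PR exploits to make the real numba kernel fully parallelizable.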

@mroeschke added the Apply (Apply, Aggregate, Transform), Groupby, and Performance (Memory or execution speed) labels on Sep 9, 2020
@mroeschke added this to the 1.2 milestone on Sep 9, 2020
@jbrockmendel (Member) commented:
How does this compare to non-parallel numba?

@mroeschke (Member, Author) replied:

> How does this compare to non-parallel numba?

In [1]: df_g = pd.DataFrame({'a': range(10**4), 'b': range(10**4), 'c': range(10**4)})

In [2]: def f(values, index):
   ...:     return values + 1
   ...:

In [3]: df_g.groupby('a').transform(f, engine='numba')
Out[3]:
            b        c
0         1.0      1.0
1         2.0      2.0
2         3.0      3.0
3         4.0      4.0
4         5.0      5.0
...       ...      ...
9995   9996.0   9996.0
9996   9997.0   9997.0
9997   9998.0   9998.0
9998   9999.0   9999.0
9999  10000.0  10000.0

[10000 rows x 2 columns]

In [4]: %timeit df_g.groupby('a').transform(f, engine='numba')
5.46 ms ± 38.9 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

@jreback (Contributor) commented on Sep 9, 2020:

so a little better :)

@jreback (Contributor) left a review comment:

can you add an asv which covers this (or do we have an existing one)?

@@ -1362,13 +1358,23 @@ def _transform_general(
@Appender(_transform_template)
def transform(self, func, *args, engine=None, engine_kwargs=None, **kwargs):

if maybe_use_numba(engine):
if not callable(func):
Review comment on the diff (Contributor):

maybe this check should actually be in _transform_with_numba to keep DRY (you have it above as well)
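The DRY suggestion above amounts to hoisting the callable check into the shared numba code path so each caller doesn't repeat it. A hypothetical sketch of that shape (`validate_udf` and `transform_with_numba` are illustrative names, not pandas' actual helpers; the error class mirrors `pandas.errors.NumbaUtilError`):

```python
# Hypothetical sketch of the refactor the reviewer suggests: one shared
# validation inside the numba path instead of a repeated check per caller.
class NumbaUtilError(Exception):
    """Mirrors the pandas error class raised for numba-engine misuse."""

def validate_udf(func):
    # The numba engine can only jit-compile a real function object, so
    # reject non-callables (e.g. the string aliases the cython engine accepts).
    if not callable(func):
        raise NumbaUtilError(
            "Numba engine can only be used with a single function."
        )

def transform_with_numba(func, *args, **kwargs):
    validate_udf(func)  # shared check, done once inside the numba path
    # ... jit-compile func and apply it per group ...
```

With the check inside the numba entry point, `transform` and `aggregate` both get the validation for free instead of duplicating the `if not callable(func)` guard.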

pandas/tests/groupby/transform/test_numba.py (comment resolved)
@jreback merged commit 65074db into pandas-dev:master on Sep 13, 2020
@jreback (Contributor) commented on Sep 13, 2020:

thanks @mroeschke

@mroeschke deleted the perf/numba_groupby_transform branch on September 13, 2020 at 17:25
@mroeschke mentioned this pull request on Sep 15, 2020
kesmit13 pushed a commit to kesmit13/pandas that referenced this pull request Nov 2, 2020