
PERF: fix merging on datetimelike columns to not use object-dtype factorizer #53231

Merged
merged 8 commits into from May 31, 2023

Conversation

jorisvandenbossche (Member)

See #53213 (comment) for context.

Running the added benchmark on current main gives:

[100.00%] ··· join_merge.MergeDatetime.time_merge           ok
[100.00%] ··· ============== ============= =================
              --                            tz              
              -------------- -------------------------------
                  units           None      Europe/Brussels 
              ============== ============= =================
               ('ns', 'ns')    16.1±0.3ms      16.1±0.3ms   
               ('ms', 'ms')   20.7±0.07ms      21.0±0.2ms   
               ('ns', 'ms')     167±2ms        16.9±0.4ms   
              ============== ============= =================

versus on this branch:

[100.00%] ··· ============== ============= =================
              --                            tz              
              -------------- -------------------------------
                  units           None      Europe/Brussels 
              ============== ============= =================
               ('ns', 'ns')   6.33±0.09ms      6.44±0.1ms   
               ('ms', 'ms')    6.95±0.2ms      7.06±0.1ms   
               ('ns', 'ms')   6.90±0.08ms     6.91±0.09ms   
              ============== ============= =================

(Running the standard ns/ns example on pandas 1.5 also gave around 7ms, so this should restore the performance of 1.5.)

@jorisvandenbossche jorisvandenbossche added Timeseries Performance Memory or execution speed performance Reshaping Concat, Merge/Join, Stack/Unstack, Explode labels May 15, 2023
@jorisvandenbossche jorisvandenbossche added this to the 2.0.2 milestone May 15, 2023
Comment on lines +2393 to +2398
if needs_i8_conversion(lk.dtype) and lk.dtype == rk.dtype:
# GH#23917 TODO: Needs tests for non-matching dtypes
# GH#23917 TODO: needs tests for case where lk is integer-dtype
# and rk is datetime-dtype
lk = np.asarray(lk, dtype=np.int64)
rk = np.asarray(rk, dtype=np.int64)
Member Author

This part was (accidentally?) removed in #49876. So no tests fail from leaving this out (falling back to the object factorizer also gives correct results), but it is necessary to ensure we use the faster int64 factorizer for datetimelikes.
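The effect of that conversion can be sketched standalone (variable names here are illustrative): viewing datetime64 values as their int64 epoch representation is lossless, so factorizing the integer view groups values exactly the same way while using the fast int64 hash table instead of boxing each value into a Python object.

```python
import numpy as np
import pandas as pd

keys = np.array(
    ["2023-01-01", "2023-01-02", "2023-01-01"], dtype="M8[ns]"
)

# Same conversion as in _factorize_keys: reinterpret the datetimes
# as int64 before factorizing.
i8 = np.asarray(keys, dtype=np.int64)

codes_dt, _ = pd.factorize(keys)
codes_i8, _ = pd.factorize(i8)
# Both factorizations group the values identically.
```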

Member

I expect that the relevant cases should go through the `elif isinstance(lk, ExtensionArray) and lk.dtype == rk.dtype:` branch above?

Member Author

Not fully:

  1. It did go through there before, but only for tz-naive data (and same unit). Now we already convert to a numpy array in the `if isinstance(lk.dtype, DatetimeTZDtype) ...` block by taking the `_ndarray` of `lk` and `rk` (alternatively we could skip that and let it go through the block that calls `_values_for_factorize`).
  2. Even when going through that block, `DatetimeArray._values_for_factorize` returns the `_ndarray`, which is the M8 numpy array, not an integer numpy array. So we still need this `needs_i8_conversion` branch.
     Now, maybe we should change `_values_for_factorize`? (but that's probably something to target for 2.1, and I didn't check the other places where `_values_for_factorize` is used)
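The second point can be checked directly (a sketch poking at a pandas-internal method, so subject to change between versions):

```python
import pandas as pd

dta = pd.array(pd.date_range("2023-01-01", periods=3))  # DatetimeArray

values, na_value = dta._values_for_factorize()
# The values handed to the generic factorize path are still datetime64
# ("M8"), not int64, so the needs_i8_conversion branch is still required
# to reach the int64-based factorizer.
print(values.dtype)
```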

Member

makes sense, thanks.

@@ -2355,12 +2355,14 @@ def _factorize_keys(
rk = extract_array(rk, extract_numpy=True, extract_range=True)
# TODO: if either is a RangeIndex, we can likely factorize more efficiently?

if isinstance(lk.dtype, DatetimeTZDtype) and isinstance(rk.dtype, DatetimeTZDtype):
if (isinstance(lk, DatetimeArray) and isinstance(rk, DatetimeArray)) and (
lk._is_tzawareness_compat(rk)
Member

Is checking it this way more performant?

Member Author

No idea, but that was also not the reason for using this: what we want to check here is that we have either two DatetimeTZDtypes, or two `np.dtype("M8")`.

So initially I expanded the original `if isinstance(lk.dtype, DatetimeTZDtype) and isinstance(rk.dtype, DatetimeTZDtype)` with something like `or (isinstance(lk.dtype, np.dtype) and lk.dtype.kind == "M" and isinstance(rk.dtype, np.dtype) and rk.dtype.kind == "M")`. But this got a bit long, so this seemed simpler (and since we are calling a specific DatetimeArray method in the if-block, it also makes sense code-logic-wise to check for it).

(The performance benefit of the PR comes from avoiding the ObjectFactorizer.)

Member

OK. I prefer the status quo here, largely because it avoids introducing a new method. Not a huge deal though.

BTW, the new optimized check for `isinstance(lk.dtype, np.dtype) and lk.dtype.kind == "M"` is `lib.is_np_dtype(lk.dtype, "M")`.

Member Author

OK, with `is_np_dtype` that can be shorter, so I reverted back to the original dtype checks (now with the added dtype check for numpy dtypes), avoiding the new method.

@datapythonista datapythonista modified the milestones: 2.0.2, 2.0.3 May 26, 2023
@jorisvandenbossche (Member Author)

@jbrockmendel anything else here, or good to go?

@jbrockmendel (Member) left a comment

LGTM

@mroeschke (Member)

Thanks @jorisvandenbossche

meeseeksmachine pushed a commit to meeseeksmachine/pandas that referenced this pull request May 31, 2023
@jorisvandenbossche jorisvandenbossche deleted the perf-merge-datetime branch May 31, 2023 17:46
mroeschke added a commit that referenced this pull request Jun 1, 2023
… columns to not use object-dtype factorizer) (#53471)

* Backport PR #53231: PERF: fix merging on datetimelike columns to not use object-dtype factorizer

* Update pandas/core/reshape/merge.py

* Check npdtype

---------

Co-authored-by: Joris Van den Bossche <jorisvandenbossche@gmail.com>
Co-authored-by: Matthew Roeschke <10647082+mroeschke@users.noreply.github.com>
topper-123 pushed a commit to topper-123/pandas that referenced this pull request Jun 5, 2023
…torizer (pandas-dev#53231)

* PERF: fix merging on datetimelike columns to not use object-dtype factorizer

* don't try to match resos for Arrow extension array

* fix typing

* try fix typing

* update dtypes check

* add whatsnew
Daquisu pushed a commit to Daquisu/pandas that referenced this pull request Jul 8, 2023
…torizer (pandas-dev#53231)

* PERF: fix merging on datetimelike columns to not use object-dtype factorizer

* don't try to match resos for Arrow extension array

* fix typing

* try fix typing

* update dtypes check

* add whatsnew