
PERF: MultiIndex equals method got 20x slower #43549

Closed
2 of 3 tasks
grotmol opened this issue Sep 13, 2021 · 4 comments · Fixed by #43589
Labels
MultiIndex Performance Memory or execution speed performance

grotmol commented Sep 13, 2021

  • I have checked that this issue has not already been reported.

  • I have confirmed this issue exists on the latest version of pandas.

  • I have confirmed this issue exists on the master branch of pandas.

Reproducible Example

from dateutil import tz
import pandas as pd
from time import time

dates = pd.date_range('2010-01-01', periods=1000, tz=tz.tzutc())
index = pd.MultiIndex.from_product([range(100), dates])
index2 = index.copy()
start_time = time()
index.equals(index2)
used_time = time() - start_time
print("Took", int(used_time*1000), "milliseconds")

Installed Versions

commit : 73c6825
python : 3.9.6.final.0
python-bits : 64
OS : Darwin
OS-release : 20.5.0
Version : Darwin Kernel Version 20.5.0: Sat May 8 05:10:33 PDT 2021; root:xnu-7195.121.3~9/RELEASE_X86_64
machine : x86_64
processor : i386
byteorder : little
LC_ALL : en_US.UTF-8
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8

pandas : 1.3.3
numpy : 1.21.2
pytz : 2021.1
dateutil : 2.8.2
pip : 21.2.4
setuptools : 57.4.0
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
fsspec : None
fastparquet : None
gcsfs : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
numba : None

Prior Performance

This issue did not exist in pandas 1.2.5. The timing measured by the script with pandas 1.2.5 on my computer is about 30 milliseconds, while the timing in pandas 1.3.1, 1.3.2 and 1.3.3 is about 600 ms. Thus there has been a 20x increase in run time. This severely impacts the performance of our application. A section of our code that used to take about 30 seconds, now takes about 10 minutes (that code uses pd.concat a lot, which in turn calls MultiIndex.equals).
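To make the pd.concat connection concrete, here is a minimal sketch (sizes chosen arbitrarily, not taken from our application) of how concatenation ends up exercising MultiIndex.equals:

```python
import pandas as pd
from dateutil import tz

# Two frames sharing an equal (but not identical) tz-aware MultiIndex.
dates = pd.date_range("2010-01-01", periods=100, tz=tz.tzutc())
index = pd.MultiIndex.from_product([range(10), dates])
df1 = pd.DataFrame({"a": range(len(index))}, index=index)
df2 = pd.DataFrame({"b": range(len(index))}, index=index.copy())

# concat(axis=1) compares the two indexes (ultimately via
# MultiIndex.equals) to decide whether alignment is needed,
# which is where the regression bites.
out = pd.concat([df1, df2], axis=1)
print(out.shape)  # (1000, 2)
```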

@grotmol grotmol added Needs Triage Issue that has not been reviewed by a pandas team member Performance Memory or execution speed performance labels Sep 13, 2021

grotmol commented Sep 13, 2021

I have run py-spy on the code. It reveals that essentially all the time is spent on line 419 of pandas/core/dtypes/missing.py, inside the array_equivalent function:

def array_equivalent(
    left, right, strict_nan: bool = False, dtype_equal: bool = False
) -> bool:
    left, right = np.asarray(left), np.asarray(right)
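For context (my own check, not part of the py-spy profile): calling np.asarray on a tz-aware DatetimeArray produces an object-dtype array of Timestamp scalars, so every element gets boxed, whereas the underlying int64 data needs no conversion at all:

```python
import numpy as np
import pandas as pd

dates = pd.date_range("2010-01-01", periods=5, tz="UTC")
arr = dates.array  # the DatetimeArray backing the index

# np.asarray on a tz-aware DatetimeArray boxes every element
# into a Timestamp, yielding an object-dtype ndarray.
boxed = np.asarray(arr)
print(boxed.dtype)  # object

# The underlying int64 representation requires no boxing.
print(arr.asi8.dtype)  # int64
```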


grotmol commented Sep 13, 2021

The full "stack trace" from py-spy is:

  • equals (pandas/core/indexes/multi.py:3520)
  • array_equivalent (pandas/core/dtypes/missing.py:419)
  • __array__ (pandas/core/arrays/datetimes.py:593)
  • __array__ (pandas/core/arrays/datetimelike.py:312) [about 96% of the run-time]
  • __iter__ (pandas/core/arrays/datetimes.py:620) [about 40% of the run-time]

@jbrockmendel (Member)

Best guess: it has to do with the warnings.catch_warnings now in DatetimeArray.__iter__, which in turn is needed because of a warning issued by Timestamp. If correct, the issue will resolve itself when those warnings are removed in 2.0. But of course we don't want to wait until then.

Two options come to mind: (a) go into MultiIndex.equals and do something to avoid calling array_equivalent, or (b) go into ints_to_pydatetime and avoid calling the Timestamp constructor directly.
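To illustrate the suspected mechanism with a toy sketch (this is not pandas code): entering warnings.catch_warnings for every yielded element adds per-element setup/teardown overhead to iteration while producing identical results:

```python
import warnings

def iter_plain(values):
    # Plain iteration: no per-element bookkeeping.
    for v in values:
        yield v

def iter_with_catch(values):
    # Mimics entering warnings.catch_warnings() around every element,
    # the suspected source of the per-element overhead.
    for v in values:
        with warnings.catch_warnings():
            warnings.simplefilter("ignore")
            yield v

# Both produce identical output; the second pays the
# catch_warnings cost on every single element.
assert list(iter_plain(range(10))) == list(iter_with_catch(range(10)))
```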

@jbrockmendel (Member)

Better guess:

In MultiIndex.equals we have changed the way we construct self_values from

self_values = algos.take_nd(
    np.asarray(self.levels[i]._values), self_codes, allow_fill=False
)

to

self_values = self.levels[i]._values.take(self_codes)

The latter is a DatetimeArray in this case, and np.asarray gets called on it in the array_equivalent call that follows. As a result, np.asarray is being called on a much larger array, so it is much slower.

Option 1: construct self_values in a way that more closely resembles the old version
Option 2: change array_equivalent to not call np.asarray, so as to be EA-compatible

Option 2 is the better option.
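A rough sketch of what option 2 could look like (ea_array_equivalent is a hypothetical helper, not the actual fix in #43589): compare ExtensionArrays elementwise instead of round-tripping them through np.asarray:

```python
import numpy as np
import pandas as pd
from pandas.api.extensions import ExtensionArray

def ea_array_equivalent(left, right) -> bool:
    # Hypothetical EA-aware equivalence check: avoid np.asarray on
    # ExtensionArrays, where it may produce object-dtype arrays.
    if isinstance(left, ExtensionArray) or isinstance(right, ExtensionArray):
        if type(left) is not type(right) or left.dtype != right.dtype:
            return False
        left_na = np.asarray(left.isna())
        right_na = np.asarray(right.isna())
        # NaT/NaN must appear in matching positions.
        if not np.array_equal(left_na, right_na):
            return False
        # Elementwise __eq__ on the EA compares the underlying data
        # without boxing each element into a scalar.
        eq = np.asarray(left[~left_na] == right[~right_na])
        return bool(eq.all())
    return np.array_equal(np.asarray(left), np.asarray(right))

dates = pd.date_range("2010-01-01", periods=4, tz="UTC").array
print(ea_array_equivalent(dates, dates.copy()))  # True
```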

@jreback jreback added MultiIndex and removed Needs Triage Issue that has not been reviewed by a pandas team member labels Sep 16, 2021
@jreback jreback added this to the 1.3.4 milestone Sep 16, 2021