Weird performance characteristics of resampled quantile() function (100 times slower) #26150
Comments
Try on master, as a patch for this was recently merged.
The performance has improved a bit, but it is still 50 times slower. Moreover, the performance for the whole dataframe got much worse.
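The slowdown being discussed can be checked with a small benchmark along these lines (a sketch of my own; the sizes, frequencies, and variable names are illustrative and not taken from the thread):

```python
import numpy as np
import pandas as pd
from time import time

# Illustrative data: ~2.3 days of per-second observations
idx = pd.date_range('2019-01-01', periods=200_000, freq='s')
s = pd.Series(np.random.randn(len(idx)), index=idx)

t0 = time()
q = s.resample('1min').quantile(0.5)
t_quantile = time() - t0

t0 = time()
m = s.resample('1min').median()
t_median = time() - t0

print(f'quantile: {t_quantile:.4f}s, median: {t_median:.4f}s')

# quantile(0.5) and median() must agree, whatever their relative speed
assert np.allclose(q.values, m.values)
```

The assertion is the point of interest: both calls compute the same statistic, so any large timing gap between them is pure implementation overhead.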
Any updates on this? I want to work on this if it's not occupied.
Doesn't seem like anyone is working on it, feel free to take a look.
Basically, the Cython quantile function of pandas is much slower than that of NumPy. This can be confirmed by the following snippets.

NumPy quantile function:

```python
import numpy as np
from time import time

N = int(1e7)
data = np.arange(N)
time_s = time()
np_result = np.quantile(data, 0.5)
print('time', time() - time_s)
print('result', np_result)
```

which results in
Pandas Cython function:

```python
import numpy as np
from time import time
import pandas._libs.groupby as libgroupby

N = int(1e7)
data = np.arange(N)
# NOTE: `a` was not defined in the original snippet; a NaN-filled output
# buffer is a reasonable guess given the `a[~np.isnan(a)][0]` lookup below
# (labels are all 1, so group 0 stays NaN and group 1 holds the result)
a = np.full(2, np.nan)
time_s = time()
libgroupby.group_quantile(out=a, labels=np.ones((N,), dtype='int'),
                          values=data, mask=np.zeros(N, dtype='uint8'),
                          q=0.5, interpolation='linear')
print('time', time() - time_s)
cython_result = a[~np.isnan(a)][0]
print('result', cython_result)
```

which results in
@TomAugspurger @jreback What do you think of this issue? Updating
@swyoon You are comparing apples and oranges. NumPy only handles a small set of dtypes, and further does not handle all of the ties correctly. You are welcome to profile.
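For what it's worth, the two code paths do agree on plain float64 data with no ties, so the difference under discussion is speed plus dtype and tie handling rather than the result itself. A quick sanity check of my own (not from the thread):

```python
import numpy as np
import pandas as pd

# On plain float64 data, NumPy and pandas both use linear interpolation
# by default and produce the same quantile values
data = np.arange(1e6)  # float64 array
s = pd.Series(data)

assert np.isclose(np.quantile(data, 0.25), s.quantile(0.25))
assert np.isclose(np.quantile(data, 0.5), s.quantile(0.5))
```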
Efforts here seem to have stalled and may no longer be relevant with recent versions of pandas or NumPy; closing until we have more recent profiling results.
Code Sample, a copy-pastable example if possible
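The original copy-pastable sample did not survive extraction. A hypothetical reconstruction of the setup described below (a resampled `quantile()` on a Series vs. a DataFrame; the column name, sizes, and frequencies are my own) might look like:

```python
import numpy as np
import pandas as pd
from time import time

# Illustrative data: per-second observations over ~28 hours
idx = pd.date_range('2019-01-01', periods=100_000, freq='s')
df = pd.DataFrame({'x': np.random.randn(len(idx))}, index=idx)

t0 = time()
df_q = df.resample('1min').quantile(0.5)       # whole dataframe
t_df = time() - t0

t0 = time()
s_q = df['x'].resample('1min').quantile(0.5)   # single series
t_series = time() - t0

print(f'dataframe: {t_df:.4f}s, series: {t_series:.4f}s')

# the values themselves must match regardless of which path is slower
assert np.allclose(df_q['x'].values, s_q.values)
```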
Output:
Problem description
I have noticed that `quantile()` is sometimes extremely slow even though `median()` (which should have a similar run time) is not. While debugging the behaviour, I have found the following two problems: if `quantile()` is called on a series instead of a dataframe, then the operation is much slower. In the example above, it is 100 times slower than the fastest equivalent.

Output of `pd.show_versions()`:
INSTALLED VERSIONS
commit: None
python: 3.6.5.final.0
python-bits: 64
OS: Linux
OS-release: 4.4.0-17134-Microsoft
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: C.UTF-8
LOCALE: en_US.UTF-8
pandas: 0.24.2
pytest: None
pip: 19.0.2
setuptools: 40.8.0
Cython: None
numpy: 1.16.2
scipy: 1.2.1
pyarrow: None
xarray: None
IPython: 7.4.0
sphinx: None
patsy: 0.5.1
dateutil: 2.8.0
pytz: 2018.9
blosc: None
bottleneck: None
tables: 3.5.1
numexpr: 2.6.9
feather: None
matplotlib: 3.0.3
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml.etree: None
bs4: None
html5lib: None
sqlalchemy: 1.3.1
pymysql: None
psycopg2: None
jinja2: 2.10
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
gcsfs: None