PERF: concat slow, manual concat through reindexing enhances performance #50652

Closed
3 tasks done
bckim1318 opened this issue Jan 10, 2023 · 9 comments · Fixed by #52685
Labels
Performance (Memory or execution speed performance), Reshaping (Concat, Merge/Join, Stack/Unstack, Explode)

Comments

bckim1318 commented Jan 10, 2023

Pandas version checks

  • I have checked that this issue has not already been reported.

  • I have confirmed this issue exists on the latest version of pandas.

  • I have confirmed this issue exists on the main branch of pandas.

Reproducible Example

from itertools import product
import time
import numpy as np
import pandas as pd

NUM_ROWS = [100, 150, 200]
NUM_COLUMNS = [1000, 10000, 50000]
NUM_DFS = [3, 5, 7, 9]
test_cases = product(NUM_ROWS, NUM_COLUMNS, NUM_DFS)

def manual_concat(df_list):
    columns = [col for df in df_list for col in df.columns]
    columns = list(set(columns))
    index = np.hstack([df.index.values for df in df_list])
    df_list = [df.reindex(columns=columns) for df in df_list]
    values = np.vstack([df.values for df in df_list])
    return pd.DataFrame(values, index=index, columns=columns)

def compare_dataframes(df1, df2, margin=1e-4):
    return abs(df1.fillna(0) - df2.fillna(0)[df1.columns]).sum().sum() < margin

def generate_dataframes(num_dfs, num_rows, num_cols, all_cols):
    df_list = []
    for i in range(num_dfs):
        index = ['i%d'%i for i in range(i*num_rows, (i+1)*num_rows)]
        columns = np.random.choice(all_cols, num_cols, replace=False)
        values = np.random.uniform(-100, 100, [num_rows, num_cols])
        df_list.append(pd.DataFrame(values, index=index, columns=columns))
    return df_list

for ti, t in enumerate(test_cases):
    num_rows = t[0]
    num_cols = t[1]
    num_dfs = t[2]
    num_all_cols = num_cols * num_dfs * 4 // 5
    all_cols = ['c%i'%i for i in range(num_all_cols)]
    df_list = generate_dataframes(num_dfs, num_rows, num_cols, all_cols)
    
    t1 = time.time()
    df_pandas = pd.concat(df_list)
    t2 = time.time()
    df_manual = manual_concat(df_list)
    t3 = time.time()

    print('Testcase %d'%(ti + 1))
    print('NUM_ROWS: %d, NUM_COLS: %d, NUM_DFS: %d'%(num_rows, num_cols, num_dfs))
    print('Pandas: %.2f'%(t2-t1))
    print('Manual: %.2f'%(t3-t2))
    print(compare_dataframes(df_pandas, df_manual))

output:

Testcase 1
NUM_ROWS: 100, NUM_COLS: 1000, NUM_DFS: 3
Pandas: 0.07
Manual: 0.01
True
Testcase 2
NUM_ROWS: 100, NUM_COLS: 1000, NUM_DFS: 5
Pandas: 0.17
Manual: 0.01
True
Testcase 3
NUM_ROWS: 100, NUM_COLS: 1000, NUM_DFS: 7
Pandas: 0.28
Manual: 0.03
True
Testcase 4
NUM_ROWS: 100, NUM_COLS: 1000, NUM_DFS: 9
Pandas: 0.44
Manual: 0.05
True
Testcase 5
NUM_ROWS: 100, NUM_COLS: 10000, NUM_DFS: 3
Pandas: 0.76
Manual: 0.06
True
Testcase 6
NUM_ROWS: 100, NUM_COLS: 10000, NUM_DFS: 5
Pandas: 1.82
Manual: 0.13
True
Testcase 7
NUM_ROWS: 100, NUM_COLS: 10000, NUM_DFS: 7
Pandas: 3.21
Manual: 0.28
True
Testcase 8
NUM_ROWS: 100, NUM_COLS: 10000, NUM_DFS: 9
Pandas: 4.85
Manual: 0.44
True
Testcase 9
NUM_ROWS: 100, NUM_COLS: 50000, NUM_DFS: 3
Pandas: 4.21
Manual: 0.34
True
Testcase 10
NUM_ROWS: 100, NUM_COLS: 50000, NUM_DFS: 5
Pandas: 9.84
Manual: 0.77
True
Testcase 11
NUM_ROWS: 100, NUM_COLS: 50000, NUM_DFS: 7
Pandas: 16.49
Manual: 1.67
True
Testcase 12
NUM_ROWS: 100, NUM_COLS: 50000, NUM_DFS: 9
Pandas: 25.20
Manual: 2.77
True
Testcase 13
NUM_ROWS: 150, NUM_COLS: 1000, NUM_DFS: 3
Pandas: 0.20
Manual: 0.15
True
Testcase 14
NUM_ROWS: 150, NUM_COLS: 1000, NUM_DFS: 5
Pandas: 0.17
Manual: 0.02
True
Testcase 15
NUM_ROWS: 150, NUM_COLS: 1000, NUM_DFS: 7
Pandas: 0.30
Manual: 0.04
True
Testcase 16
NUM_ROWS: 150, NUM_COLS: 1000, NUM_DFS: 9
Pandas: 0.43
Manual: 0.06
True
Testcase 17
NUM_ROWS: 150, NUM_COLS: 10000, NUM_DFS: 3
Pandas: 0.73
Manual: 0.08
True
Testcase 18
NUM_ROWS: 150, NUM_COLS: 10000, NUM_DFS: 5
Pandas: 1.87
Manual: 0.21
True
Testcase 19
NUM_ROWS: 150, NUM_COLS: 10000, NUM_DFS: 7
Pandas: 3.25
Manual: 0.40
True
Testcase 20
NUM_ROWS: 150, NUM_COLS: 10000, NUM_DFS: 9
Pandas: 4.95
Manual: 0.62
True
Testcase 21
NUM_ROWS: 150, NUM_COLS: 50000, NUM_DFS: 3
Pandas: 4.26
Manual: 0.45
True
Testcase 22
NUM_ROWS: 150, NUM_COLS: 50000, NUM_DFS: 5
Pandas: 9.86
Manual: 1.23
True
Testcase 23
NUM_ROWS: 150, NUM_COLS: 50000, NUM_DFS: 7
Pandas: 17.29
Manual: 2.35
True
Testcase 24
NUM_ROWS: 150, NUM_COLS: 50000, NUM_DFS: 9
Pandas: 25.49
Manual: 3.79
True
Testcase 25
NUM_ROWS: 200, NUM_COLS: 1000, NUM_DFS: 3
Pandas: 0.22
Manual: 0.25
True
Testcase 26
NUM_ROWS: 200, NUM_COLS: 1000, NUM_DFS: 5
Pandas: 0.18
Manual: 0.03
True
Testcase 27
NUM_ROWS: 200, NUM_COLS: 1000, NUM_DFS: 7
Pandas: 0.34
Manual: 0.05
True
Testcase 28
NUM_ROWS: 200, NUM_COLS: 1000, NUM_DFS: 9
Pandas: 0.44
Manual: 0.08
True
Testcase 29
NUM_ROWS: 200, NUM_COLS: 10000, NUM_DFS: 3
Pandas: 0.74
Manual: 0.10
True
Testcase 30
NUM_ROWS: 200, NUM_COLS: 10000, NUM_DFS: 5
Pandas: 1.98
Manual: 0.26
True
Testcase 31
NUM_ROWS: 200, NUM_COLS: 10000, NUM_DFS: 7
Pandas: 3.28
Manual: 0.49
True
Testcase 32
NUM_ROWS: 200, NUM_COLS: 10000, NUM_DFS: 9
Pandas: 4.97
Manual: 0.78
True
Testcase 33
NUM_ROWS: 200, NUM_COLS: 50000, NUM_DFS: 3
Pandas: 4.44
Manual: 0.56
True
Testcase 34
NUM_ROWS: 200, NUM_COLS: 50000, NUM_DFS: 5
Pandas: 9.86
Manual: 1.43
True
Testcase 35
NUM_ROWS: 200, NUM_COLS: 50000, NUM_DFS: 7
Pandas: 17.43
Manual: 2.83
True
Testcase 36
NUM_ROWS: 200, NUM_COLS: 50000, NUM_DFS: 9
Pandas: 25.68
Manual: 5.25
True

Installed Versions

INSTALLED VERSIONS

commit : 8dab54d
python : 3.10.8.final.0
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.19045
machine : AMD64
processor : AMD64 Family 25 Model 33 Stepping 0, AuthenticAMD
byteorder : little
LC_ALL : None
LANG : None
LOCALE : Korean_Korea.949

pandas : 1.5.2
numpy : 1.24.1
pytz : 2022.7
dateutil : 2.8.2
setuptools : 63.2.0
pip : 22.3.1
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
brotli : None
fastparquet : None
fsspec : None
gcsfs : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pyreadstat : None
pyxlsb : None
s3fs : None
scipy : None
snappy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
zstandard : None
tzdata : None

Prior Performance

No response

bckim1318 added the Needs Triage and Performance labels Jan 10, 2023
topper-123 (Contributor) commented:

Your manual_concat function does not give the same output as pd.concat; in particular, the columns are ordered differently. To verify the results actually match, your compare_dataframes should be:

def compare_dataframes(df1, df2):
    return df1.equals(df2)

The ordering difference may be the reason for the different performance results, as ordering may be costly performance-wise.

Can you redo your example so that the column ordering matches pd.concat's, or alternatively, explain why the ordering in your example is faster than the current ordering and whether it can be incorporated in a backwards-compatible manner.
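
For illustration (a minimal sketch of the ordering issue, not part of the original report): list(set(...)) returns the column union in arbitrary hash order, whereas dict.fromkeys preserves first-seen order, matching what pd.concat produces in the follow-up below:

cols = ['b', 'a', 'c', 'a']
print(list(set(cols)))            # arbitrary order, e.g. ['a', 'c', 'b']
print(list(dict.fromkeys(cols)))  # ['b', 'a', 'c'] -- first-seen order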

topper-123 added the Reshaping label Jan 25, 2023
topper-123 (Contributor) commented Jan 25, 2023

Ok, I've redone your example below, where the result dataframes are actually equal (the only code changes are where I've noted # this line was changed!):

from itertools import product
import time
import numpy as np
import pandas as pd

NUM_ROWS = [100, 150, 200]
NUM_COLUMNS = [1000, 10000, 50000]
NUM_DFS = [3, 5, 7, 9]
test_cases = product(NUM_ROWS, NUM_COLUMNS, NUM_DFS)

def manual_concat(df_list):
    columns = [col for df in df_list for col in df.columns]
    columns = list(dict.fromkeys(columns))    # this line was changed!
    index = np.hstack([df.index.values for df in df_list])
    df_list = [df.reindex(columns=columns) for df in df_list]
    values = np.vstack([df.values for df in df_list])
    return pd.DataFrame(values, index=index, columns=columns)

def compare_dataframes(df1, df2, margin=1e-4):
    return df1.equals(df2)  # this line was changed! 

def generate_dataframes(num_dfs, num_rows, num_cols, all_cols):
    df_list = []
    for i in range(num_dfs):
        index = ['i%d'%i for i in range(i*num_rows, (i+1)*num_rows)]
        columns = np.random.choice(all_cols, num_cols, replace=False)
        values = np.random.uniform(-100, 100, [num_rows, num_cols])
        df_list.append(pd.DataFrame(values, index=index, columns=columns))
    return df_list

for ti, t in enumerate(test_cases):
    num_rows = t[0]
    num_cols = t[1]
    num_dfs = t[2]
    num_all_cols = num_cols * num_dfs * 4 // 5
    all_cols = ['c%i'%i for i in range(num_all_cols)]
    df_list = generate_dataframes(num_dfs, num_rows, num_cols, all_cols)
    
    t1 = time.time()
    df_pandas = pd.concat(df_list)
    t2 = time.time()
    df_manual = manual_concat(df_list)
    t3 = time.time()

    print('Testcase %d'%(ti + 1))
    print('NUM_ROWS: %d, NUM_COLS: %d, NUM_DFS: %d'%(num_rows, num_cols, num_dfs))
    print('Pandas: %.2f'%(t2-t1))
    print('Manual: %.2f'%(t3-t2))
    print(compare_dataframes(df_pandas, df_manual))

It confirms your results:

Testcase 1
NUM_ROWS: 100, NUM_COLS: 1000, NUM_DFS: 3
Pandas: 0.20
Manual: 0.02
True
Testcase 2
NUM_ROWS: 100, NUM_COLS: 1000, NUM_DFS: 5
Pandas: 0.12
Manual: 0.01
True
Testcase 3
NUM_ROWS: 100, NUM_COLS: 1000, NUM_DFS: 7
Pandas: 0.22
Manual: 0.02
True
Testcase 4
NUM_ROWS: 100, NUM_COLS: 1000, NUM_DFS: 9
Pandas: 0.35
Manual: 0.05
True
Testcase 5
NUM_ROWS: 100, NUM_COLS: 10000, NUM_DFS: 3
Pandas: 0.83
Manual: 0.05
True
Testcase 6
NUM_ROWS: 100, NUM_COLS: 10000, NUM_DFS: 5
Pandas: 2.08
Manual: 0.15
True
Testcase 7
NUM_ROWS: 100, NUM_COLS: 10000, NUM_DFS: 7
Pandas: 3.18
Manual: 0.27
True
Testcase 8
NUM_ROWS: 100, NUM_COLS: 10000, NUM_DFS: 9
Pandas: 4.83
Manual: 0.56
True
Testcase 9
NUM_ROWS: 150, NUM_COLS: 1000, NUM_DFS: 3
Pandas: 0.12
Manual: 0.01
True
Testcase 10
NUM_ROWS: 150, NUM_COLS: 1000, NUM_DFS: 5
Pandas: 0.40
Manual: 0.02
True
Testcase 11
NUM_ROWS: 150, NUM_COLS: 1000, NUM_DFS: 7
Pandas: 0.23
Manual: 0.03
True
Testcase 12
NUM_ROWS: 150, NUM_COLS: 1000, NUM_DFS: 9
Pandas: 0.58
Manual: 0.04
True
Testcase 13
NUM_ROWS: 150, NUM_COLS: 10000, NUM_DFS: 3
Pandas: 0.78
Manual: 0.07
True
Testcase 14
NUM_ROWS: 150, NUM_COLS: 10000, NUM_DFS: 5
Pandas: 2.12
Manual: 0.18
True
Testcase 15
NUM_ROWS: 150, NUM_COLS: 10000, NUM_DFS: 7
Pandas: 3.30
Manual: 0.48
True
Testcase 16
NUM_ROWS: 150, NUM_COLS: 10000, NUM_DFS: 9
Pandas: 5.12
Manual: 1.03
True

Of course there may be reasons why the native solution is slower (various checks, broader use cases, etc.), but this does warrant an examination IMO.
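
One way to start that examination (a sketch; it assumes the df_list built by the script above is still in scope) is to profile a single call and sort by cumulative time:

import cProfile

# Profile one pd.concat call and print the hottest call sites by cumulative time.
cProfile.run('pd.concat(df_list)', sort='cumtime')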

Can you ping back if you want to submit a PR for this issue? Otherwise I'll take a look.

topper-123 removed the Needs Triage label Jan 25, 2023
bckim1318 (Author) commented:

ping @topper-123
OK I’ll work on it


pdemarti commented Mar 22, 2023

Not sure if related, but we are struggling mightily to get decent performance of concat with pandas>=1.4.4 (tested through 1.5.3).

Even in the case where the frames are all entirely float64, we see a roughly hundred-fold slowdown when some frames have been reindexed so that columns of NaNs appear (as opposed to an underlying contiguous numpy array in which some columns are NaN).

Here is a somewhat simpler setup than the one suggested in this ticket.

Setup

import pandas as pd
import numpy as np

>>> pd.__version__
'1.5.3'

np.random.seed(0)  # reproducible example
n, m = 100, 200
k = 10
frames0 = [
    pd.DataFrame(np.random.uniform(size=(n, m)),
            columns=[f'c{i:04d}' for i in range(m)])
    for _ in range(k)
]

# one mask per frame: where mask is True: no change; False: NaN or remove
masks = [np.random.randint(0, 2, m, dtype=bool) for _ in range(k)]

# null out entire columns
frames1 = [
    df.mul(np.where(mask, 1, np.nan))
    for df, mask in zip(frames0, masks)
]

# remove entire columns, then re-index
frames2 = [
    df.iloc[:, mask].reindex_like(df)
    for df, mask in zip(frames0, masks)
]
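
A quick way to see the structural difference (this pokes at the internal BlockManager, so treat it as illustrative only and version-dependent): frames1 come out of the arithmetic as a single consolidated float64 block, while the reindexed frames2 typically hold their reintroduced NaN columns in extra blocks on the versions discussed here.

# Inspect the internal block count of a few frames from each list
# (df._mgr is internal API; the exact counts can vary by pandas version).
for name, frames in [('frames1', frames1), ('frames2', frames2)]:
    print(name, [df._mgr.nblocks for df in frames[:3]])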

Timings

The absolute timing values don't matter, but their ratios do. FWIW, this was run on an r5.2xlarge EC2 instance.

Using IPython's line magic %timeit:

dt0 = %timeit -o pd.concat(frames0)
# 707 µs ± 862 ns per loop (mean ± std. dev. of 7 runs, 1,000 loops each)

dt1 = %timeit -o pd.concat(frames1)
# 886 µs ± 643 ns per loop (mean ± std. dev. of 7 runs, 1,000 loops each)

dt2 = %timeit -o pd.concat(frames2)
# 453 ms ± 3.62 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

>>> dt2.best / dt1.best
509.53007324284977

# and, ironically
dt3 = %timeit -o pd.concat([df.copy() for df in frames2])
# 2.31 ms ± 23.6 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

Equalities:

>>> all([df1.equals(df2) for df1, df2 in zip(frames1, frames2)])
True

>>> pd.concat(frames1).equals(pd.concat(frames2))
True

With timeit instead (easier to add to an integration test suite):

import timeit

results = [
    timeit.Timer(
        'pd.concat(frames)',
        'import pandas as pd',
        globals={'frames': frames},
    ).autorange()
    for frames in [frames0, frames1, frames2]
]
results = [t / n for n, t in results]

>>> results[2] / results[1]
473.753

The results are similar for the versions I checked: 1.4.4, 1.5.2 and 1.5.3.

However, for 1.4.2, the results seemed much better (next comment).

pdemarti commented:

With pandas=1.4.2, we get, for the same test:

>>> pd.__version__
'1.4.2'

dt0 = %timeit -o pd.concat(frames0)
# 676 µs ± 6.12 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)

dt1 = %timeit -o pd.concat(frames1)
# 674 µs ± 780 ns per loop (mean ± std. dev. of 7 runs, 1,000 loops each)

dt2 = %timeit -o pd.concat(frames2)
# 17.4 ms ± 146 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

>>> dt2.best / dt1.best
25.59028179864691

# and, ironically
dt3 = %timeit -o pd.concat([df.copy() for df in frames2])
# 3.33 ms ± 21.4 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

pdemarti commented:

We implemented a quick fix for homogeneous frames that is similar to the one proposed by @bckim1318:

from functools import reduce

def np_concat(frames):
    a = np.vstack([df.to_numpy() for df in frames])
    cols = frames[0].columns
    ix = reduce(lambda x, y: x.append(y), [df.index for df in frames])
    # this also gets the index name if they are all consistent
    return pd.DataFrame(a, index=ix, columns=cols)

assert np_concat([df for df in frames2]).equals(pd.concat(frames1))

dt4 = %timeit -o np_concat([df for df in frames2])
# 509 µs ± 3 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)

While it is much better in performance, it of course causes issues with heterogeneous frames. For instance:

df = pd.DataFrame({'a': [1, 7], 'b': ['foo', 'bar'], 'c': [1.2, 2.3]})

>>> vars(df)
{'_is_copy': None,
 '_mgr': BlockManager
 Items: Index(['a', 'b', 'c'], dtype='object')
 Axis 1: RangeIndex(start=0, stop=2, step=1)
 NumericBlock: slice(0, 1, 1), 1 x 2, dtype: int64
 ObjectBlock: slice(1, 2, 1), 1 x 2, dtype: object
 NumericBlock: slice(2, 3, 1), 1 x 2, dtype: float64,  # 3 distinct blocks (nice)
 '_item_cache': {},
 '_attrs': {},
 '_flags': <Flags(allows_duplicate_labels=True)>}

But:

>>> df.to_numpy()
array([[1, 'foo', 1.2],
       [7, 'bar', 2.3]], dtype=object)  # numpy can only be homogeneous

And thus:

>>> vars(np_concat([df]))
{'_is_copy': None,
 '_mgr': BlockManager
 Items: Index(['a', 'b', 'c'], dtype='object')
 Axis 1: RangeIndex(start=0, stop=2, step=1)
 ObjectBlock: slice(0, 3, 1), 3 x 2, dtype: object,   # single object block
 '_item_cache': {},
 '_attrs': {},
 '_flags': <Flags(allows_duplicate_labels=True)>}

Which may affect downstream operations.

That said, df.T.T also collapses the frame to a homogeneous single "object" block:

>>> vars(df.T.T)
{'_is_copy': None,
 '_mgr': BlockManager
 Items: Index(['a', 'b', 'c'], dtype='object')
 Axis 1: RangeIndex(start=0, stop=2, step=1)
 ObjectBlock: slice(0, 3, 1), 3 x 2, dtype: object,
 '_item_cache': {},
 '_attrs': {},
 '_flags': <Flags(allows_duplicate_labels=True)>}
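
A possible way to keep the shortcut without tripping over the heterogeneous case (a sketch; np_concat_safe is our own name, not a pandas API): take the numpy path only when every frame has identical columns and a single shared dtype, and fall back to pd.concat otherwise.

def np_concat_safe(frames):
    # Fast path only when all frames share one dtype and identical columns;
    # otherwise defer to pd.concat, which keeps per-column blocks and dtypes.
    first_cols = frames[0].columns
    dtypes = {dtype for df in frames for dtype in df.dtypes}
    if len(dtypes) == 1 and all(df.columns.equals(first_cols) for df in frames):
        return np_concat(frames)
    return pd.concat(frames)

With the frames2 above (all float64, identical columns) this should still take the fast path, while the mixed-dtype df above would fall back to pd.concat.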

topper-123 (Contributor) commented:

Thanks for the testing @pdemarti, your examples are very clear.

Shame that this is difficult to fix. Can you do a bisect on the pandas git repo to pinpoint the PR where this performance degradation happened? That could be the best next step to fix this.

I will take a look at my PR over the coming days (#51419). You're of course also very welcome to take a look at or propose commits to that PR.

pdemarti commented:

I did bisect, @topper-123: the first commit where the ratio above jumps from 31 to 690 is ad7dc56. The commit message indicates work in that area:

| * ad7dc56 - (9 months ago) Backport PR #47372 on branch 1.4.x (REGR: revert behaviour change for concat with empty/all-NaN data) (#47472)

The commit immediately before that, 30a3b98, gives a ratio of 34.91, similar to v1.4.2.

Personally, I think the ratio should be close to 1.0. In fact, simply doing a df.copy() on the frames before concat gives a performance ratio of 1.02 instead of 30 (v1.4.2) or 690 (v1.4.3).

jbrockmendel (Member) commented:

I've spent some time tracking this down, recording some preliminary observations here:

  1. We lost a decent amount of perf in REGR: revert behaviour change for concat with empty/all-NaN data #47372 that we'll get back if/when we un-revert it, which may need a deprecation cycle.
  2. _maybe_reindex_columns_na_proxy is breaking single-block Managers into many-block Managers, which has a huge subsequent performance hit.
  3. The manual concat in the OP gives differently-sorted results than the pandas concat
  4. _combine_concat_plans is a mess
