BUG: DataFrame.pivot fails with pyarrow backend if variable is already encoded as dictionary #53051

Open
3 tasks done
randolf-scholz opened this issue May 3, 2023 · 6 comments · May be fixed by #59099
Labels: Arrow (pyarrow functionality), Bug, Reshaping (Concat, Merge/Join, Stack/Unstack, Explode)

Comments

@randolf-scholz
Contributor

Pandas version checks

  • I have checked that this issue has not already been reported.

  • I have confirmed this bug exists on the latest version of pandas.

  • I have confirmed this bug exists on the main branch of pandas.

Reproducible Example

import pandas as pd

df = (
    pd.DataFrame([("A", 1), ("B", 2), ("C", 3)], columns=["var", "val"])
    .astype({"var": "string", "val": "float32"})
    .astype({"var": "category", "val": "float32"})
)

# write and reload as parquet with pyarrow backend
df.to_parquet("demo.parquet")
df = pd.read_parquet("demo.parquet", dtype_backend="pyarrow")
print(df.dtypes)  # var is now dictionary[int32,string]

df.pivot(columns=["var"], values=["val"])  # ✘ ArrowNotImplementedError

Issue Description

This bug is caused by apache/arrow#34890: currently, pyarrow's dictionary_encode is not idempotent, i.e. it fails with ArrowNotImplementedError instead of returning the data as-is if the array is already of dictionary type.

So either one waits until this is fixed upstream in pyarrow, or an additional check needs to be added in pandas to test whether the series is already of dictionary type.
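
A minimal standalone reproduction of the upstream behaviour, using only pyarrow (my sketch of apache/arrow#34890, not taken verbatim from that report; with pyarrow 11.0.0, the version used here, the second call should raise):

import pyarrow as pa

arr = pa.array(["A", "B", "C"]).dictionary_encode()
print(arr.type)          # dictionary<values=string, indices=int32, ordered=0>
arr.dictionary_encode()  # raises ArrowNotImplementedError (apache/arrow#34890)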

Expected Behavior

Pivot should work with categorical data when using the pyarrow backend.

Installed Versions

INSTALLED VERSIONS

commit : 37ea63d
python : 3.11.3.final.0
python-bits : 64
OS : Linux
OS-release : 5.19.0-41-generic
Version : #42~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Tue Apr 18 17:40:00 UTC 2
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8

pandas : 2.0.1
numpy : 1.23.5
pytz : 2023.3
dateutil : 2.8.2
setuptools : 67.7.2
pip : 23.1.2
Cython : 0.29.34
pytest : 7.3.1
hypothesis : None
sphinx : 7.0.0
blosc : None
feather : None
xlsxwriter : None
lxml.etree : 4.9.2
html5lib : None
pymysql : 1.0.3
psycopg2 : None
jinja2 : 3.1.2
IPython : 8.13.1
pandas_datareader: None
bs4 : 4.12.2
bottleneck : None
brotli : None
fastparquet : 2023.4.0
fsspec : 2023.4.0
gcsfs : None
matplotlib : 3.7.1
numba : 0.57.0rc1
numexpr : 2.8.4
odfpy : None
openpyxl : 3.1.2
pandas_gbq : None
pyarrow : 11.0.0
pyreadstat : None
pyxlsb : None
s3fs : None
scipy : 1.10.1
snappy : None
sqlalchemy : 1.4.48
tables : 3.8.0
tabulate : 0.9.0
xarray : 2023.4.2
xlrd : None
zstandard : None
tzdata : 2023.3
qtpy : None
pyqt5 : None

randolf-scholz added the Bug and Needs Triage labels on May 3, 2023
lithomas1 added the Reshaping and Arrow labels and removed the Needs Triage label on May 3, 2023
@lithomas1
Member

Thanks for the report. It looks like we should probably avoid factorizing dictionary pyarrow types in pivot, then. For reference, this is the traceback.

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/thomasli/pandas/pandas/core/frame.py", line 8792, in pivot
    return pivot(self, index=index, columns=columns, values=values)
  File "/Users/thomasli/pandas/pandas/core/reshape/pivot.py", line 557, in pivot
    multiindex = MultiIndex.from_arrays(index_list)
  File "/Users/thomasli/pandas/pandas/core/indexes/multi.py", line 523, in from_arrays
    codes, levels = factorize_from_iterables(arrays)
  File "/Users/thomasli/pandas/pandas/core/arrays/categorical.py", line 2762, in factorize_from_iterables
    codes, categories = zip(*(factorize_from_iterable(it) for it in iterables))
  File "/Users/thomasli/pandas/pandas/core/arrays/categorical.py", line 2762, in <genexpr>
    codes, categories = zip(*(factorize_from_iterable(it) for it in iterables))
  File "/Users/thomasli/pandas/pandas/core/arrays/categorical.py", line 2735, in factorize_from_iterable
    cat = Categorical(values, ordered=False)
  File "/Users/thomasli/pandas/pandas/core/arrays/categorical.py", line 443, in __init__
    codes, categories = factorize(values, sort=True)
  File "/Users/thomasli/pandas/pandas/core/algorithms.py", line 743, in factorize
    return values.factorize(sort=sort, use_na_sentinel=use_na_sentinel)
  File "/Users/thomasli/pandas/pandas/core/base.py", line 1051, in factorize
    codes, uniques = algorithms.factorize(
  File "/Users/thomasli/pandas/pandas/core/algorithms.py", line 759, in factorize
    codes, uniques = values.factorize(use_na_sentinel=use_na_sentinel)
  File "/Users/thomasli/pandas/pandas/core/arrays/arrow/array.py", line 891, in factorize
    encoded = data.dictionary_encode(null_encoding=null_encoding)
  File "pyarrow/table.pxi", line 586, in pyarrow.lib.ChunkedArray.dictionary_encode
  File "pyarrow/_compute.pyx", line 560, in pyarrow._compute.call_function
  File "pyarrow/_compute.pyx", line 355, in pyarrow._compute.Function.call
  File "pyarrow/error.pxi", line 144, in pyarrow.lib.pyarrow_internal_check_status
  File "pyarrow/error.pxi", line 121, in pyarrow.lib.check_status
pyarrow.lib.ArrowNotImplementedError: Function 'dictionary_encode' has no kernel matching input types (dictionary<values=string, indices=int32, ordered=0>)
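
A rough standalone sketch of that guard (my illustration, not the actual pandas patch; pa.types.is_dictionary is the stock pyarrow type check, and the data name mirrors the traceback above):

import pyarrow as pa

# Only dictionary-encode when the backing data is not already dictionary-typed,
# since the dictionary_encode kernel has no dictionary input.
data = pa.chunked_array([pa.array(["A", "B", "C"]).dictionary_encode()])
if pa.types.is_dictionary(data.type):
    encoded = data  # already dictionary-encoded; reuse as-is
else:
    encoded = data.dictionary_encode(null_encoding="mask")
print(encoded.type)  # dictionary<values=string, indices=int32, ordered=0>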

@aswinj18

aswinj18 commented May 3, 2023

take

mroeschke added the Upstream issue label on May 3, 2023
@chrisburesi

chrisburesi commented Jun 6, 2023

For information, the same type of issue, with an example:
On a 16 GB PC, a DataFrame loaded via read_parquet with the pyarrow engine is pivoted on 4 categorical columns with one int column summed
-> memory allocation overload.

<class 'pandas.core.frame.DataFrame'>
Index: 3711683 entries, 14 to 9070
Data columns (total 5 columns):
 #   Column       Dtype
 0   ntusername   category
 1   user         category
 2   techno       category
 3   server       category
 4   calls_count  int64
memory usage: 276.1+ MB

test = pd.pivot_table(df, values=['calls_count'], index=['textdata_tables', 'user', 'ntusername', 'date'], aggfunc=np.sum).reset_index()

=> MemoryError: Unable to allocate 10.8 GiB for an array with shape (8510, 682784) and data type int16

Saving to CSV, reloading (the categories come back as object), and executing the pivot returns a valid pivot in a second.
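
A minimal sketch of that workaround, assuming df is the frame from the info() output above, the column names from the snippet as posted, and a hypothetical file name:

import numpy as np
import pandas as pd

# Round-trip through CSV so the grouping columns come back as plain object
# dtype instead of category before pivoting (file name is hypothetical).
df.to_csv("calls.csv", index=False)
df = pd.read_csv("calls.csv")
test = pd.pivot_table(
    df,
    values=["calls_count"],
    index=["textdata_tables", "user", "ntusername", "date"],
    aggfunc=np.sum,
).reset_index()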

@lithomas1
Member

Fixed as of #53232 I think.

@randolf-scholz
Contributor Author

With current libraries (pyarrow 16.1, pandas 2.2.2, numpy 2.0.0), the example in my OP fails with

AttributeError: 'Series' object has no attribute '_pa_array'

lithomas1 reopened this on Jun 25, 2024
lithomas1 removed the Upstream issue label on Jun 25, 2024
@randolf-scholz
Contributor Author

randolf-scholz commented Jun 25, 2024

@lithomas1 The bug seems to be this line:

arr = values._pa_array.combine_chunks()

The issue is that values can be one of (ABCIndex, ABCSeries, ExtensionArray), but not all of these have the _pa_array attribute.
I wrote a naive patch here: #59099
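
A naive sketch in that spirit (my assumption of what such a patch could look like; see #59099 for the actual change). Here values is the object described above:

from pandas.core.dtypes.generic import ABCIndex, ABCSeries

# Unwrap an Index/Series to its backing ArrowExtensionArray, which does expose
# _pa_array, before touching the pyarrow-specific attribute.
if isinstance(values, (ABCIndex, ABCSeries)):
    values = values._values
arr = values._pa_array.combine_chunks()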
