
[Python] pyarrow.Table.from_pandas() causing memory leak #37989

Closed
RizzoV opened this issue Oct 3, 2023 · 7 comments
Comments

@RizzoV

RizzoV commented Oct 3, 2023


Issue Description

(continuing from pandas-dev/pandas#55296)

pyarrow.Table.from_pandas() causes a memory leak on DataFrames containing nested structs. A sample problematic data schema and a compliant data generator are included in the Reproducible Example below.

From the Reproducible Example:

  • 1st pa.Table.from_pandas() call:
Line #    Mem usage    Increment  Occurrences   Line Contents
=============================================================
    74     91.9 MiB     91.9 MiB           1   @profile
    75                                         def convert_df_to_table(df: pd.DataFrame):
    76     91.9 MiB      0.0 MiB           1       table = pa.Table.from_pandas(df, schema=pa.schema(sample_schema))
  • 2000th call:
Line #    Mem usage    Increment  Occurrences   Line Contents
=============================================================
    74    140.1 MiB    140.1 MiB           1   @profile
    75                                         def convert_df_to_table(df: pd.DataFrame):
    76    140.1 MiB      0.0 MiB           1       table = pa.Table.from_pandas(df, schema=pa.schema(sample_schema))
  • 10000th call:
Line #    Mem usage    Increment  Occurrences   Line Contents
=============================================================
    74    329.4 MiB    329.4 MiB           1   @profile
    75                                         def convert_df_to_table(df: pd.DataFrame):
    76    329.5 MiB      0.0 MiB           1       table = pa.Table.from_pandas(df, schema=pa.schema(sample_schema))

Reproducible Example

import os
import string
import sys
from random import choice, randint
from uuid import uuid4

import pandas as pd
import pyarrow as pa
from memory_profiler import profile

sample_schema = pa.struct(
    [
        ("a", pa.string()),
        (
            "b",
            pa.struct(
                [
                    ("ba", pa.list_(pa.string())),
                    ("bc", pa.string()),
                    ("bd", pa.string()),
                    ("be", pa.list_(pa.string())),
                    (
                        "bf",
                        pa.list_(
                            pa.struct(
                                [
                                    (
                                        "bfa",
                                        pa.struct(
                                            [
                                                ("bfaa", pa.string()),
                                                ("bfab", pa.string()),
                                                ("bfac", pa.string()),
                                                ("bfad", pa.float64()),
                                                ("bfae", pa.string()),
                                            ]
                                        ),
                                    )
                                ]
                            )
                        ),
                    ),
                ]
            ),
        ),
        ("c", pa.int64()),
        ("d", pa.int64()),
        ("e", pa.string()),
        (
            "f",
            pa.struct(
                [
                    ("fa", pa.string()),
                    ("fb", pa.string()),
                    ("fc", pa.string()),
                    ("fd", pa.string()),
                    ("fe", pa.string()),
                    ("ff", pa.string()),
                    ("fg", pa.string()),
                ]
            ),
        ),
        ("g", pa.int64()),
    ]
)


def generate_random_string(str_length: int) -> str:
    return "".join(
        [choice(string.ascii_lowercase + string.digits) for n in range(str_length)]
    )


@profile
def convert_df_to_table(df: pd.DataFrame) -> None:
    table = pa.Table.from_pandas(df, schema=pa.schema(sample_schema))


def generate_random_data():
    return {
        "a": [generate_random_string(128)],
        "b": [
            {
                "ba": [generate_random_string(128) for i in range(50)],
                "bc": generate_random_string(128),
                "bd": generate_random_string(128),
                "be": [generate_random_string(128) for i in range(50)],
                "bf": [
                    {
                        "bfa": {
                            "bfaa": generate_random_string(128),
                            "bfab": generate_random_string(128),
                            "bfac": generate_random_string(128),
                            "bfad": randint(0, 2**32),
                            "bfae": generate_random_string(128),
                        }
                    }
                ],
            }
        ],
        "c": [randint(0, 2**32)],
        "d": [randint(0, 2**32)],
        "e": [generate_random_string(128)],
        "f": [
            {
                "fa": generate_random_string(128),
                "fb": generate_random_string(128),
                "fc": generate_random_string(128),
                "fd": generate_random_string(128),
                "fe": generate_random_string(128),
                "ff": generate_random_string(128),
                "fg": generate_random_string(128),
            }
        ],
        "g": [randint(0, 2**32)],
    }


def main():
    for i in range(10000):
        df = pd.DataFrame.from_dict(generate_random_data())
        # pa.jemalloc_set_decay_ms(0)
        convert_df_to_table(df)  # memory leak


if __name__ == "__main__":
    main()

Installed Versions

INSTALLED VERSIONS
------------------
python              : 3.10.9.final.0
python-bits         : 64
OS                  : Darwin
OS-release          : 22.6.0
Version             : Darwin Kernel Version 22.6.0: Fri Sep 15 13:39:52 PDT 2023; root:xnu-8796.141.3.700.8~1/RELEASE_X86_64
machine             : x86_64
processor           : i386
byteorder           : little
LC_ALL              : None
LANG                : it_IT.UTF-8
LOCALE              : it_IT.UTF-8

pyarrow             : 13.0.0
pandas              : 2.1.1
numpy               : 1.26.0

Component(s)

Python

@jorisvandenbossche
Member

@RizzoV thanks for the report and nice reproducer!

I can reproduce this running your example with memray:

[attached memray flamegraph screenshot: newplot(2)]

From the memray stats, it looks like the memory held at the end mostly comes from the lists of strings, so somehow the conversion to Arrow seems to keep those list objects alive (I haven't yet looked at how that is possible, though).
The pandas metadata conversion (the JSON dump) also seems to accumulate memory, although that's a bit strange (and I don't see it in the smaller reproducer below).
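
For reference, a hedged sketch of one way to capture such a profile with memray's Python API, wrapping the reproducer's main() (the output file name is a placeholder; the memray CLI works as well):

import memray

with memray.Tracker("from_pandas_repro.bin"):
    main()

# afterwards, render the profile with: python -m memray flamegraph from_pandas_repro.bin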

It seems to happen specifically when a list is nested inside another column (e.g. a struct of lists), so I can also reproduce the observation with this simplified example:

import string
from random import choice

import pandas as pd
import pyarrow as pa


sample_schema = pa.struct(
    [
        ( "a", pa.struct([("aa", pa.list_(pa.string()))])),
    ]
)


def generate_random_string(str_length: int) -> str:
    return "".join(
        [choice(string.ascii_lowercase + string.digits) for n in range(str_length)]
    )


def generate_random_data():
    return {
        "a": [{"aa": [generate_random_string(128) for i in range(50)]}],
    }


def main():
    for i in range(10000):
        df = pd.DataFrame.from_dict(generate_random_data())
        # pa.jemalloc_set_decay_ms(0)
        table = pa.Table.from_pandas(df, schema=pa.schema(sample_schema))


if __name__ == "__main__":
    main()

@Ashokcs94

@RizzoV / @jorisvandenbossche: any solution for the memory leak in to_parquet()? We have also been facing this issue for a long time.

@RizzoV
Author

RizzoV commented Dec 6, 2023

@Ashokcs94 no solution from my side, sadly; we still have to work around it.

@chunyang
Contributor

chunyang commented Mar 7, 2024

I believe I found a fix for this in #40412, please take a look :)

jorisvandenbossche pushed a commit that referenced this issue Mar 15, 2024
…m Python list of dicts (#40412)

### Rationale for this change

When creating Arrow arrays using `pa.array` from lists of dicts, memory usage is observed to increase over time despite the created arrays going out of scope. The issue appears to happen only for lists of dicts, as opposed to lists of numpy arrays or other types.

### What changes are included in this PR?

This PR makes two changes to _python_to_arrow.cc_, to ensure that new references created by [`PyDict_Items`](https://docs.python.org/3/c-api/dict.html#c.PyDict_Items) and [`PySequence_GetItem`](https://docs.python.org/3/c-api/sequence.html#c.PySequence_GetItem) are properly reference counted via `OwnedRef`.
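
As a rough illustration of the reference-counting issue (a hedged sketch, not part of the original PR; it assumes the leaked reference is to the outer dict, which `sys.getrefcount` can make visible):
```python
import sys

import pyarrow as pa

d = {"data": b"x" * 1024}
baseline = sys.getrefcount(d)
for _ in range(5):
    pa.array([d])
# On an affected version the count may grow with each call;
# on a fixed version it should return to the baseline.
print(baseline, sys.getrefcount(d))
```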

### Are these changes tested?

The change was tested against the following reproduction script:
```python
"""Repro memory increase observed when creating pyarrow arrays."""

# System imports
import logging

# Third-party imports
import numpy as np
import psutil
import pyarrow as pa

LIST_LENGTH = 5 * (2**20)
LOGGER = logging.getLogger(__name__)

def initialize_logging() -> None:
    logging.basicConfig(
        format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
        level=logging.INFO,
    )

def get_rss_in_mib() -> float:
    """Return the Resident Set Size of the current process in MiB."""
    return psutil.Process().memory_info().rss / 1024 / 1024

def main() -> None:
    initialize_logging()

    for idx in range(100):
        data = np.random.randint(256, size=(LIST_LENGTH,), dtype=np.uint8)
        # data = "a" * LIST_LENGTH
        pa.array([{"data": data}])
        if (idx + 1) % 10 == 0:
            LOGGER.info(
                "%d dict arrays created, RSS: %.2f MiB", idx + 1, get_rss_in_mib()
            )

    LOGGER.info("---------")

    for idx in range(100):
        pa.array(
            [
                np.random.randint(256, size=(LIST_LENGTH,), dtype=np.uint8).tobytes(),
            ]
        )
        if (idx + 1) % 10 == 0:
            LOGGER.info(
                "%d non-dict arrays created, RSS: %.2f MiB", idx + 1, get_rss_in_mib()
            )

if __name__ == "__main__":
    main()
```

Prior to this change, the reproduction script produces the following output:
```
2024-03-07 23:14:17,560 - __main__ - INFO - 10 dict arrays created, RSS: 121.05 MiB
2024-03-07 23:14:17,698 - __main__ - INFO - 20 dict arrays created, RSS: 171.07 MiB
2024-03-07 23:14:17,835 - __main__ - INFO - 30 dict arrays created, RSS: 221.09 MiB
2024-03-07 23:14:17,971 - __main__ - INFO - 40 dict arrays created, RSS: 271.11 MiB
2024-03-07 23:14:18,109 - __main__ - INFO - 50 dict arrays created, RSS: 320.86 MiB
2024-03-07 23:14:18,245 - __main__ - INFO - 60 dict arrays created, RSS: 371.65 MiB
2024-03-07 23:14:18,380 - __main__ - INFO - 70 dict arrays created, RSS: 422.18 MiB
2024-03-07 23:14:18,516 - __main__ - INFO - 80 dict arrays created, RSS: 472.20 MiB
2024-03-07 23:14:18,650 - __main__ - INFO - 90 dict arrays created, RSS: 522.21 MiB
2024-03-07 23:14:18,788 - __main__ - INFO - 100 dict arrays created, RSS: 572.23 MiB
2024-03-07 23:14:18,789 - __main__ - INFO - ---------
2024-03-07 23:14:19,001 - __main__ - INFO - 10 non-dict arrays created, RSS: 567.61 MiB
2024-03-07 23:14:19,211 - __main__ - INFO - 20 non-dict arrays created, RSS: 567.61 MiB
2024-03-07 23:14:19,417 - __main__ - INFO - 30 non-dict arrays created, RSS: 567.61 MiB
2024-03-07 23:14:19,623 - __main__ - INFO - 40 non-dict arrays created, RSS: 567.61 MiB
2024-03-07 23:14:19,832 - __main__ - INFO - 50 non-dict arrays created, RSS: 567.61 MiB
2024-03-07 23:14:20,047 - __main__ - INFO - 60 non-dict arrays created, RSS: 567.61 MiB
2024-03-07 23:14:20,253 - __main__ - INFO - 70 non-dict arrays created, RSS: 567.61 MiB
2024-03-07 23:14:20,499 - __main__ - INFO - 80 non-dict arrays created, RSS: 567.61 MiB
2024-03-07 23:14:20,725 - __main__ - INFO - 90 non-dict arrays created, RSS: 567.61 MiB
2024-03-07 23:14:20,950 - __main__ - INFO - 100 non-dict arrays created, RSS: 567.61 MiB
```

After this change, the output changes to the following. Notice that the Resident Set Size (RSS) no longer increases as more Arrow arrays are created from lists of dicts.
```
2024-03-07 23:14:47,246 - __main__ - INFO - 10 dict arrays created, RSS: 81.73 MiB
2024-03-07 23:14:47,353 - __main__ - INFO - 20 dict arrays created, RSS: 76.53 MiB
2024-03-07 23:14:47,445 - __main__ - INFO - 30 dict arrays created, RSS: 82.20 MiB
2024-03-07 23:14:47,537 - __main__ - INFO - 40 dict arrays created, RSS: 86.59 MiB
2024-03-07 23:14:47,634 - __main__ - INFO - 50 dict arrays created, RSS: 80.28 MiB
2024-03-07 23:14:47,734 - __main__ - INFO - 60 dict arrays created, RSS: 85.44 MiB
2024-03-07 23:14:47,827 - __main__ - INFO - 70 dict arrays created, RSS: 85.44 MiB
2024-03-07 23:14:47,921 - __main__ - INFO - 80 dict arrays created, RSS: 85.44 MiB
2024-03-07 23:14:48,024 - __main__ - INFO - 90 dict arrays created, RSS: 82.94 MiB
2024-03-07 23:14:48,132 - __main__ - INFO - 100 dict arrays created, RSS: 87.84 MiB
2024-03-07 23:14:48,132 - __main__ - INFO - ---------
2024-03-07 23:14:48,229 - __main__ - INFO - 10 non-dict arrays created, RSS: 87.84 MiB
2024-03-07 23:14:48,324 - __main__ - INFO - 20 non-dict arrays created, RSS: 87.84 MiB
2024-03-07 23:14:48,420 - __main__ - INFO - 30 non-dict arrays created, RSS: 87.84 MiB
2024-03-07 23:14:48,516 - __main__ - INFO - 40 non-dict arrays created, RSS: 87.84 MiB
2024-03-07 23:14:48,613 - __main__ - INFO - 50 non-dict arrays created, RSS: 87.84 MiB
2024-03-07 23:14:48,710 - __main__ - INFO - 60 non-dict arrays created, RSS: 87.84 MiB
2024-03-07 23:14:48,806 - __main__ - INFO - 70 non-dict arrays created, RSS: 87.84 MiB
2024-03-07 23:14:48,905 - __main__ - INFO - 80 non-dict arrays created, RSS: 87.84 MiB
2024-03-07 23:14:49,009 - __main__ - INFO - 90 non-dict arrays created, RSS: 87.84 MiB
2024-03-07 23:14:49,108 - __main__ - INFO - 100 non-dict arrays created, RSS: 87.84 MiB
```

When this change is tested against the reproduction script provided in #37989 (comment), the reported memory increase is no longer observed.

I have not added a unit test, but it may be possible to add one similar to the reproduction scripts used above, provided there's an accurate way to capture process memory usage on all the platforms that Arrow supports, and provided memory usage is not affected by concurrently running tests. If this code could be tested under valgrind, that may be an even better way to go.

### Are there any user-facing changes?

* GitHub Issue: #37989

Authored-by: Chuck Yang <chuck.yang@getcruise.com>
Signed-off-by: Joris Van den Bossche <jorisvandenbossche@gmail.com>
@jorisvandenbossche jorisvandenbossche added this to the 16.0.0 milestone Mar 15, 2024
@jorisvandenbossche
Copy link
Member

Issue resolved by pull request #40412

galipremsagar pushed a commit to galipremsagar/arrow that referenced this issue Apr 15, 2024
…ay from Python list of dicts (apache#40412)

galipremsagar pushed a commit to galipremsagar/arrow that referenced this issue Apr 15, 2024
…ay from Python list of dicts (apache#40412)

deadlycoconuts added a commit to caraml-dev/merlin that referenced this issue Sep 23, 2024
# Description
It seems like there's a [memory
leak](apache/arrow#37989) in older `pyarrow` versions, and the fix was only [addressed
recently](https://github.com/apache/arrow/commits/apache-arrow-16.0.0?after=6a28035c2b49b432dc63f5ee7524d76b4ed2d762+174)
in `pyarrow==16.0.0`. Since we have `pyarrow` pinned to `<11.0.0` within
the batch predictor, this PR loosens the restriction to allow up to the
latest released version, `pyarrow<=17.0.0`, instead.

# Modifications
- `python/batch-predictor/requirements.txt` - Loosened the pinned `pyarrow` version (see the sketch below)
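
For illustration, the described pin change would look roughly like this (a hypothetical rendering of the `requirements.txt` diff, not copied from the PR):

```diff
-pyarrow<11.0.0
+pyarrow<=17.0.0
```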

# Tests

# Checklist
- [x] Added PR label
- [ ] Added unit test, integration, and/or e2e tests
- [ ] Tested locally
- [ ] Updated documentation
- [ ] Update Swagger spec if the PR introduce API changes
- [ ] Regenerated Golang and Python client if the PR introduces API
changes

# Release Notes

```release-note
NONE
```
@nicksilver

nicksilver commented Nov 4, 2024

Not sure if I should open a new bug report, but I am hitting the same memory leak when writing a column of lists of strings (not dicts) to Parquet from pandas (pyarrow==18.0.0). It seems like a similar issue and probably a similar fix.
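
For context, a hedged sketch of the kind of workload described (an assumed shape, not a confirmed reproducer; the path is a placeholder):

import pandas as pd

df = pd.DataFrame({"tokens": [["alpha", "beta", "gamma"]] * 1_000})

for _ in range(10_000):
    # Each call converts the list-of-strings column via pyarrow before writing;
    # the reported behaviour is RSS growth across iterations.
    df.to_parquet("/tmp/strings.parquet", engine="pyarrow")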

@DatSplit

DatSplit commented Nov 4, 2024

@nicksilver I suspect I'm having the same or a similar issue right now.
