[Python] `pyarrow.Table.from_pandas()` causing memory leak #37989
@RizzoV thanks for the report and nice reproducer! I can reproduce this running your example with memray. From the memray stats, it looks like the memory being held at the end is mostly coming from the lists of strings, so somehow the conversion to Arrow keeps those list objects alive (I haven't yet looked at how that is possible, though). It seems to happen specifically when a list is nested inside another column (e.g. a struct of list), so I can also reproduce the observation with this simplified example:

```python
import string
from random import choice

import pandas as pd
import pyarrow as pa

sample_schema = pa.struct(
    [
        ("a", pa.struct([("aa", pa.list_(pa.string()))])),
    ]
)


def generate_random_string(str_length: int) -> str:
    return "".join(
        [choice(string.ascii_lowercase + string.digits) for n in range(str_length)]
    )


def generate_random_data():
    return {
        "a": [{"aa": [generate_random_string(128) for i in range(50)]}],
    }


def main():
    for i in range(10000):
        df = pd.DataFrame.from_dict(generate_random_data())
        # pa.jemalloc_set_decay_ms(0)
        table = pa.Table.from_pandas(df, schema=pa.schema(sample_schema))


if __name__ == "__main__":
    main()
```
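For anyone who wants to repeat the memray measurement mentioned above, here is a minimal sketch of one way to run the reproducer under memray's tracker; the module name `repro` and the output file name are assumptions for illustration, not part of the original comment:

```python
# Minimal profiling sketch (assumption: the reproducer above is saved as
# repro.py in the same directory). It records allocations to leak_repro.bin,
# which can then be inspected with e.g. `memray stats leak_repro.bin`.
import memray

from repro import main  # hypothetical module name for the script above

if __name__ == "__main__":
    with memray.Tracker("leak_repro.bin"):
        main()
```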
@RizzoV / @jorisvandenbossche: any solution for the memory leak in `pa.Table.from_pandas()`?

@Ashokcs94 no solution from my side sadly, we still have to work around it.

I believe I found a fix for this in #40412, please take a look :)
…m Python list of dicts (#40412)

### Rationale for this change

When creating Arrow arrays using `pa.array` from lists of dicts, memory usage is observed to increase over time despite the created arrays going out of scope. The issue appears to only happen for lists of dicts, as opposed to lists of numpy arrays or other types.

### What changes are included in this PR?

This PR makes two changes to _python_to_arrow.cc_, to ensure that new references created by [`PyDict_Items`](https://docs.python.org/3/c-api/dict.html#c.PyDict_Items) and [`PySequence_GetItem`](https://docs.python.org/3/c-api/sequence.html#c.PySequence_GetItem) are properly reference counted via `OwnedRef`.

### Are these changes tested?

The change was tested against the following reproduction script:

```python
"""Repro memory increase observed when creating pyarrow arrays."""
# System imports
import logging

# Third-party imports
import numpy as np
import psutil
import pyarrow as pa

LIST_LENGTH = 5 * (2**20)

LOGGER = logging.getLogger(__name__)


def initialize_logging() -> None:
    logging.basicConfig(
        format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
        level=logging.INFO,
    )


def get_rss_in_mib() -> float:
    """Return the Resident Set Size of the current process in MiB."""
    return psutil.Process().memory_info().rss / 1024 / 1024


def main() -> None:
    initialize_logging()
    for idx in range(100):
        data = np.random.randint(256, size=(LIST_LENGTH,), dtype=np.uint8)
        # data = "a" * LIST_LENGTH
        pa.array([{"data": data}])
        if (idx + 1) % 10 == 0:
            LOGGER.info(
                "%d dict arrays created, RSS: %.2f MiB", idx + 1, get_rss_in_mib()
            )
    LOGGER.info("---------")
    for idx in range(100):
        pa.array(
            [
                np.random.randint(256, size=(LIST_LENGTH,), dtype=np.uint8).tobytes(),
            ]
        )
        if (idx + 1) % 10 == 0:
            LOGGER.info(
                "%d non-dict arrays created, RSS: %.2f MiB", idx + 1, get_rss_in_mib()
            )


if __name__ == "__main__":
    main()
```

Prior to this change, the reproduction script produces the following output:

```
2024-03-07 23:14:17,560 - __main__ - INFO - 10 dict arrays created, RSS: 121.05 MiB
2024-03-07 23:14:17,698 - __main__ - INFO - 20 dict arrays created, RSS: 171.07 MiB
2024-03-07 23:14:17,835 - __main__ - INFO - 30 dict arrays created, RSS: 221.09 MiB
2024-03-07 23:14:17,971 - __main__ - INFO - 40 dict arrays created, RSS: 271.11 MiB
2024-03-07 23:14:18,109 - __main__ - INFO - 50 dict arrays created, RSS: 320.86 MiB
2024-03-07 23:14:18,245 - __main__ - INFO - 60 dict arrays created, RSS: 371.65 MiB
2024-03-07 23:14:18,380 - __main__ - INFO - 70 dict arrays created, RSS: 422.18 MiB
2024-03-07 23:14:18,516 - __main__ - INFO - 80 dict arrays created, RSS: 472.20 MiB
2024-03-07 23:14:18,650 - __main__ - INFO - 90 dict arrays created, RSS: 522.21 MiB
2024-03-07 23:14:18,788 - __main__ - INFO - 100 dict arrays created, RSS: 572.23 MiB
2024-03-07 23:14:18,789 - __main__ - INFO - ---------
2024-03-07 23:14:19,001 - __main__ - INFO - 10 non-dict arrays created, RSS: 567.61 MiB
2024-03-07 23:14:19,211 - __main__ - INFO - 20 non-dict arrays created, RSS: 567.61 MiB
2024-03-07 23:14:19,417 - __main__ - INFO - 30 non-dict arrays created, RSS: 567.61 MiB
2024-03-07 23:14:19,623 - __main__ - INFO - 40 non-dict arrays created, RSS: 567.61 MiB
2024-03-07 23:14:19,832 - __main__ - INFO - 50 non-dict arrays created, RSS: 567.61 MiB
2024-03-07 23:14:20,047 - __main__ - INFO - 60 non-dict arrays created, RSS: 567.61 MiB
2024-03-07 23:14:20,253 - __main__ - INFO - 70 non-dict arrays created, RSS: 567.61 MiB
2024-03-07 23:14:20,499 - __main__ - INFO - 80 non-dict arrays created, RSS: 567.61 MiB
2024-03-07 23:14:20,725 - __main__ - INFO - 90 non-dict arrays created, RSS: 567.61 MiB
2024-03-07 23:14:20,950 - __main__ - INFO - 100 non-dict arrays created, RSS: 567.61 MiB
```

After this change, the output changes to the following. Notice that the Resident Set Size (RSS) no longer increases as more Arrow arrays are created from lists of dicts.

```
2024-03-07 23:14:47,246 - __main__ - INFO - 10 dict arrays created, RSS: 81.73 MiB
2024-03-07 23:14:47,353 - __main__ - INFO - 20 dict arrays created, RSS: 76.53 MiB
2024-03-07 23:14:47,445 - __main__ - INFO - 30 dict arrays created, RSS: 82.20 MiB
2024-03-07 23:14:47,537 - __main__ - INFO - 40 dict arrays created, RSS: 86.59 MiB
2024-03-07 23:14:47,634 - __main__ - INFO - 50 dict arrays created, RSS: 80.28 MiB
2024-03-07 23:14:47,734 - __main__ - INFO - 60 dict arrays created, RSS: 85.44 MiB
2024-03-07 23:14:47,827 - __main__ - INFO - 70 dict arrays created, RSS: 85.44 MiB
2024-03-07 23:14:47,921 - __main__ - INFO - 80 dict arrays created, RSS: 85.44 MiB
2024-03-07 23:14:48,024 - __main__ - INFO - 90 dict arrays created, RSS: 82.94 MiB
2024-03-07 23:14:48,132 - __main__ - INFO - 100 dict arrays created, RSS: 87.84 MiB
2024-03-07 23:14:48,132 - __main__ - INFO - ---------
2024-03-07 23:14:48,229 - __main__ - INFO - 10 non-dict arrays created, RSS: 87.84 MiB
2024-03-07 23:14:48,324 - __main__ - INFO - 20 non-dict arrays created, RSS: 87.84 MiB
2024-03-07 23:14:48,420 - __main__ - INFO - 30 non-dict arrays created, RSS: 87.84 MiB
2024-03-07 23:14:48,516 - __main__ - INFO - 40 non-dict arrays created, RSS: 87.84 MiB
2024-03-07 23:14:48,613 - __main__ - INFO - 50 non-dict arrays created, RSS: 87.84 MiB
2024-03-07 23:14:48,710 - __main__ - INFO - 60 non-dict arrays created, RSS: 87.84 MiB
2024-03-07 23:14:48,806 - __main__ - INFO - 70 non-dict arrays created, RSS: 87.84 MiB
2024-03-07 23:14:48,905 - __main__ - INFO - 80 non-dict arrays created, RSS: 87.84 MiB
2024-03-07 23:14:49,009 - __main__ - INFO - 90 non-dict arrays created, RSS: 87.84 MiB
2024-03-07 23:14:49,108 - __main__ - INFO - 100 non-dict arrays created, RSS: 87.84 MiB
```

When this change is tested against the reproduction script provided in #37989 (comment), the reported memory increase is no longer observed.

I have not added a unit test, but it may be possible to add one similar to the reproduction scripts used above, provided there's an accurate way to capture process memory usage on all the platforms that Arrow supports, and provided memory usage is not affected by concurrently running tests. If this code could be tested under valgrind, that may be an even better way to go.

### Are there any user-facing changes?

* GitHub Issue: #37989

Authored-by: Chuck Yang <chuck.yang@getcruise.com>
Signed-off-by: Joris Van den Bossche <jorisvandenbossche@gmail.com>
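The PR notes that a proper regression test would need a reliable, cross-platform way to measure process memory. As a rough sketch of what such a check could look like, using psutil RSS as in the reproduction script above (the test name, payload size, and 64 MiB threshold are illustrative assumptions, not part of the PR):

```python
# Hypothetical regression-test sketch: asserts that repeatedly converting a
# list of dicts with pa.array() does not grow the process RSS unboundedly.
# Thresholds and iteration counts are illustrative assumptions.
import numpy as np
import psutil
import pyarrow as pa


def test_no_leak_for_list_of_dicts():
    proc = psutil.Process()
    payload = np.random.randint(256, size=(1 << 20,), dtype=np.uint8)

    # Warm up allocators so the baseline RSS is representative.
    for _ in range(10):
        pa.array([{"data": payload}])
    baseline = proc.memory_info().rss

    for _ in range(100):
        pa.array([{"data": payload}])

    growth_mib = (proc.memory_info().rss - baseline) / (1024 * 1024)
    assert growth_mib < 64, f"RSS grew by {growth_mib:.1f} MiB"
```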
Issue resolved by pull request #40412.
# Description

It seems like there's a [memory leak](apache/arrow#37989) in older `pyarrow` versions, and the fix has only been [addressed recently](https://github.com/apache/arrow/commits/apache-arrow-16.0.0?after=6a28035c2b49b432dc63f5ee7524d76b4ed2d762+174) in `pyarrow==16.0.0`. Since we have `pyarrow` pinned to `<11.0.0` within the batch predictor, this PR loosens that restriction and simply allows the latest released version, `pyarrow<=17.0.0`, instead.

# Modifications

- `python/batch-predictor/requirements.txt` - loosened the pinned `pyarrow` version

# Tests

# Checklist

- [x] Added PR label
- [ ] Added unit test, integration, and/or e2e tests
- [ ] Tested locally
- [ ] Updated documentation
- [ ] Update Swagger spec if the PR introduces API changes
- [ ] Regenerated Golang and Python client if the PR introduces API changes

# Release Notes

```release-note
NONE
```
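For projects that cannot upgrade immediately, a minimal sketch of a runtime guard is shown below; the `16.0.0` threshold is taken from the comment above and is an assumption, not independently verified against the Arrow changelog:

```python
# Minimal sketch: warn when the installed pyarrow predates the release that
# the comment above says contains the fix (16.0.0). Threshold is an assumption.
import warnings

import pyarrow as pa
from packaging.version import Version

if Version(pa.__version__) < Version("16.0.0"):
    warnings.warn(
        f"pyarrow {pa.__version__} may leak memory when converting nested "
        "list-of-dict data; consider upgrading (see apache/arrow#37989)."
    )
```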
Not sure if I should open up a new bug report, but I am having the same memory leak issue when writing a list of strings (not dicts) to Parquet from pandas (pyarrow==18.0.0). It seems like a similar issue and probably a similar fix.

@nicksilver I suspect I'm running into the same issue, or a similar one, right now.
### Describe the bug, including details regarding any error messages, version, and platform.

### Issue Description

(continuing from pandas-dev/pandas#55296)

`pyarrow.Table.from_pandas()` causes a memory leak on DataFrames containing nested structs. A sample problematic data schema and a compliant data generator are included in the Reproducible Example below. Running the Reproducible Example, memory usage keeps growing with each `pa.Table.from_pandas()` call.

### Reproducible Example

### Installed Versions

### Component(s)

Python