BUG: Issue #6864: concat and sort_values are slow, to_feather has a memory bug #6865

Open
river7816 opened this issue Jan 18, 2024 · 4 comments
Labels
bug 🦗 Something isn't working · External Pull requests and issues from people who do not regularly contribute to modin · Memory 💾 Issues related to memory · P3 Very minor bugs, or features we can hopefully add some day.

Comments

@river7816

Modin version checks

  • I have checked that this issue has not already been reported.

  • I have confirmed this bug exists on the latest released version of Modin.

  • I have confirmed this bug exists on the main branch of Modin. (In order to do this you can follow this guide.)

Reproducible Example

import modin.pandas as pd
import os
import ray
from tqdm import tqdm

def merge_feather_files(folder_path, suffix):
    # Collect all file names matching the suffix
    files = [f for f in os.listdir(folder_path) if f.endswith(suffix + '.feather')]

    # Read all files and concatenate them
    df_list = [pd.read_feather(os.path.join(folder_path, file)) for file in tqdm(files)]
    merged_df = pd.concat(df_list)

    # Sort by the date column and write the result
    merged_df.sort_values(by='date', inplace=True)
    merged_df.to_feather('factors_date/factors' + suffix + '.feather')


# Example usage
folder_path = 'factors_batch'  # path to the folder with the input files
for i in range(1, 25):
    merge_feather_files(folder_path, f'_{i}')

Issue Description

At first, I did not think it was a bug, so I reported it as issue #6864. Then I ran the same code with `import pandas as pd` and it works fine, so it must be a Modin problem. By the way, Modin's `read_feather` is also slower than pandas'.

I want to concatenate 5100 dataframes into one big dataframe. The total memory usage of these 5100 dataframes is about 160 GB. However, the processing speed is really slow: I have been waiting for about 40 minutes and it still hasn't finished. My machine has 128 cores, and I am certain that not all of them are being utilized.
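
(As a side note on the "not all cores are utilized" observation: Modin exposes its parallelism settings through `modin.config`. Below is a minimal sketch for inspecting and capping them; the numbers are purely illustrative, and the caps should be applied before any dataframes are created.)

import modin.config as cfg

# Inspect how many CPUs Modin believes it may use and how many partitions it creates per axis.
print(cfg.CpuCount.get())
print(cfg.NPartitions.get())

# Illustrative assumption: capping parallelism can lower peak memory
# (fewer workers holding partitions at once) at the cost of throughput.
cfg.CpuCount.put(64)
cfg.NPartitions.put(64)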

Expected Behavior

100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5096/5096 [31:45<00:00,  2.67it/s]
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5096/5096 [33:31<00:00,  2.53it/s]
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5096/5096 [33:13<00:00,  2.56it/s]
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5096/5096 [33:09<00:00,  2.56it/s]
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5096/5096 [33:08<00:00,  2.56it/s]

It should work fine and produce the output shown above.

Error Logs

UserWarning: `DataFrame.to_feather` is not currently supported by PandasOnRay, defaulting to pandas implementation.
Please refer to https://modin.readthedocs.io/en/stable/supported_apis/defaulting_to_pandas.html for explanation.
(raylet) [2024-01-18 12:25:56,935 E 1383634 1383634] (raylet) node_manager.cc:3022: 6 Workers (tasks / actors) killed due to memory pressure (OOM), 0 Workers crashed due to other reasons at node (ID: 695b81b72bd60e890b66e98b555a4048a90dbdd0909377d214beb80f, IP: 192.168.8.180) over the last time period. To see more information about the Workers killed on this node, use `ray logs raylet.out -ip 192.168.8.180`
(raylet) 
(raylet) Refer to the documentation on how to address the out of memory issue: https://docs.ray.io/en/latest/ray-core/scheduling/ray-oom-prevention.html. Consider provisioning more memory on this node or reducing task parallelism by requesting more CPUs per task. To adjust the kill threshold, set the environment variable `RAY_memory_usage_threshold` when starting Ray. To disable worker killing, set the environment variable `RAY_memory_monitor_refresh_ms` to zero.
(raylet) [2024-01-18 12:26:59,006 E 1383634 1383634] (raylet) node_manager.cc:3022: 8 Workers (tasks / actors) killed due to memory pressure (OOM), 0 Workers crashed due to other reasons at node (ID: 695b81b72bd60e890b66e98b555a4048a90dbdd0909377d214beb80f, IP: 192.168.8.180) over the last time period. To see more information about the Workers killed on this node, use `ray logs raylet.out -ip 192.168.8.180`
(raylet) 
(raylet) Refer to the documentation on how to address the out of memory issue: https://docs.ray.io/en/latest/ray-core/scheduling/ray-oom-prevention.html. Consider provisioning more memory on this node or reducing task parallelism by requesting more CPUs per task. To adjust the kill threshold, set the environment variable `RAY_memory_usage_threshold` when starting Ray. To disable worker killing, set the environment variable `RAY_memory_monitor_refresh_ms` to zero.
(raylet) [2024-01-18 12:28:01,160 E 1383634 1383634] (raylet) node_manager.cc:3022: 8 Workers (tasks / actors) killed due to memory pressure (OOM), 0 Workers crashed due to other reasons at node (ID: 695b81b72bd60e890b66e98b555a4048a90dbdd0909377d214beb80f, IP: 192.168.8.180) over the last time period. To see more information about the Workers killed on this node, use `ray logs raylet.out -ip 192.168.8.180`
(raylet) 
(raylet) Refer to the documentation on how to address the out of memory issue: https://docs.ray.io/en/latest/ray-core/scheduling/ray-oom-prevention.html. Consider provisioning more memory on this node or reducing task parallelism by requesting more CPUs per task. To adjust the kill threshold, set the environment variable `RAY_memory_usage_threshold` when starting Ray. To disable worker killing, set the environment variable `RAY_memory_monitor_refresh_ms` to zero.
(raylet) [2024-01-18 12:29:07,013 E 1383634 1383634] (raylet) node_manager.cc:3022: 5 Workers (tasks / actors) killed due to memory pressure (OOM), 0 Workers crashed due to other reasons at node (ID: 695b81b72bd60e890b66e98b555a4048a90dbdd0909377d214beb80f, IP: 192.168.8.180) over the last time period. To see more information about the Workers killed on this node, use `ray logs raylet.out -ip 192.168.8.180`
(raylet) 
(raylet) Refer to the documentation on how to address the out of memory issue: https://docs.ray.io/en/latest/ray-core/scheduling/ray-oom-prevention.html. Consider provisioning more memory on this node or reducing task parallelism by requesting more CPUs per task. To adjust the kill threshold, set the environment variable `RAY_memory_usage_threshold` when starting Ray. To disable worker killing, set the environment variable `RAY_memory_monitor_refresh_ms` to zero.
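
For reference, a minimal sketch of applying the Ray settings mentioned in the log above; they have to be set before Ray starts, and the values shown are illustrative rather than a recommendation.

import os

# Assumption: these must be exported before ray.init() (or before the first Modin
# operation that starts Ray), otherwise the already-running raylet will not pick them up.
os.environ["RAY_memory_usage_threshold"] = "0.98"    # raise the OOM-kill threshold
# os.environ["RAY_memory_monitor_refresh_ms"] = "0"  # or disable worker killing entirely

import ray
import modin.pandas as pd

ray.init()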


Installed Versions

UserWarning: Setuptools is replacing distutils.

INSTALLED VERSIONS

commit : 47a9a4a
python : 3.10.13.final.0
python-bits : 64
OS : Linux
OS-release : 6.2.0-39-generic
Version : #40~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Nov 16 10:53:04 UTC 2
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : zh_CN.UTF-8
LOCALE : zh_CN.UTF-8

Modin dependencies

modin : 0.26.0
ray : 2.9.0
dask : 2024.1.0
distributed : 2024.1.0
hdk : None

pandas dependencies

pandas : 2.1.4
numpy : 1.26.2
pytz : 2023.3.post1
dateutil : 2.8.2
setuptools : 68.2.2
pip : 23.3.2
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : 1.4.6
psycopg2 : None
jinja2 : 3.1.2
IPython : 8.20.0
pandas_datareader : None
bs4 : 4.12.2
bottleneck : None
dataframe-api-compat: None
fastparquet : None
fsspec : 2023.12.2
gcsfs : None
matplotlib : 3.8.2
numba : 0.58.1
numexpr : 2.8.8
odfpy : None
openpyxl : 3.1.2
pandas_gbq : None
pyarrow : 14.0.2
pyreadstat : None
pyxlsb : None
s3fs : None
scipy : 1.10.1
sqlalchemy : 1.4.50
tables : 3.9.2
tabulate : 0.9.0
xarray : 2023.12.0
xlrd : 1.2.0
zstandard : 0.22.0
tzdata : 2023.4
qtpy : None
pyqt5 : None

river7816 added the bug 🦗 (Something isn't working) and Triage 🩹 (Issues that need triage) labels on Jan 18, 2024
@anmyachev
Collaborator

Hi @river7816! Thanks for the contribution!

How much RAM is available on your machine?

@river7816
Author

> Hi @river7816! Thanks for the contribution!
>
> How much RAM is available on your machine?

320G, and the code works fine with pandas.

@anmyachev
Collaborator

> > Hi @river7816! Thanks for the contribution!
> >
> > How much RAM is available on your machine?
>
> 320G, and the code works fine with pandas.

Distributed computing generally requires more memory than non-distributed computing. It seems that in this particular situation there is not enough memory because `to_feather` is not optimized in Modin and simply defaults to pandas (during which a complete copy of the entire dataframe is created). If you have the option to use the parquet format, `to_parquet` is optimized and may consume less memory.
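
A minimal sketch of the suggested workaround: the same helper as in the reproducible example, but writing parquet instead of feather (the function name and output path are illustrative assumptions).

import os
import modin.pandas as pd
from tqdm import tqdm

def merge_to_parquet(folder_path, suffix):
    # Identical to merge_feather_files, except for the final write.
    files = [f for f in os.listdir(folder_path) if f.endswith(suffix + '.feather')]
    df_list = [pd.read_feather(os.path.join(folder_path, file)) for file in tqdm(files)]
    merged_df = pd.concat(df_list)
    merged_df.sort_values(by='date', inplace=True)
    # Per the comment above, to_parquet is optimized in Modin and may avoid the
    # full in-memory copy made by the pandas fallback of to_feather.
    merged_df.to_parquet('factors_date/factors' + suffix + '.parquet')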

anmyachev added the Memory 💾 (Issues related to memory), External (Pull requests and issues from people who do not regularly contribute to modin), and P3 (Very minor bugs, or features we can hopefully add some day) labels and removed the Triage 🩹 (Issues that need triage) label on Jan 19, 2024
@river7816
Author

> Distributed computing generally requires more memory than non-distributed computing. It seems that in this particular situation there is not enough memory because `to_feather` is not optimized in Modin and simply defaults to pandas (during which a complete copy of the entire dataframe is created). If you have the option to use the parquet format, `to_parquet` is optimized and may consume less memory.

Thanks, I will try the parquet format.
