river7816 opened this issue Jan 18, 2024 · 4 comments
Labels: bug 🦗 (Something isn't working), External (Pull requests and issues from people who do not regularly contribute to modin), Memory 💾 (Issues related to memory), P3 (Very minor bugs, or features we can hopefully add some day)
At first, I did not think this was a bug, so I reported it as issue #6864. Then I used import pandas as pd instead and everything worked fine, so it must be a Modin problem. By the way, Modin's read_feather is also slower than pandas's.

I want to concatenate 5,100 dataframes into one big dataframe. The memory usage of these 5,100 dataframes is about 160 GB. However, processing is really slow: I have been waiting for about 40 minutes and it still hasn't finished. My machine has 128 cores, and I am certain that not all of them are being utilized.
UserWarning: `DataFrame.to_feather` is not currently supported by PandasOnRay, defaulting to pandas implementation.
Please refer to https://modin.readthedocs.io/en/stable/supported_apis/defaulting_to_pandas.html for explanation.
(raylet) [2024-01-18 12:25:56,935 E 1383634 1383634] (raylet) node_manager.cc:3022: 6 Workers (tasks / actors) killed due to memory pressure (OOM), 0 Workers crashed due to other reasons at node (ID: 695b81b72bd60e890b66e98b555a4048a90dbdd0909377d214beb80f, IP: 192.168.8.180) over the last time period. To see more information about the Workers killed on this node, use `ray logs raylet.out -ip 192.168.8.180`
(raylet)
(raylet) Refer to the documentation on how to address the out of memory issue: https://docs.ray.io/en/latest/ray-core/scheduling/ray-oom-prevention.html. Consider provisioning more memory on this node or reducing task parallelism by requesting more CPUs per task. To adjust the kill threshold, set the environment variable `RAY_memory_usage_threshold` when starting Ray. To disable worker killing, set the environment variable `RAY_memory_monitor_refresh_ms` to zero.
(raylet) [2024-01-18 12:26:59,006 E 1383634 1383634] (raylet) node_manager.cc:3022: 8 Workers (tasks / actors) killed due to memory pressure (OOM), 0 Workers crashed due to other reasons at node (ID: 695b81b72bd60e890b66e98b555a4048a90dbdd0909377d214beb80f, IP: 192.168.8.180) over the last time period. To see more information about the Workers killed on this node, use `ray logs raylet.out -ip 192.168.8.180`
(raylet)
(raylet) Refer to the documentation on how to address the out of memory issue: https://docs.ray.io/en/latest/ray-core/scheduling/ray-oom-prevention.html. Consider provisioning more memory on this node or reducing task parallelism by requesting more CPUs per task. To adjust the kill threshold, set the environment variable `RAY_memory_usage_threshold` when starting Ray. To disable worker killing, set the environment variable `RAY_memory_monitor_refresh_ms` to zero.
(raylet) [2024-01-18 12:28:01,160 E 1383634 1383634] (raylet) node_manager.cc:3022: 8 Workers (tasks / actors) killed due to memory pressure (OOM), 0 Workers crashed due to other reasons at node (ID: 695b81b72bd60e890b66e98b555a4048a90dbdd0909377d214beb80f, IP: 192.168.8.180) over the last time period. To see more information about the Workers killed on this node, use `ray logs raylet.out -ip 192.168.8.180`
(raylet)
(raylet) Refer to the documentation on how to address the out of memory issue: https://docs.ray.io/en/latest/ray-core/scheduling/ray-oom-prevention.html. Consider provisioning more memory on this node or reducing task parallelism by requesting more CPUs per task. To adjust the kill threshold, set the environment variable `RAY_memory_usage_threshold` when starting Ray. To disable worker killing, set the environment variable `RAY_memory_monitor_refresh_ms` to zero.
(raylet) [2024-01-18 12:29:07,013 E 1383634 1383634] (raylet) node_manager.cc:3022: 5 Workers (tasks / actors) killed due to memory pressure (OOM), 0 Workers crashed due to other reasons at node (ID: 695b81b72bd60e890b66e98b555a4048a90dbdd0909377d214beb80f, IP: 192.168.8.180) over the last time period. To see more information about the Workers killed on this node, use `ray logs raylet.out -ip 192.168.8.180`
(raylet)
(raylet) Refer to the documentation on how to address the out of memory issue: https://docs.ray.io/en/latest/ray-core/scheduling/ray-oom-prevention.html. Consider provisioning more memory on this node or reducing task parallelism by requesting more CPUs per task. To adjust the kill threshold, set the environment variable `RAY_memory_usage_threshold` when starting Ray. To disable worker killing, set the environment variable `RAY_memory_monitor_refresh_ms` to zero.
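The environment variables named in the log must be set before Ray starts. A minimal sketch of how one might apply them (the threshold value is illustrative, not a recommendation):

```python
import os

# Knobs from Ray's OOM-prevention docs linked in the log above. They have to
# be exported before Ray initializes, i.e. before the first Modin operation.
os.environ["RAY_memory_usage_threshold"] = "0.98"  # raise the kill threshold (fraction of node memory)
# os.environ["RAY_memory_monitor_refresh_ms"] = "0"  # uncomment to disable worker killing entirely

import modin.pandas as pd  # Ray initializes lazily on first use
```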
Hi @river7816! Thanks for the contribution!
How much RAM is available on your machine?
320 GB; the code works fine with pandas.
Distributed computing generally requires more memory than non-distributed computing. It seems that in this particular situation there is not enough memory because of the to_feather function, which is not optimized and simply calls pandas (during which a complete copy of the entire dataframe is created). If you have the option of using the Parquet format, to_parquet is optimized and may consume less memory.
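A minimal sketch of the suggested workaround, changing only the final write (file names are hypothetical):

```python
import modin.pandas as pd

# Read and concatenate the parts as before; only the output format changes.
parts = [pd.read_feather(f"part_{i:04d}.feather") for i in range(5100)]
big = pd.concat(parts, ignore_index=True)

# to_parquet is optimized in Modin, avoiding the full in-memory copy that
# the pandas-defaulting to_feather makes.
big.to_parquet("combined.parquet")
```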
anmyachev added the Memory 💾 (Issues related to memory), External (Pull requests and issues from people who do not regularly contribute to modin), and P3 (Very minor bugs, or features we can hopefully add some day) labels and removed the Triage 🩹 (Issues that need triage) label on Jan 19, 2024.
Modin version checks
I have checked that this issue has not already been reported.
I have confirmed this bug exists on the latest released version of Modin.
I have confirmed this bug exists on the main branch of Modin. (In order to do this you can follow this guide.)
Reproducible Example
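Based on the description above, the reproducing code is presumably along these lines (file names are hypothetical):

```python
import modin.pandas as pd

# ~5,100 feather files totalling ~160 GB are read and concatenated.
dfs = [pd.read_feather(f"data_{i}.feather") for i in range(5100)]
result = pd.concat(dfs, ignore_index=True)
result.to_feather("result.feather")  # triggers the default-to-pandas warning above
```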
Expected Behavior
It should work fine and return this.
Installed Versions
UserWarning: Setuptools is replacing distutils.
INSTALLED VERSIONS
commit : 47a9a4a
python : 3.10.13.final.0
python-bits : 64
OS : Linux
OS-release : 6.2.0-39-generic
Version : #40~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Nov 16 10:53:04 UTC 2
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : zh_CN.UTF-8
LOCALE : zh_CN.UTF-8
Modin dependencies
modin : 0.26.0
ray : 2.9.0
dask : 2024.1.0
distributed : 2024.1.0
hdk : None
pandas dependencies
pandas : 2.1.4
numpy : 1.26.2
pytz : 2023.3.post1
dateutil : 2.8.2
setuptools : 68.2.2
pip : 23.3.2
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : 1.4.6
psycopg2 : None
jinja2 : 3.1.2
IPython : 8.20.0
pandas_datareader : None
bs4 : 4.12.2
bottleneck : None
dataframe-api-compat: None
fastparquet : None
fsspec : 2023.12.2
gcsfs : None
matplotlib : 3.8.2
numba : 0.58.1
numexpr : 2.8.8
odfpy : None
openpyxl : 3.1.2
pandas_gbq : None
pyarrow : 14.0.2
pyreadstat : None
pyxlsb : None
s3fs : None
scipy : 1.10.1
sqlalchemy : 1.4.50
tables : 3.9.2
tabulate : 0.9.0
xarray : 2023.12.0
xlrd : 1.2.0
zstandard : 0.22.0
tzdata : 2023.4
qtpy : None
pyqt5 : None