How to set CPU and memory size for a Ray job? #3942
hongbo-miao asked this question in Q&A · Unanswered
Replies: 1 comment 6 replies
Hi @hongbo-miao, by default Daft will attempt to use all of the resources available on the cluster, so there is no need to set those parameters. If you are running a UDF, see our UDF docs for how to set the CPU and memory requirements for that UDF specifically. For reference, here are the relevant Ray docs for ray.init(): https://docs.ray.io/en/latest/ray-core/starting-ray.html
My Ray cluster has nodes of different sizes.

Here is my current code:

I originally thought I could set memory and CPU by

However, this gives me an error.

In a standard Ray job, I can set:

@ray.remote(num_cpus=10, memory=100 * 10**9)

How do I set the CPU and memory size for a Ray job when using it with Daft? Thanks!