In makeClusterFunctionsMulticore, the argument max.load defaults to ncpus - 1. I think it should instead default to detectCores() - 1 or detectCores() (unrelated even to the mc.cores option). The load should be limited by the available physical resources, not by what we allow the current process to use.
Example: assume four cores, two of which are busy. I want to limit a computation to the two remaining cores and therefore call makeClusterFunctionsMulticore(2), with the effect that no workers are allowed to start, because the load (2) is clearly larger than ncpus - 1 (= 1).
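The scenario can be sketched numerically; the argument names (ncpus, max.load) are from batchtools, everything else here is illustrative:

```r
# Hypothetical numbers for the four-core scenario described above.
physical.cores <- 4          # what detectCores() would return
current.load   <- 2          # two cores are already busy
ncpus          <- 2          # cores we allow this process to use

max.load.current  <- ncpus - 1            # current default: 1
max.load.proposed <- physical.cores - 1   # proposed default: 3

# With the current default, the load exceeds max.load, so no worker starts:
blocked.now      <- current.load > max.load.current    # TRUE
# With the proposed default, workers are allowed to start:
blocked.proposed <- current.load > max.load.proposed   # FALSE
```

Under the proposed default the two idle cores would be used, while the current default refuses to schedule anything on this machine.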
On that note, what is the practical difference between ncpus and max.jobs? #62 is slightly related.
Yep, that is a much better default. Note that we are well aware that our choices are pretty defensive (we want to avoid automatic over-utilization). We kind of expect the user to choose sane and efficient settings.