The Python multiprocessing module has a core limit, which leads to problems when a large number of cores is set in Python.
Currently, AlphaPool has a process limit that caps at 50 processes. However, this limit is not applied everywhere (e.g., n_jobs in the ML tasks). This leads to errors when running on systems with a large number of CPUs.
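For illustration, a minimal sketch of the capping pattern described above (the names `MAX_PROCESSES` and `capped_worker_count` are hypothetical, not AlphaPool's actual code; the constant mirrors the current cap of 50):

```python
import multiprocessing

# Sketch of the capping pattern; MAX_PROCESSES mirrors the current
# AlphaPool cap of 50. Names are illustrative, not AlphaPool's code.
MAX_PROCESSES = 50

def capped_worker_count(requested=None):
    """Return a worker count that never exceeds the process cap."""
    available = multiprocessing.cpu_count()
    if requested is None:
        requested = available
    return min(requested, available, MAX_PROCESSES)

if __name__ == "__main__":
    with multiprocessing.Pool(processes=capped_worker_count()) as pool:
        print(pool.map(abs, [-1, -2, -3]))
```

Every place that spawns workers would go through one helper like this, so the cap cannot be bypassed.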
Should be straightforward to set in the "set_worker_count" of the performance notebook, or are there some ML tasks where you really cannot update it?
(1) Use the MAX_WORKER_COUNT from the performance notebook everywhere (i.e., in AlphaPool) and set it when starting a workflow according to the parameter in settings.
(2) To parallelize the ML tasks, pass MAX_WORKER_COUNT to n_jobs (used in scoring and in alignment) so that the cap is respected there as well.
(3) Handle the edge cases (e.g., there is a bug where pyinstaller only allows n_jobs = 1); see the sketch below.
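A sketch combining (2) and (3): `effective_n_jobs` is a hypothetical helper, not existing code, but `sys.frozen` is the standard attribute pyinstaller sets in frozen builds:

```python
import sys

MAX_WORKER_COUNT = 50  # in practice this would come from the settings parameter

def effective_n_jobs(requested):
    """Clamp n_jobs for the ML tasks (scoring, alignment) to the global cap."""
    # Edge case (3): frozen pyinstaller builds reportedly only support
    # n_jobs = 1; sys.frozen is set by pyinstaller at runtime.
    if getattr(sys, "frozen", False):
        return 1
    return min(requested, MAX_WORKER_COUNT)

# Usage (illustrative): model.fit(..., n_jobs=effective_n_jobs(requested_cores))
```

Routing every n_jobs value through one function like this keeps the pyinstaller workaround in a single place instead of scattered across the ML code.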