FIX make sure pre_dispatch cannot do arbitrary code execution (#1321)
adrinjalali committed Sep 5, 2022
1 parent 1fdf308 commit b90f10e
Showing 2 changed files with 12 additions and 2 deletions.
4 changes: 4 additions & 0 deletions CHANGES.rst
@@ -28,6 +28,10 @@ Development version
   specific assembly.
   https://github.com/joblib/joblib/pull/1254
 
+- Fix a security issue where ``eval(pre_dispatch)`` could potentially run
+  arbitrary code. Now only basic numerics are supported.
+  https://github.com/joblib/joblib/pull/1321
+
 Release 1.1.0
 --------------
 
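The changelog entry above refers to the restricted eval call introduced in the parallel.py hunk below. As a minimal standalone sketch of the idea (the helper name and values here are illustrative, not joblib internals): passing an explicit globals mapping with an empty __builtins__ keeps simple arithmetic on n_jobs working while removing direct access to builtins such as __import__ and open.

# Illustrative sketch only: mirrors the restricted eval() call in the patch below.
n_jobs = 4

def evaluate_pre_dispatch(expr):
    # Expose only n_jobs; an empty __builtins__ removes __import__, open, etc.
    return eval(expr, {"n_jobs": n_jobs, "__builtins__": {}}, {})

print(evaluate_pre_dispatch("3 * n_jobs"))  # 12 -- plain arithmetic still works

try:
    evaluate_pre_dispatch("__import__('os').system('echo pwned')")
except NameError as exc:
    # __import__ is no longer defined in the evaluation environment
    print("blocked:", exc)
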
10 changes: 8 additions & 2 deletions joblib/parallel.py
@@ -504,7 +504,9 @@ class Parallel(Logger):
     pre_dispatch: {'all', integer, or expression, as in '3*n_jobs'}
         The number of batches (of tasks) to be pre-dispatched.
         Default is '2*n_jobs'. When batch_size="auto" this is a reasonable
-        default and the workers should never starve.
+        default and the workers should never starve. Note that only basic
+        arithmetic is allowed here and no modules can be used in this
+        expression.
     batch_size: int or 'auto', default: 'auto'
         The number of atomic tasks to dispatch at once to each
         worker. When individual evaluations are very fast, dispatching
@@ -1049,7 +1051,11 @@ def _batched_calls_reducer_callback():
         else:
             self._original_iterator = iterator
             if hasattr(pre_dispatch, 'endswith'):
-                pre_dispatch = eval(pre_dispatch)
+                pre_dispatch = eval(
+                    pre_dispatch,
+                    {"n_jobs": n_jobs, "__builtins__": {}},  # globals
+                    {}  # locals
+                )
             self._pre_dispatch_amount = pre_dispatch = int(pre_dispatch)
 
             # The main thread will consume the first pre_dispatch items and
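For context, here is a minimal usage sketch of the pre_dispatch expression form documented in the docstring hunk above (standard joblib API; the workload is a toy example):

# Pre-dispatch three batches per worker instead of the default '2*n_jobs'.
# Only basic arithmetic on n_jobs is evaluated in the expression.
from math import sqrt

from joblib import Parallel, delayed

results = Parallel(n_jobs=2, pre_dispatch='3*n_jobs')(
    delayed(sqrt)(i ** 2) for i in range(10)
)
print(results)  # [0.0, 1.0, 2.0, ..., 9.0]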
