
Add docs for extra options in patched Spark for Cook (#946)

DaoWen authored and pschorf committed Aug 31, 2018
1 parent af656f1 commit 3da84fbf3ddb43d363cc3bf33b30b058d9280ac3
Showing with 16 additions and 0 deletions.
+16 −0 spark/README.md
@@ -68,6 +68,22 @@ requires shuffle-service):
--conf spark.dynamicAllocation.maxExecutors=10
```

## Cook-specific options

* `spark.cores.max`:
The maximum total number of cores to request from Cook (default 0).
* `spark.executor.failures`:
The number of times an executor can fail before Spark stops relaunching it (default 5).
An executor is considered failed in Cook once it has used up all of its attempts.
* `spark.cook.priority`:
Priority of the submitted tasks, in the range [0, 100] (default 75). Higher values mean higher priority.
* `spark.cook.cores.per.job.max`:
The maximum number of cores per task (default 5). The total number of tasks is computed as
`ceil(spark.cores.max / spark.cook.cores.per.job.max)`.
For example, with `spark.cores.max=20` and the default of 5 cores per task, this yields `ceil(20 / 5) = 4` tasks.
* `spark.executor.cook.hdfs.conf.remote`:
If set, must be a URI accessible from the Cook cluster.
The payload should be a gzipped tarball, which will be unpacked onto the classpath to configure HDFS (see the sketch after this list).
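
As a rough sketch of how these options might be combined in a single submission (the HDFS tarball URI and application jar below are placeholder values, not part of the patched distribution):

```
# Hypothetical invocation; the tarball URI and jar name are placeholders.
spark-submit \
  --conf spark.cores.max=20 \
  --conf spark.executor.failures=5 \
  --conf spark.cook.priority=75 \
  --conf spark.cook.cores.per.job.max=5 \
  --conf spark.executor.cook.hdfs.conf.remote=http://example.com/hdfs-conf.tar.gz \
  my-app.jar
```

The HDFS configuration payload itself could be produced with something like `tar czf hdfs-conf.tar.gz core-site.xml hdfs-site.xml` and hosted anywhere the Cook cluster can reach.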

For more configuration options, please refer to the [Dynamic Allocation](http://spark.apache.org/docs/latest/configuration.html#dynamic-allocation) documentation.
