---
layout: global
title: Job Scheduling
---

- This will become a table of contents (this text will be scraped).
{:toc}
# Overview

Spark has several facilities for scheduling resources between computations. First, recall that, as described in the cluster mode overview, each Spark application (instance of SparkContext) runs an independent set of executor processes. The cluster managers that Spark runs on provide facilities for scheduling across applications. Second, within each Spark application, multiple "jobs" (Spark actions) may be running concurrently if they were submitted by different threads. This is common if your application is serving requests over the network; for example, the Shark server works this way. Spark includes a fair scheduler to schedule resources within each SparkContext.
# Scheduling Across Applications

When running on a cluster, each Spark application gets an independent set of executor JVMs that only run tasks and store data for that application. If multiple users need to share your cluster, there are different options to manage allocation, depending on the cluster manager.
The simplest option, available on all cluster managers, is static partitioning of resources. With this approach, each application is given a maximum amount of resources it can use, and holds onto them for its whole duration. This is the approach used in Spark's standalone and YARN modes, as well as the coarse-grained Mesos mode. Resource allocation can be configured as follows, based on the cluster type:
- **Standalone mode:** By default, applications submitted to the standalone mode cluster will run in
  FIFO (first-in-first-out) order, and each application will try to use all available nodes. You can limit
  the number of nodes an application uses by setting the `spark.cores.max` system property in it. This will
  allow multiple users/applications to run concurrently (see the example below this list). For example, you
  might launch a long-running server that uses 10 cores, and allow users to launch shells that use 20 cores
  each. Finally, in addition to controlling cores, each application's `spark.executor.memory` setting
  controls its memory use.
- **Mesos:** To use static partitioning on Mesos, set the `spark.mesos.coarse` system property to `true`,
  and optionally set `spark.cores.max` to limit each application's resource share as in the standalone
  mode. You should also set `spark.executor.memory` to control the executor memory.
- **YARN:** The `--num-workers` option to the Spark YARN client controls how many workers it will allocate
  on the cluster, while `--worker-memory` and `--worker-cores` control the resources per worker.
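For example, a minimal sketch of static partitioning in standalone mode might look like the following; the master URL, core count, memory size, and application name are placeholders rather than recommended values:

{% highlight scala %}
import org.apache.spark.SparkContext

// Illustrative values: cap this application at 10 cores and 2g of memory per executor.
// These system properties must be set before the SparkContext is created.
System.setProperty("spark.cores.max", "10")
System.setProperty("spark.executor.memory", "2g")

// Placeholder master URL and application name.
val sc = new SparkContext("spark://master:7077", "LongRunningServer")
{% endhighlight %}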
A second option available on Mesos is dynamic sharing of CPU cores. In this mode, each Spark application
still has a fixed and independent memory allocation (set by `spark.executor.memory`), but when the
application is not running tasks on a machine, other applications may run tasks on those cores. This mode
is useful when you expect large numbers of not overly active applications, such as shell sessions from
separate users. However, it comes with a risk of less predictable latency, because it may take a while for
an application to gain back cores on one node when it has work to do. To use this mode, simply use a
`mesos://` URL without setting `spark.mesos.coarse` to true.
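A corresponding sketch for fine-grained Mesos mode simply omits `spark.mesos.coarse`; again, the master URL and memory value here are placeholders:

{% highlight scala %}
import org.apache.spark.SparkContext

// Fine-grained (default) Mesos mode: spark.mesos.coarse is left unset, so cores are
// shared dynamically across applications; only the executor memory is fixed up front.
System.setProperty("spark.executor.memory", "2g")

// Placeholder Mesos master URL and application name.
val sc = new SparkContext("mesos://master:5050", "SharedShell")
{% endhighlight %}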
Note that none of the modes currently provide memory sharing across applications. If you would like to share data this way, we recommend running a single server application that can serve multiple requests by querying the same RDDs. For example, the Shark JDBC server works this way for SQL queries. In future releases, in-memory storage systems such as Tachyon will provide another approach to share RDDs.
# Scheduling Within an Application

Inside a given Spark application (SparkContext instance), multiple parallel jobs can run simultaneously if
they were submitted from separate threads. By "job", in this section, we mean a Spark action (e.g. `save`,
`collect`) and any tasks that need to run to evaluate that action. Spark's scheduler is fully thread-safe
and supports this use case to enable applications that serve multiple requests (e.g. queries for
multiple users).
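As an illustrative sketch (the input path, filter, and thread structure are made up for this example, not prescribed by Spark), two threads can each submit an action on the same SparkContext and have both jobs run concurrently:

{% highlight scala %}
// Assuming `sc` is an existing SparkContext shared by the application;
// the input path and filter condition are placeholders.
val logs = sc.textFile("hdfs://...")

// Each thread submits its own job (a Spark action); the scheduler accepts both concurrently.
val errorCounter = new Thread(new Runnable {
  def run() { println("error lines: " + logs.filter(_.contains("ERROR")).count()) }
})
val lineCounter = new Thread(new Runnable {
  def run() { println("total lines: " + logs.count()) }
})
errorCounter.start(); lineCounter.start()
errorCounter.join(); lineCounter.join()
{% endhighlight %}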
By default, Spark's scheduler runs jobs in FIFO fashion. Each job is divided into "stages" (e.g. map and reduce phases), and the first job gets priority on all available resources while its stages have tasks to launch, then the second job gets priority, etc. If the jobs at the head of the queue don't need to use the whole cluster, later jobs can start to run right away, but if the jobs at the head of the queue are large, then later jobs may be delayed significantly.
Starting in Spark 0.8, it is also possible to configure fair sharing between jobs. Under fair sharing, Spark assigns tasks between jobs in a "round robin" fashion, so that all jobs get a roughly equal share of cluster resources. This means that short jobs submitted while a long job is running can start receiving resources right away and still get good response times, without waiting for the long job to finish. This mode is best for multi-user settings.
To enable the fair scheduler, simply set the `spark.scheduler.mode` property to `FAIR` before creating
a SparkContext:

{% highlight scala %}
System.setProperty("spark.scheduler.mode", "FAIR")
{% endhighlight %}
# Fair Scheduler Pools

The fair scheduler also supports grouping jobs into pools, and setting different scheduling options (e.g. weight) for each pool. This can be useful to create a "high-priority" pool for more important jobs, for example, or to group the jobs of each user together and give users equal shares regardless of how many concurrent jobs they have, instead of giving jobs equal shares. This approach is modeled after the Hadoop Fair Scheduler.
Without any intervention, newly submitted jobs go into a default pool, but jobs' pools can be set by
adding the `spark.scheduler.pool` "local property" to the SparkContext in the thread that's submitting them.
This is done as follows:

{% highlight scala %}
// Assuming context is your SparkContext variable
context.setLocalProperty("spark.scheduler.pool", "pool1")
{% endhighlight %}
After setting this local property, all jobs submitted within this thread (by calls in this thread
to `RDD.save`, `count`, `collect`, etc.) will use this pool name. The setting is per-thread to make
it easy to have a thread run multiple jobs on behalf of the same user. If you'd like to clear the
pool that a thread is associated with, simply call:

{% highlight scala %}
context.setLocalProperty("spark.scheduler.pool", null)
{% endhighlight %}
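For instance, a server that handles each request on its own thread might tag that thread's jobs with a pool before running actions. This is only a sketch: the `handleRequest` helper, the one-pool-per-user naming scheme, and the RDD are hypothetical.

{% highlight scala %}
// Assumes `context` is a shared SparkContext and `data` is an existing RDD[String].
def handleRequest(user: String, data: org.apache.spark.rdd.RDD[String]) {
  new Thread(new Runnable {
    def run() {
      context.setLocalProperty("spark.scheduler.pool", user)        // e.g. one pool per user
      println(user + ": " + data.filter(_.contains(user)).count())  // this job runs in that pool
      context.setLocalProperty("spark.scheduler.pool", null)        // clear the pool when done
    }
  }).start()
}
{% endhighlight %}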
# Default Behavior of Pools

By default, each pool gets an equal share of the cluster (also equal in share to each job in the default pool), but inside each pool, jobs run in FIFO order. For example, if you create one pool per user, this means that each user will get an equal share of the cluster, and that each user's queries will run in order instead of later queries taking resources from that user's earlier ones.
# Configuring Pool Properties

Specific pools' properties can also be modified through a configuration file. Each pool supports three
properties:

- `schedulingMode`: This can be FIFO or FAIR, to control whether jobs within the pool queue up behind
  each other (the default) or share the pool's resources fairly.
- `weight`: This controls the pool's share of the cluster relative to other pools. By default, all pools
  have a weight of 1. If you give a specific pool a weight of 2, for example, it will get 2x more resources
  than other active pools. Setting a high weight such as 1000 also makes it possible to implement priority
  between pools: in essence, the weight-1000 pool will always get to launch tasks first whenever it has
  jobs active.
- `minShare`: Apart from an overall weight, each pool can be given a minimum share (as a number of CPU
  cores) that the administrator would like it to have. The fair scheduler always attempts to meet all
  active pools' minimum shares before redistributing extra resources according to the weights. The
  `minShare` property can therefore be another way to ensure that a pool can always get up to a certain
  number of resources (e.g. 10 cores) quickly without giving it a high priority for the rest of the
  cluster. By default, each pool's `minShare` is 0.
The pool properties can be set by creating an XML file, similar to `conf/fairscheduler.xml.template`,
and setting the `spark.scheduler.allocation.file` property:

{% highlight scala %}
System.setProperty("spark.scheduler.allocation.file", "/path/to/file")
{% endhighlight %}
The format of the XML file is simply a `<pool>` element for each pool, with different elements
within it for the various settings. For example:
{% highlight xml %}
<?xml version="1.0"?>
<allocations>
  <pool name="production">
    <schedulingMode>FAIR</schedulingMode>
    <weight>1</weight>
    <minShare>2</minShare>
  </pool>
  <pool name="test">
    <schedulingMode>FIFO</schedulingMode>
    <weight>2</weight>
    <minShare>3</minShare>
  </pool>
</allocations>
{% endhighlight %}

A full example is also available in `conf/fairscheduler.xml.template`. Note that any pools not
configured in the XML file will simply get default values for all settings (scheduling mode FIFO,
weight 1, and minShare 0).