[SPARK-19606][MESOS] Support constraints in spark-dispatcher #19543
Conversation
test this please
Test build #82937 has finished for PR 19543 at commit
@felixcheung - fixed scala-style issues and also updated the docs to include the new property
Jenkins, retest this please
Test build #82946 has finished for PR 19543 at commit
My apologies. Reread the docs and ran style checks locally this time. Fingers crossed. Thanks @felixcheung
docs/running-on-mesos.md
Outdated
<td><code>spark.mesos.driver.constraints</code></td>
<td>(none)</td>
<td>
  Same as <code>spark.mesos.constraints</code> except applied to drivers.
Maybe add that this is only valid when launching jobs with the Mesos Dispatcher?
command,
Map("spark.mesos.executor.home" -> "test",
  "spark.app.name" -> "test",
  "spark.mesos.driver.constraints" -> "c:v"),
maybe test multiple constraints? (just to catch if someone breaks this code later)
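The suggestion above could look something like the following minimal sketch: a configuration map carrying more than one driver constraint. The keys mirror the PR's test fixture; the semicolon-separated "c1:v1;c2:v2" value follows Spark's documented constraint syntax.

```scala
// Sketch of a test configuration exercising multiple driver constraints.
// Keys mirror the PR's test fixture; the "c1:v1;c2:v2" value follows
// Spark's semicolon-separated constraint format.
val conf = Map(
  "spark.mesos.executor.home" -> "test",
  "spark.app.name" -> "test",
  "spark.mesos.driver.constraints" -> "c1:v1;c2:v2")

// Splitting on ';' should yield one entry per constraint.
val constraints = conf("spark.mesos.driver.constraints").split(";").toSeq
```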
Jenkins, retest this please
Test build #82963 has finished for PR 19543 at commit
@susanxhuynh would you like to review?
@ArtRand - thanks for the feedback. I made the suggested changes to the docs and added some more tests, including a test with multiple constraints
docs/running-on-mesos.md
Outdated
<td>(none)</td>
<td>
  Same as <code>spark.mesos.constraints</code> except applied to drivers when launched through the dispatcher. By default,
  all resource offers will be accepted.
Maybe "all resource offers with sufficient resources will be accepted"?
Maybe clarify in the description for spark.mesos.constraints in the above cell that it only applies to executors.
Just one super-nit.
LGTM.
Which release version is this intended for?
@ttashi-rms v2.2 and up
@felixcheung - any chance of getting this merged into the upcoming 2.2.1 release? I cleaned up the merge conflict
@susanxhuynh or anyone from the Mesos side, would you please review?
@pmackles Thanks for the PR! A few nits, otherwise, LGTM.
docs/running-on-mesos.md
Outdated
<td>(none)</td>
<td>
  Same as <code>spark.mesos.constraints</code> except applied to drivers when launched through the dispatcher. By default,
  all resource offers will be accepted.
Maybe clarify in the description for spark.mesos.constraints in the above cell that it only applies to executors.
@@ -601,10 +602,14 @@ private[spark] class MesosClusterScheduler(
for (submission <- candidates) {
  val driverCpu = submission.cores
  val driverMem = submission.mem
  logTrace(s"Finding offer to launch driver with cpu: $driverCpu, mem: $driverMem")
  val driverConstraints =
    parseConstraintString(submission.conf.get("spark.mesos.driver.constraints", ""))
New config properties would ideally be defined in the config object: https://github.com/apache/spark/blob/master/resource-managers/mesos/src/main/scala/org/apache/spark/deploy/mesos/config.scala#L24 and referenced by the variable name.
@@ -556,7 +556,8 @@ private[spark] class MesosClusterScheduler(
private class ResourceOffer(
    val offer: Offer,
    var remainingResources: JList[Resource],
    var attributes: JList[Attribute]) {
  override def toString(): String = {
    s"Offer id: ${offer.getId}, resources: ${remainingResources}"
Include the attributes in this string?
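The reviewer's point can be illustrated with a small stand-in class. This sketch uses plain strings instead of the Mesos protobuf types (Offer, Resource, Attribute) so it is self-contained; the real class in MesosClusterScheduler differs in its field types.

```scala
// Stand-in for the scheduler's private ResourceOffer, with the attributes
// included in the log string as the reviewer suggested. Plain strings are
// used here instead of Mesos protobuf types for self-containment.
case class ResourceOffer(
    offerId: String,
    remainingResources: Seq[String],
    attributes: Seq[String]) {
  override def toString: String =
    s"Offer id: $offerId, resources: $remainingResources, attributes: $attributes"
}
```

With attributes in the string, trace logs show which constraints an offer could have satisfied, which helps debug rejected offers.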
@pmackles your changes work great. We had to update the SJS (spark-job-server) side to support the new param for the dispatcher. Ended up updating manager_start.sh in case anyone is trying to figure it out.
…ended changes from PR: 4022c4a to 158b6b9
@susanxhuynh, @ttashi-rms, @ArtRand - thanks for the feedback. I was able to make all of the suggested changes.
Any other comments/feedback?
Would you check within one day so that we can resolve this before cutting 2.2.1?
.doc("Attribute based constraints on mesos resource offers. Applied by the dispatcher " +
  "when launching drivers. Default is to accept all offers with sufficient resources.")
.stringConf
.createWithDefault("")
Should this default to ""? Looks like it might still match something.
Line 267 in fc45c2c:
def matchesAttributeRequirements(
@felixcheung I think it's okay. There's a utility function, parseConstraintString, which parses the input string into a Map. If the string is empty, it returns an empty Map:
Line 321 in fc45c2c:
def parseConstraintString(constraintsVal: String): Map[String, Set[String]] = {
This is also consistent with how the executor constraint string is parsed.
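The exchange above can be illustrated with a simplified sketch of the two utilities: an empty constraint string parses to an empty map, and an empty map trivially matches every offer, which is why the "" default is safe. This is a stand-in, not Spark's actual implementation (offer attributes are modeled as plain strings rather than Mesos Attribute protobufs, and value sets are assumed comma-separated).

```scala
// Simplified stand-in for Spark's parseConstraintString /
// matchesAttributeRequirements pair. Constraint syntax assumed:
// "attr:v1,v2;attr2:v3"; a bare "attr" means the attribute must merely exist.
object Constraints {
  def parse(s: String): Map[String, Set[String]] =
    if (s == null || s.isEmpty) Map.empty[String, Set[String]]
    else s.split(";").map { kv =>
      val parts = kv.split(":")
      val values =
        if (parts.length > 1) parts(1).split(",").toSet else Set.empty[String]
      parts(0) -> values
    }.toMap

  def matches(
      constraints: Map[String, Set[String]],
      offerAttrs: Map[String, String]): Boolean =
    // forall on an empty map is vacuously true, so an empty constraint
    // string accepts every offer (with sufficient resources).
    constraints.forall { case (name, values) =>
      offerAttrs.get(name).exists(v => values.isEmpty || values.contains(v))
    }
}
```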
add to white list
Jenkins, retest this please
Test build #83718 has finished for PR 19543 at commit
@susanxhuynh, @ttashi-rms, @ArtRand please take a look
LGTM
As discussed in SPARK-19606, the addition of a new config property named "spark.mesos.constraints.driver" for constraining drivers running on a Mesos cluster. Corresponding unit test added; also tested locally on a Mesos cluster. Please review http://spark.apache.org/contributing.html before opening a pull request. Author: Paul Mackles <pmackles@adobe.com> Closes #19543 from pmackles/SPARK-19606. (cherry picked from commit b3f9dbf) Signed-off-by: Felix Cheung <felixcheung@apache.org>
thanks @susanxhuynh
What changes were proposed in this pull request?
As discussed in SPARK-19606, the addition of a new config property named "spark.mesos.constraints.driver" for constraining drivers running on a Mesos cluster.
How was this patch tested?
Corresponding unit test added; also tested locally on a Mesos cluster.
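For reference, a hypothetical cluster-mode submission through the Mesos dispatcher using the new property. The dispatcher host, class name, jar path, and constraint value below are illustrative placeholders, not taken from the PR.

```shell
# Hypothetical example: submit a driver through the Mesos dispatcher with an
# attribute constraint. Host, class, jar, and "rack:us-east-1" are made up.
spark-submit \
  --deploy-mode cluster \
  --master mesos://dispatcher-host:7077 \
  --conf spark.mesos.driver.constraints="rack:us-east-1" \
  --class org.example.Main \
  /path/to/app.jar
```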