[SPARK-1870] Make spark-submit --jars work in yarn-cluster mode. #848
Conversation
Merged build triggered.
Merged build started.
Merged build finished. All automated tests passed.
All automated tests passed.
```
@@ -479,37 +485,24 @@ object ClientBase {
    extraClassPath.foreach(addClasspathEntry)
    addClasspathEntry(Environment.PWD.$())
    val cachedSecondaryJarLinks =
      sparkConf.getOption(CONF_SPARK_YARN_SECONDARY_JARS).getOrElse("").split(",")
    // Normally the users app.jar is last in case conflicts with spark jars
    if (sparkConf.get("spark.yarn.user.classpath.first", "false").toBoolean) {
```
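To make the ordering concrete, here is a small self-contained sketch of what the flag controls. The names `__app__.jar` and `__spark__.jar` come from the renames mentioned in the commit message; the object and helper here are illustrative stand-ins, not the actual `ClientBase` code:

```scala
import scala.collection.mutable.ArrayBuffer

// Illustrative sketch of the classpath ordering controlled by
// spark.yarn.user.classpath.first. Not the real ClientBase code.
object ClasspathOrderingSketch {
  def buildClasspath(userClasspathFirst: Boolean,
                     secondaryJarLinks: Seq[String]): Seq[String] = {
    val entries = ArrayBuffer("$PWD") // container working dir, where YARN places symlinks
    if (userClasspathFirst) {
      entries += "$PWD/__app__.jar"                  // user's app jar wins class conflicts
      entries ++= secondaryJarLinks.map("$PWD/" + _) // cached links from --jars
      entries += "$PWD/__spark__.jar"
    } else {
      entries += "$PWD/__spark__.jar"                // default: Spark's classes win conflicts
      entries += "$PWD/__app__.jar"
      entries ++= secondaryJarLinks.map("$PWD/" + _)
    }
    entries.toSeq
  }

  def main(args: Array[String]): Unit = {
    val cp = buildClasspath(userClasspathFirst = false,
                            secondaryJarLinks = Seq("dep1.jar", "dep2.jar"))
    println(cp.mkString(":"))
  }
}
```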
What's the difference between `spark.yarn.user.classpath.first` and `spark.files.userClassPathFirst`? To me, they seem to be the same thing under two different configuration names.
PS: in line 47, `* 1. In standalone mode, it will launch an [[org.apache.spark.deploy.yarn.ApplicationMaster]]`: should it be cluster mode now?
`spark.files.userClassPathFirst` is a global configuration that controls the ordering of dynamically added jars, while `spark.yarn.user.classpath.first` is only for YARN. I agree it is a little confusing, but that is independent of this PR. We can create a new JIRA for it.
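As a usage note (a hedged sketch, not taken from this PR): both properties are plain string settings on a `SparkConf`, and only the YARN one takes effect on YARN containers:

```scala
import org.apache.spark.SparkConf

// Hypothetical app config showing both properties discussed above.
val conf = new SparkConf()
  .setAppName("classpath-ordering-demo") // app name is made up
  // Global setting: prefer dynamically added jars when classes conflict.
  .set("spark.files.userClassPathFirst", "true")
  // YARN-specific setting: put user jars ahead of the Spark assembly on containers.
  .set("spark.yarn.user.classpath.first", "true")
```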
I will update the doc. Thanks!
Thanks. It looks great to me, and better than my patch. `cachedSecondaryJarLinks.foreach(addPwdClasspathEntry)` is not needed since we already have `$CWD/*` on the classpath. This patch also works for me.
The symbolic links may not be under the PWD. That is why it didn't work before.
It worked under the driver before, so the major issue is that those files are not in the executors' distributed cache. But I like the idea of adding them explicitly so we won't miss anything.
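For readers unfamiliar with the mechanism: each secondary jar gets registered as a YARN `LocalResource`, so YARN localizes it into every container and creates a `$PWD/<linkName>` symlink. Below is a minimal sketch against the stock Hadoop 2.x API; the method name and arguments are assumptions for illustration, not the PR's actual helper:

```scala
import org.apache.hadoop.fs.{FileSystem, Path}
import org.apache.hadoop.yarn.api.records.{LocalResource, LocalResourceType, LocalResourceVisibility}
import org.apache.hadoop.yarn.util.{ConverterUtils, Records}

import scala.collection.mutable

// Register one jar in the distributed cache so each container gets a
// $PWD/<linkName> symlink to the localized copy.
def addToDistributedCache(fs: FileSystem,
                          jar: Path,
                          linkName: String,
                          localResources: mutable.Map[String, LocalResource]): Unit = {
  val status = fs.getFileStatus(jar)
  val resource = Records.newRecord(classOf[LocalResource])
  resource.setResource(ConverterUtils.getYarnUrlFromPath(jar))
  resource.setSize(status.getLen)
  resource.setTimestamp(status.getModificationTime)
  resource.setType(LocalResourceType.FILE)
  resource.setVisibility(LocalResourceVisibility.APPLICATION)
  localResources(linkName) = resource
}
```

Recording each link name (as `CONF_SPARK_YARN_SECONDARY_JARS` does in the diff above) then lets the executor side add the same names back onto its classpath.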
Yes, we can also control the ordering in this way.
@dbtsai Could you backport the patch to branch-0.9 and test it on your cluster?
Merged build triggered.
Merged build started.
Merged build finished. All automated tests passed.
All automated tests passed.
Merged build triggered.
Merged build started.
In standalone mode and on Mesos, does this fix require the jars to be accessible from the same URL on all nodes?
Merged build finished. All automated tests passed.
All automated tests passed.
This doesn't apply to standalone or Mesos. For these two modes (and all others except yarn-cluster), Spark submit translates …
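For context, since the comment above is truncated: my understanding (an assumption, not stated in this PR) is that in client modes the jars are distributed through the running `SparkContext` rather than YARN's cache, so `--jars` ends up behaving like a programmatic `addJar` call:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical client-mode example: a jar added after startup is fetched
// by each executor before it runs tasks, much like one passed via --jars.
val sc = new SparkContext(new SparkConf().setAppName("jars-demo").setMaster("local[2]"))
sc.addJar("/path/to/extra-dep.jar") // placeholder path
```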
I independently tested this on YARN 2.4 running in a VM, where I could reproduce the problem. This change indeed allows jars loaded with `--jars` to be accessible in executors. I am going to merge this. Thanks @mengxr for fixing this, and @andrewor14, @sryza, and @dbtsai for helping out along the way!
Sent secondary jars to the distributed cache of all containers and added the cached jars to the classpath before executors start. Tested on a YARN cluster (CDH-5.0). `spark-submit --jars` also works in standalone server and `yarn-client`. Thanks to @andrewor14 for testing!

I removed "Doesn't work for drivers in standalone mode with "cluster" deploy mode." from `spark-submit`'s help message, though we haven't tested Mesos yet.

CC: @dbtsai @sryza

Author: Xiangrui Meng <meng@databricks.com>

Closes #848 from mengxr/yarn-classpath and squashes the following commits:

23e7df4 [Xiangrui Meng] rename spark.jar to `__spark__.jar` and app.jar to `__app__.jar` to avoid conflict; append `$CWD/` and `$CWD/*` to the classpath; remove unused methods
a40f6ed [Xiangrui Meng] standalone -> cluster
65e04ad [Xiangrui Meng] update spark-submit help message and add a comment for yarn-client
11e5354 [Xiangrui Meng] minor changes
3e7e1c4 [Xiangrui Meng] use sparkConf instead of hadoop conf
dc3c825 [Xiangrui Meng] add secondary jars to classpath in yarn

(cherry picked from commit dba3140)
Signed-off-by: Tathagata Das <tathagata.das1565@gmail.com>
```
@@ -326,8 +326,7 @@ private[spark] class SparkSubmitArguments(args: Seq[String]) {
    | --class CLASS_NAME    Your application's main class (for Java / Scala apps).
    | --name NAME           A name of your application.
    | --jars JARS           Comma-separated list of local jars to include on the driver
-   |                       and executor classpaths. Doesn't work for drivers in
```
Was there a reason for taking this out? My impression is that this still won't work on standalone with cluster deploy mode.
This should not have been taken out, actually. It can be put back in. But we found out just now that the "cluster" deploy mode of Spark standalone is sort of semi-broken with spark-submit.