[SPARK-1753] Warn about PySpark on YARN on Red Hat #682

docs/building-with-maven.md (4 additions, 1 deletion)
@@ -98,7 +98,6 @@ The ScalaTest plugin also supports running only a specific test suite as follows:

    $ mvn -Dhadoop.version=... -Dsuites=org.apache.spark.repl.ReplSuite test

## Continuous Compilation ##

We use the scala-maven-plugin, which supports incremental and continuous compilation. For example:
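
A minimal sketch, assuming the plugin's standard scala:cc goal, run from the project root; it watches the source tree and recompiles files as they change:

    $ mvn scala:cc
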
@@ -129,6 +128,10 @@ Java 8 tests are run when the -Pjava8-tests profile is enabled; they will run in spite of -DskipTests.
For these tests to run, your system must have a JDK 8 installation.
If you have JDK 8 installed but it is not the system default, you can set JAVA_HOME to point to JDK 8 before running the tests.
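
For example (a sketch; the JDK path below is a placeholder for wherever JDK 8 lives on your machine):

    $ JAVA_HOME=/path/to/jdk1.8.0 mvn install -DskipTests -Pjava8-tests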

## Building for PySpark on YARN ##
> Contributor: It would be good to create a JIRA number for this and then reference it in the documentation.

There is a known problem with building an assembly jar for running PySpark on YARN on Red Hat-based operating systems. If you wish to run PySpark on a YARN cluster whose nodes run a Red Hat-based OS, we recommend building the assembly jar on another machine and shipping it to the cluster. We are investigating the exact cause of this.
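
For example, you might build the assembly on another machine and copy it over (a sketch; the yarn profile and Hadoop version shown are examples, and the assembly jar path varies by Spark and Scala version):

    # build the assembly jar on a non-Red Hat machine (flags depend on your cluster)
    $ mvn -Pyarn -Dhadoop.version=2.2.0 -DskipTests clean package
    # ship the resulting jar to the cluster
    $ scp assembly/target/scala-*/spark-assembly-*.jar user@cluster-node:/path/to/spark/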

## Packaging without Hadoop dependencies for deployment on YARN ##

The assembly jar produced by "mvn package" will, by default, include all of Spark's dependencies, including Hadoop and some of its ecosystem projects. On YARN deployments, this causes multiple versions of these to appear on executor classpaths: the version packaged in the Spark assembly and the version on each node, included with yarn.application.classpath. The "hadoop-provided" profile builds the assembly without including Hadoop-ecosystem projects, like ZooKeeper and Hadoop itself.
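
For example (a sketch; add the hadoop-provided profile to whatever YARN build flags you already use, treating the Hadoop version as a placeholder):

    $ mvn -Pyarn -Phadoop-provided -Dhadoop.version=2.2.0 -DskipTests clean package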