SPARK-5136 [DOCS] Improve documentation around setting up Spark IntelliJ project #3952
Conversation
Test build #25225 has started for PR 3952 at commit
Test build #25225 has finished for PR 3952 at commit
Test FAILed.
@@ -153,7 +153,8 @@ Thus, the full flow for running continuous-compilation of the `core` submodule m

# Using With IntelliJ IDEA

This setup works fine in IntelliJ IDEA 11.1.4. After opening the project via the pom.xml file in the project root folder, you only need to activate either the hadoop1 or hadoop2 profile in the "Maven Properties" popout. We have not tried Eclipse/Scala IDE with this.
For additional help in setting up IntelliJ IDEA for Spark development, refer to the
nit: s/additional //
?
lgtm, proposed wiki changes also look great, thanks Sean!
@@ -153,7 +153,8 @@ Thus, the full flow for running continuous-compilation of the `core` submodule m

# Using With IntelliJ IDEA
I actually changed the wiki entry a bit. Maybe we could have a header that says
# Building Spark with IntelliJ IDEA or Eclipse
And it could refer to our IDE section:
https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark#ContributingtoSpark-IDESetup
@pwendell Done. The other half of this change is then updating the wiki with some additional information. You can see my proposed text above and the tidbits I think should be on the wiki. Feel free to integrate whatever of it you see fit, however you see fit.
Test build #25321 has started for PR 3952 at commit
Test build #25321 has finished for PR 3952 at commit
Test PASSed.
Looks great, thanks Sean.
…liJ project

This PR simply points to the IntelliJ wiki page instead of also including IntelliJ notes in the docs. The intent however is to also update the wiki page with updated tips. This is the text I propose for the IntelliJ section on the wiki. I realize it omits some of the existing instructions on the wiki, about enabling Hive, but I think those are actually optional.

------

IntelliJ supports both Maven- and SBT-based projects. It is recommended, however, to import Spark as a Maven project. Choose "Import Project..." from the File menu, and select the `pom.xml` file in the Spark root directory.

It is fine to leave all settings at their default values in the Maven import wizard, with two caveats. First, it is usually useful to enable "Import Maven projects automatically", since changes to the project structure will automatically update the IntelliJ project. Second, note the step that prompts you to choose active Maven build profiles. As documented above, some build configurations require specific profiles to be enabled. The same profiles that are enabled with `-P[profile name]` above may be enabled on this screen. For example, if developing for Hadoop 2.4 with YARN support, enable profiles `yarn` and `hadoop-2.4`. These selections can be changed later by accessing the "Maven Projects" tool window from the View menu, and expanding the Profiles section.

"Rebuild Project" can fail the first time the project is compiled, because generated source files are not created automatically. Try clicking the "Generate Sources and Update Folders For All Projects" button in the "Maven Projects" tool window to generate these sources manually.

Compilation may fail with an error like "scalac: bad option: -P:/home/jakub/.m2/repository/org/scalamacros/paradise_2.10.4/2.0.1/paradise_2.10.4-2.0.1.jar". If so, go to Preferences > Build, Execution, Deployment > Scala Compiler and clear the "Additional compiler options" field. It will work then, although the option will come back when the project reimports.

Author: Sean Owen <sowen@cloudera.com>

Closes #3952 from srowen/SPARK-5136 and squashes the following commits:

f3baa66 [Sean Owen] Point to new IJ / Eclipse wiki link
016b7df [Sean Owen] Point to IntelliJ wiki page instead of also including IntelliJ notes in the docs

(cherry picked from commit 547df97)

Signed-off-by: Patrick Wendell <pwendell@gmail.com>
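The profiles chosen in IntelliJ's import wizard mirror the `-P` flags of a command-line Maven build. A minimal sketch of the equivalent invocation, assuming the Hadoop 2.4 / YARN example above (the command is only printed, not executed, since a full Spark build is slow):

```shell
# Build the Maven command line equivalent to enabling the "yarn" and
# "hadoop-2.4" profiles in the IntelliJ import wizard.
MVN_PROFILES="-Pyarn -Phadoop-2.4"
MVN_CMD="mvn $MVN_PROFILES -DskipTests clean package"
echo "$MVN_CMD"
```

Changing the profile selection in the "Maven Projects" tool window later has the same effect as re-running the build with a different set of `-P` flags.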
Thanks @pwendell -- would you be able to update the wiki too? I don't have edit rights. You can see some proposed text above for the wiki. Do what you like with it, but the last three paragraphs are new points that should probably be added, given questions of the past few days. (Happy to do my best with edits if you can give me wiki edit permission too.) |
I merged my proposed wiki edits into the wiki page. I hope that's OK. |