Spark version compatibility and compilation error for Spark 2.0.1 #201
Comments
@tkakantousis, the Spark fetcher is undergoing some major refactoring and we haven't tested it against all the Spark versions. In your case, can you try incrementing the jacksonVersion to 2.5.4 in Dependencies.scala and then retry? https://github.com/linkedin/dr-elephant/blob/master/project/Dependencies.scala#L28
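For reference, the change being suggested is a one-line version bump in project/Dependencies.scala. A minimal sketch, assuming the file defines the Jackson version as a plain val and wires it into a jackson-databind dependency (the actual variable names and surrounding definitions in the repository may differ; see the linked file):

    import sbt._

    object Dependencies {
      // Bump the Jackson version so jackson-databind resolves consistently
      // across the dependency graph. "2.5.4" is the value suggested above.
      val jacksonVersion = "2.5.4"

      // Example of a dependency that picks up the bumped version.
      val jacksonDatabind: ModuleID =
        "com.fasterxml.jackson.core" % "jackson-databind" % jacksonVersion
    }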
I met the same problem with Hadoop 2.7.3 and Spark 2.0.2, and there is no 2.5.4 (see the Maven repository).
I solved the problem by modifying …
@suiyuan2009, great! Thanks for posting your solution.
A related question: is there a reason to use an older sbt?
+1. With Hadoop 2.7.3 / Spark 2.1.0 (HDP 2.6), compilation did not complete for me even with sbt 0.13.9.
There are several issues that report compilation errors with a message like:

    [error] impossible to get artifacts when data has not been loaded. IvyNode = com.fasterxml.jackson.core#jackson-databind;2.5.4
    java.lang.IllegalStateException: impossible to get artifacts when data has not been loaded. IvyNode = com.fasterxml.jackson.core#jackson-databind;2.5.4

Examples:
- linkedin#201
- linkedin#339
- linkedin#367
- linkedin#419
- linkedin#658

It looks like this message stems from a bug in Ivy which has since been fixed (sbt/sbt#1598). I'm guessing the fix in Ivy is included in sbt 0.13.9, because upgrading sbt fixed the issue for me and seems to have helped others, too (e.g., linkedin#201 (comment)).
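If you want to try the sbt upgrade described above, the usual way to pin the sbt version for a project is through project/build.properties. A minimal sketch, assuming the build reads its sbt version from that file:

    # project/build.properties
    # Pin an sbt release new enough to include the Ivy fix referenced above.
    sbt.version=0.13.9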
Original issue (@tkakantousis):

Hi all,

I'm trying to compile the latest master branch with Spark 2.0.1 and Hadoop 2.7.3 and I'm getting the following error:

    [error] impossible to get artifacts when data has not been loaded. IvyNode = com.fasterxml.jackson.core#jackson-databind;2.5.4
    java.lang.IllegalStateException: impossible to get artifacts when data has not been loaded. IvyNode = com.fasterxml.jackson.core#jackson-databind;2.5.4

Setting Hadoop 2.4.0 and Spark 1.6.1 compiles fine, though. Could someone point me to documentation stating which Spark and Hadoop versions are supported? Should I manually set the dependencies, and if so, how?

Thanks!
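On manually setting the versions: rather than editing the build definition directly, dr-elephant's compile script can take the Hadoop and Spark versions from a config file. A sketch, assuming compile.sh reads hadoop_version and spark_version keys from compile.conf (verify the exact keys against the compile.sh in your checkout):

    # compile.conf -- assumed keys; check compile.sh for the real ones
    hadoop_version=2.7.3
    spark_version=2.0.1

Then, assuming the script accepts the config file path as its argument, build with ./compile.sh compile.conf.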