[SPARK-4397] Move object RDD to the front of RDD.scala. #3580
Conversation
I ran into multiple cases in which the SBT/Scala compiler was confused by the implicits in continuous compilation mode. Moving them to the beginning of the file seems to have fixed the problem.
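The pattern at issue can be sketched as follows. This is a simplified, hypothetical illustration (the `Wrapper`/`toWrapper` names are not Spark's actual classes): an implicit conversion to an enrichment class, with the fix of declaring the return type explicitly rather than letting the compiler infer it.

```scala
import scala.reflect.ClassTag
import scala.language.implicitConversions

// Hypothetical stand-in for Spark's enrichment classes (e.g. PairRDDFunctions).
class Wrapper[T](val value: T) {
  def describe: String = s"Wrapper(${value})"
}

object Implicits {
  // Declaring the return type `: Wrapper[T]` explicitly is the fix discussed
  // in this PR: without it, the compiler must infer the type, which is what
  // confused SBT's continuous (incremental) compilation.
  implicit def toWrapper[T: ClassTag](v: T): Wrapper[T] = new Wrapper(v)
}
```

With `import Implicits._` in scope, a call like `42.describe` resolves through the implicit conversion exactly as RDD method calls resolve through Spark's implicits.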
Test build #24088 has started for PR 3580 at commit
Test build #24088 has finished for PR 3580 at commit
Test FAILed.
implicit def rddToOrderedRDDFunctions[K : Ordering : ClassTag, V: ClassTag](rdd: RDD[(K, V)])
  : OrderedRDDFunctions[K, V] = {
should be OrderedRDDFunctions[K, V, (K, V)]
Not sure if this is the correct fix. Here is the error information on my machine:
After I added the return types to all implicit methods, I could not reproduce this error in continuous compilation mode.
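As a side note for readers, the context bounds in the signature above (`K : Ordering : ClassTag`) are shorthand for implicit evidence parameters. A minimal sketch of the equivalence (the `maxOf` names are illustrative, not from Spark):

```scala
// `[K : Ordering]` desugars to an extra implicit parameter list.
// These two definitions are equivalent:
def maxOf[K : Ordering](xs: Seq[K]): K = xs.max

def maxOfDesugared[K](xs: Seq[K])(implicit ord: Ordering[K]): K = xs.max(ord)
```

Both forms already declare the result type explicitly, which is the discipline this PR converged on for the implicit conversions as well.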
Could you also add the return type for:

implicit def writableWritableConverter[T <: Writable](): WritableConverter[T] =
  new WritableConverter[T](_.runtimeClass.asInstanceOf[Class[T]], _.asInstanceOf[T])
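The `writableWritableConverter` body relies on a `ClassTag` to recover the runtime class of `T`. A minimal standalone sketch of that mechanism (the `classOfT` name is hypothetical, for illustration only):

```scala
import scala.reflect.ClassTag

// A ClassTag carries the erased runtime class of a type parameter, which is
// what `_.runtimeClass.asInstanceOf[Class[T]]` extracts in the code above.
def classOfT[T](implicit ct: ClassTag[T]): Class[T] =
  ct.runtimeClass.asInstanceOf[Class[T]]
```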
Conflicts: core/src/main/scala/org/apache/spark/SparkContext.scala
Alright, pushed a new version.
Test build #24106 has started for PR 3580 at commit
Test FAILed.
Jenkins, retest this please.
Test build #24111 has started for PR 3580 at commit
Test build #24106 has finished for PR 3580 at commit
Test PASSed.
Test build #24111 has finished for PR 3580 at commit
Test PASSed.
Could you test only adding the return types, without moving the code? If adding the return types works, I suggest not moving it.
// compatibility and forward to the following functions directly.

implicit def rddToPairRDDFunctions[K, V](rdd: RDD[(K, V)])
    (implicit kt: ClassTag[K], vt: ClassTag[V], ord: Ordering[K] = null): PairRDDFunctions[K, V] = {
you didn't preserve the indentation down there. was this on purpose?
yea 100 char ...
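The `ord: Ordering[K] = null` default in the signature above lets callers without an `Ordering[K]` in scope still compile. A simplified sketch of the same trick (the `sortedIfPossible` name is illustrative, not from Spark):

```scala
// If no Ordering[K] is implicitly available, `ord` falls back to its
// default of null and we skip sorting rather than failing to compile.
def sortedIfPossible[K, V](pairs: Seq[(K, V)])
    (implicit ord: Ordering[K] = null): Seq[(K, V)] =
  if (ord == null) pairs else pairs.sortBy(_._1)(ord)
```

This is why the signature can serve both keys that are orderable (e.g. `Int`) and keys that are not, without requiring two overloads.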
LGTM, and +1 especially on adding return types to implicits.
I moved it to the end. Looks like the problem is with implicit functions without explicit return types. cc @pwendell @mateiz @andrewor14 @JoshRosen: in the future we should make sure all implicit functions have explicit return types. (And all public functions should have explicit return types defined as well.)
Test build #24132 has started for PR 3580 at commit
Test build #24132 has finished for PR 3580 at commit
Test PASSed.
LGTM |
Good catch on the return types. Would be great if we can make ScalaStyle complain about those. |
I believe @adriaanm's new linter can lint that (it doesn't work in Scalastyle right now).
Merging in master. |
Yes, it's over at https://github.com/scala/scala-abide, by the way. (The fabled ImplicitMustNotHaveInferredType rule does not exist yet, though it would be easy to add.) |
Thanks. Any ETA on the first release?
Not yet. You can already use it, though. We want to have a good number of rules before we start advertising it. Right now we're in the phase of looking for partners to build out the platform and flesh out the rule set.
…Spark SQL

As we learned in #3580, not explicitly typing implicit functions can lead to compiler bugs and potentially unexpected runtime behavior.

Author: Reynold Xin <rxin@databricks.com>

Closes #3859 from rxin/sql-implicits and squashes the following commits:

30c2c24 [Reynold Xin] [SPARK-5038] Add explicit return type for implicit functions in Spark SQL.
As we learned in #3580, not explicitly typing implicit functions can lead to compiler bugs and potentially unexpected runtime behavior. This is a follow-up PR for the rest of Spark (outside Spark SQL). The original PR for Spark SQL can be found at #3859.

Author: Reynold Xin <rxin@databricks.com>

Closes #3860 from rxin/implicit and squashes the following commits:

73702f9 [Reynold Xin] [SPARK-5038] Add explicit return type for implicit functions.
I ran into multiple cases in which the SBT/Scala compiler was confused by the implicits in continuous compilation mode. Adding explicit return types fixes the problem.