[SPARK-2737] Add retag() method for changing RDDs' ClassTags. #1639

Closed · wants to merge 4 commits

22 changes: 22 additions & 0 deletions core/src/main/scala/org/apache/spark/rdd/RDD.scala
@@ -1239,6 +1239,28 @@ abstract class RDD[T: ClassTag](
/** The [[org.apache.spark.SparkContext]] that this RDD was created on. */
def context = sc

/**
* Private API for changing an RDD's ClassTag.
* Used for internal Java <-> Scala API compatibility.
*/
private[spark] def retag(cls: Class[T]): RDD[T] = {
val classTag: ClassTag[T] = ClassTag.apply(cls)
this.retag(classTag)
}

/**
* Private API for changing an RDD's ClassTag.
* Used for internal Java <-> Scala API compatibility.
*/
private[spark] def retag(classTag: ClassTag[T]): RDD[T] = {
val oldRDD = this
new RDD[T](sc, Seq(new OneToOneDependency(this)))(classTag) {
override protected def getPartitions: Array[Partition] = oldRDD.getPartitions
override def compute(split: Partition, context: TaskContext): Iterator[T] =
oldRDD.compute(split, context)
}

Contributor: You also need to preserve the Partitioner and such. It would be better to do this via this.mapPartitions with the preservesPartitioning option set to true.
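
A minimal sketch of this suggestion (an illustration of the proposed alternative, not the code committed in this diff):

// Reusing mapPartitions carries the parent's Partitioner over automatically:
// identity passes each partition's iterator through unchanged, so only the
// ClassTag attached to the resulting RDD changes.
private[spark] def retag(classTag: ClassTag[T]): RDD[T] = {
  this.mapPartitions(identity, preservesPartitioning = true)(classTag)
}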


Contributor Author: Would there be any performance impact of running mapPartitions(identity, preservesPartitioning = true)(classTag)? If we have an RDD that's persisted in a serialized format, wouldn't this extra map force an unnecessary deserialization?


Contributor: Sure, the fix of just passing the partitioner through also works.
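
Presumably along these lines — an assumed sketch of what passing the partitioner through would look like, not code from this diff:

// Assumed sketch: keep the anonymous-RDD approach, but forward the parent's
// Partitioner so downstream operations still see the data as partitioned.
private[spark] def retag(classTag: ClassTag[T]): RDD[T] = {
  val oldRDD = this
  new RDD[T](sc, Seq(new OneToOneDependency(this)))(classTag) {
    override protected def getPartitions: Array[Partition] = oldRDD.getPartitions
    override def compute(split: Partition, context: TaskContext): Iterator[T] =
      oldRDD.compute(split, context)
    override val partitioner: Option[Partitioner] = oldRDD.partitioner
  }
}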


Contributor: Actually, compute just works at the iterator level, so I don't think mapPartitions would hurt. All it does is pass through the parent's iterator, and by the time compute() is called you're already deserializing the RDD, so this won't create extra work.

}

// Avoid handling doCheckpoint multiple times to prevent excessive recursion
@transient private var doCheckpointCalled = false

17 changes: 17 additions & 0 deletions core/src/test/java/org/apache/spark/JavaAPISuite.java
@@ -1245,4 +1245,21 @@ public Tuple2<Integer, Integer> call(Integer i) {
Assert.assertTrue(worExactCounts.get(0) == 2);
Assert.assertTrue(worExactCounts.get(1) == 4);
}

private static class SomeCustomClass implements Serializable {
public SomeCustomClass() {
// Intentionally left blank
}
}

@Test
public void collectUnderlyingScalaRDD() {
List<SomeCustomClass> data = new ArrayList<SomeCustomClass>();
for (int i = 0; i < 100; i++) {
data.add(new SomeCustomClass());
}
JavaRDD<SomeCustomClass> rdd = sc.parallelize(data);
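// Without retag(), the underlying Scala RDD's ClassTag is Object, so collect()
// would return an Object[] and the cast below would throw a ClassCastException
// (the SPARK-2737 bug this test guards against).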
SomeCustomClass[] collected = (SomeCustomClass[]) rdd.rdd().retag(SomeCustomClass.class).collect();
Assert.assertEquals(data.size(), collected.length);
}
}