Added an optimized count to VertexSetRDD.
rxin committed Nov 30, 2013
1 parent 689f757 · commit b30e0ae
Showing 3 changed files with 7 additions and 2 deletions.
graph/src/main/scala/org/apache/spark/graph/Pregel.scala (2 changes: 0 additions, 2 deletions)
@@ -139,8 +139,6 @@ object Pregel {
   * @param initialMsg the message each vertex will receive at the on
   * the first iteration.
   *
-  * @param numIter the number of iterations to run this computation.
-  *
   * @param vprog the user-defined vertex program which runs on each
   * vertex and receives the inbound message and computes a new vertex
   * value. On the first iteration the vertex program is invoked on
@@ -101,6 +101,11 @@ class VertexSetRDD[@specialized VD: ClassManifest](
   /** Persist this RDD with the default storage level (`MEMORY_ONLY`). */
   override def cache(): VertexSetRDD[VD] = persist()
 
+  /** Return the number of vertices in this set. */
+  override def count(): Long = {
+    partitionsRDD.map(_.size).reduce(_ + _)
+  }
+
   /**
    * Provide the `RDD[(Vid, VD)]` equivalent output.
    */
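
The override above replaces the inherited RDD.count(), which would materialize and iterate the (Vid, VD) tuple view of every partition, with a sum of one precomputed size per partition. A minimal sketch of the same pattern, assuming one partition object per Spark partition as in partitionsRDD; the names CountSketch, PartitionLike, and fastCount are hypothetical and not part of this commit:

    import org.apache.spark.rdd.RDD

    object CountSketch {
      // Stand-in for a vertex partition: only its size matters for counting.
      case class PartitionLike(size: Int)

      // With one element per Spark partition, counting reduces one Long per
      // partition instead of iterating every (Vid, VD) pair the way the
      // default RDD.count() does.
      def fastCount(partitionsRDD: RDD[PartitionLike]): Long =
        partitionsRDD.map(_.size.toLong).reduce(_ + _)
    }
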
@@ -42,6 +42,8 @@ class VertexPartition[@specialized(Long, Int, Double) VD: ClassManifest](
 
   val capacity: Int = index.capacity
 
+  def size: Int = mask.cardinality
+
   /**
    * Pass each vertex attribute along with the vertex id through a map
    * function and retain the original RDD's partitioning and index.
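
Here size delegates to the cardinality of the partition's mask, which, judging by the surrounding fields, is a bitmask of occupied vertex slots; these are exactly the per-partition values summed by the count() above, so no vertex attributes are touched. A small illustration of the cardinality idea, assuming semantics comparable to java.util.BitSet (the commit does not show the mask's actual type):

    import java.util.BitSet

    object MaskSizeDemo extends App {
      // Three of eight slots are occupied; cardinality counts only the set
      // bits, so it is independent of the partition's capacity.
      val mask = new BitSet(8)
      mask.set(0); mask.set(3); mask.set(5)
      println(mask.cardinality()) // 3
    }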
