[SPARK-25901][CORE] Use only one thread in BarrierTaskContext companion object

## What changes were proposed in this pull request?

We now use only one `timer` (and thus a single backing thread) in the `BarrierTaskContext` companion object; individual `BarrierTaskContext` instances add their `timerTasks` to that shared `timer` instead of each creating their own.
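
As an illustration of the pattern (a minimal standalone sketch, not Spark's actual code; all names here are hypothetical): every `new java.util.Timer` starts its own background thread, so hoisting the timer into the companion object means all instances in a JVM share one thread and merely schedule their own `TimerTask`s on it.

```scala
import java.util.{Timer, TimerTask}

class NoisyWaiter(name: String) {
  import NoisyWaiter._  // bring the shared timer into scope, as the diff below does

  // Schedule a periodic log message on the shared timer; no new thread is created here.
  def startLogging(periodMs: Long): TimerTask = {
    val task = new TimerTask {
      override def run(): Unit = println(s"$name is still waiting...")
    }
    timer.scheduleAtFixedRate(task, periodMs, periodMs)
    task  // caller cancels this task when the wait ends
  }
}

object NoisyWaiter {
  // One Timer (hence one backing thread) shared by every NoisyWaiter instance,
  // mirroring the move this commit makes for BarrierTaskContext.
  private val timer = new Timer("shared waiter timer")
}
```

Each instance still gets its own cancellable `TimerTask`, so per-call behavior is unchanged; only the thread that fires the tasks is shared.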

## How was this patch tested?

This was tested manually by generating logs and confirming that they look the same as before: a partition waiting on another partition for 5 seconds generates 4-5 log messages when the logging frequency is set to 1 second.
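
A rough way to reproduce that check outside Spark (a sketch under assumed names, not the actual test): schedule a logging task once per second, block for five seconds, then count the messages.

```scala
import java.util.{Timer, TimerTask}
import java.util.concurrent.atomic.AtomicInteger

object LogCadenceCheck extends App {
  val timer = new Timer("log-cadence-check")
  val count = new AtomicInteger(0)

  val task = new TimerTask {
    override def run(): Unit =
      println(s"waited ${count.incrementAndGet()}s for the other partitions...")
  }

  timer.scheduleAtFixedRate(task, 1000L, 1000L)  // log once per second
  Thread.sleep(5000L)                            // simulate a 5 second barrier wait
  task.cancel()
  timer.cancel()

  // Scheduling jitter makes the boundary tick a toss-up, hence "4-5 messages".
  println(s"total log messages: ${count.get()}")
}
```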

Closes #22912 from yogeshg/thread.

Authored-by: Yogesh Garg <1059168+yogeshg@users.noreply.github.com>
Signed-off-by: Xingbo Jiang <xingbo.jiang@databricks.com>
yogeshg authored and jiangxb1987 committed Nov 3, 2018
1 parent ed0c57e commit 0e318ac
Showing 1 changed file with 5 additions and 2 deletions.
core/src/main/scala/org/apache/spark/BarrierTaskContext.scala
@@ -41,14 +41,14 @@ import org.apache.spark.util._
 class BarrierTaskContext private[spark] (
     taskContext: TaskContext) extends TaskContext with Logging {
 
+  import BarrierTaskContext._
+
   // Find the driver side RPCEndpointRef of the coordinator that handles all the barrier() calls.
   private val barrierCoordinator: RpcEndpointRef = {
     val env = SparkEnv.get
     RpcUtils.makeDriverRef("barrierSync", env.conf, env.rpcEnv)
   }
 
-  private val timer = new Timer("Barrier task timer for barrier() calls.")
-
   // Local barrierEpoch that identify a barrier() call from current task, it shall be identical
   // with the driver side epoch.
   private var barrierEpoch = 0
@@ -234,4 +234,7 @@ object BarrierTaskContext {
   @Experimental
   @Since("2.4.0")
   def get(): BarrierTaskContext = TaskContext.get().asInstanceOf[BarrierTaskContext]
+
+  private val timer = new Timer("Barrier task timer for barrier() calls.")
+
 }
