[STREAMING][TEST] Fix flaky streaming.FailureSuite
Under some corner cases, the test suite failed to shut down the SparkContext, causing cascading failures in subsequent tests. This fix does two things (the resulting teardown pattern is sketched below):
- Makes sure no SparkContext is active after every test
- Makes sure the StreamingContext is always shut down (this also prevents leaking StreamingContexts, just in case)
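As an illustration only, here is a minimal sketch of the teardown pattern the commit message describes, assuming a plain ScalaTest suite with the BeforeAndAfter mixin; the suite name and app name are hypothetical, not part of this commit:

import org.scalatest.BeforeAndAfter
import org.scalatest.FunSuite

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.streaming.StreamingContext

// Hypothetical suite showing the cleanup this commit adds to FailureSuite.
class ExampleCleanupSuite extends FunSuite with BeforeAndAfter {
  after {
    // Stop any StreamingContext left running by the previous test.
    StreamingContext.getActive().foreach { _.stop() }
    // Stop the SparkContext if one is still active. getOrCreate returns the
    // currently active context when there is one (otherwise it creates a
    // throwaway one), so stopping its result leaves nothing running.
    SparkContext.getOrCreate(
      new SparkConf().setMaster("local").setAppName("cleanup")).stop()
  }
}

The getOrCreate-then-stop idiom is the same one the FailureSuite diff below adds: it guarantees no SparkContext survives the test boundary, whether or not one was actually running.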

Author: Tathagata Das <tathagata.das1565@gmail.com>

Closes #11166 from tdas/fix-failuresuite.
tdas authored and zsxwing committed Feb 11, 2016
1 parent 13c17cb commit 219a74a
Showing 2 changed files with 6 additions and 2 deletions.
FailureSuite.scala

@@ -21,7 +21,7 @@ import java.io.File
 
 import org.scalatest.BeforeAndAfter
 
-import org.apache.spark.{Logging, SparkFunSuite}
+import org.apache.spark._
 import org.apache.spark.util.Utils
 
 /**
@@ -43,6 +43,9 @@ class FailureSuite extends SparkFunSuite with BeforeAndAfter with Logging {
       Utils.deleteRecursively(directory)
     }
     StreamingContext.getActive().foreach { _.stop() }
+
+    // Stop SparkContext if active
+    SparkContext.getOrCreate(new SparkConf().setMaster("local").setAppName("bla")).stop()
   }
 
   test("multiple failures with map") {
MasterFailureTest.scala

@@ -242,6 +242,8 @@ object MasterFailureTest extends Logging {
       }
     } catch {
       case e: Exception => logError("Error running streaming context", e)
+    } finally {
+      ssc.stop()
     }
     if (killingThread.isAlive) {
       killingThread.interrupt()
@@ -250,7 +252,6 @@
       // to null after the next test creates the new SparkContext and fail the test.
       killingThread.join()
     }
-    ssc.stop()
 
     logInfo("Has been killed = " + killed)
     logInfo("Is last output generated = " + isLastOutputGenerated)
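Reduced to a sketch, the MasterFailureTest change is purely a control-flow move; runStreams here is a hypothetical stand-in for the body of the original try block, and logError comes from the surrounding Logging trait:

try {
  runStreams()  // hypothetical stand-in for the streaming test body
} catch {
  case e: Exception => logError("Error running streaming context", e)
} finally {
  // Tied directly to the try block, so it runs on both the success and
  // the exception path, before any of the killingThread cleanup below.
  ssc.stop()
}

Previously ssc.stop() sat after the killingThread handling, so any problem in that cleanup could skip it; the finally block makes the shutdown unconditional, which is the "StreamingContext is always shutdown" guarantee from the commit message.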
