resource metadata is now sorted by key #307

GitHub Actions / Unit Test Results failed Feb 8, 2024 in 0s

1 fail, 76 skipped, 667 pass in 24m 39s

744 tests  ±0   667 ✔️ ±0   24m 39s ⏱️ +47s
 92 suites ±0    76 💤 ±0
 92 files  ±0     1 ❌ ±0

Results for commit d8d7729. ± Comparison against earlier commit 5800f96.

Annotations

Check warning on line 0 in com.tribbloids.spookystuff.execution.ExplorePlanSpec

github-actions / Unit Test Results

should work on directory, range: from -1 (com.tribbloids.spookystuff.execution.ExplorePlanSpec) failed

parent/core/build/test-results/test/TEST-com.tribbloids.spookystuff.execution.ExplorePlanSpec.xml
Raw output
java.lang.AssertionError: assertion failed: <
=============================== [ACTUAL   /  LEFT] ================================

[/tmp/spookystuff/resources/testutils/dir,null,null,null,null]
[/tmp/spookystuff/resources/testutils/dir,0,ArraySeq(0),ArraySeq(table.csv),file:///tmp/spookystuff/resources/testutils/dir/table.csv]
[/tmp/spookystuff/resources/testutils/dir,0,ArraySeq(1),ArraySeq(hivetable.csv),file:///tmp/spookystuff/resources/testutils/dir/hivetable.csv]
[/tmp/spookystuff/resources/testutils/dir,1,ArraySeq(0, 0),ArraySeq(table.csv, Test.pdf),file:///tmp/spookystuff/resources/testutils/dir/dir/Test.pdf]
[/tmp/spookystuff/resources/testutils/dir,2,ArraySeq(0, 0, 0),ArraySeq(table.csv, Test.pdf, pom.xml),file:///tmp/spookystuff/resources/testutils/dir/dir/dir/pom.xml]
[/tmp/spookystuff/resources/testutils/dir,3,ArraySeq(0, 0, 0, 0),ArraySeq(table.csv, Test.pdf, pom.xml, tribbloid.json),file:///tmp/spookystuff/resources/testutils/dir/dir/dir/dir/tribbloid.json]
> did not equal <
=============================== [EXPECTED / RIGHT] ================================

[/tmp/spookystuff/resources/testutils/dir,null,null,null,null]
[/tmp/spookystuff/resources/testutils/dir,0,ArraySeq(0),ArraySeq(hivetable.csv),file:///tmp/spookystuff/resources/testutils/dir/hivetable.csv]
[/tmp/spookystuff/resources/testutils/dir,0,ArraySeq(1),ArraySeq(table.csv),file:///tmp/spookystuff/resources/testutils/dir/table.csv]
[/tmp/spookystuff/resources/testutils/dir,1,ArraySeq(0, 0),ArraySeq(hivetable.csv, Test.pdf),file:///tmp/spookystuff/resources/testutils/dir/dir/Test.pdf]
[/tmp/spookystuff/resources/testutils/dir,2,ArraySeq(0, 0, 0),ArraySeq(hivetable.csv, Test.pdf, pom.xml),file:///tmp/spookystuff/resources/testutils/dir/dir/dir/pom.xml]
[/tmp/spookystuff/resources/testutils/dir,3,ArraySeq(0, 0, 0, 0),ArraySeq(hivetable.csv, Test.pdf, pom.xml, tribbloid.json),file:///tmp/spookystuff/resources/testutils/dir/dir/dir/dir/tribbloid.json]
>
	at scala.Predef$.assert(Predef.scala:279)
	at ai.acyclic.prover.commons.diff.StringDiff.$anonfun$assert$default$3$1(StringDiff.scala:138)
	at ai.acyclic.prover.commons.diff.StringDiff.$anonfun$assert$default$3$1$adapted(StringDiff.scala:138)
	at ai.acyclic.prover.commons.diff.StringDiff.assertEqual$1(StringDiff.scala:161)
	at ai.acyclic.prover.commons.diff.StringDiff.assert(StringDiff.scala:177)
	at ai.acyclic.prover.commons.testlib.BaseSpec$_StringOps.shouldBe(BaseSpec.scala:30)
	at com.tribbloids.spookystuff.execution.ExplorePlanSpec.$anonfun$new$8(ExplorePlanSpec.scala:122)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
	at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
	at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
	at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
	at org.scalatest.Transformer.apply(Transformer.scala:22)
	at org.scalatest.Transformer.apply(Transformer.scala:20)
	at org.scalatest.funspec.AnyFunSpecLike$$anon$1.apply(AnyFunSpecLike.scala:521)
	at org.scalatest.TestSuite.withFixture(TestSuite.scala:196)
	at org.scalatest.TestSuite.withFixture$(TestSuite.scala:195)
	at com.tribbloids.spookystuff.testutils.SpookyBaseSpec.withFixture(SpookyBaseSpec.scala:116)
	at org.scalatest.funspec.AnyFunSpecLike.invokeWithFixture$1(AnyFunSpecLike.scala:519)
	at org.scalatest.funspec.AnyFunSpecLike.$anonfun$runTest$1(AnyFunSpecLike.scala:531)
	at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
	at org.scalatest.funspec.AnyFunSpecLike.runTest(AnyFunSpecLike.scala:531)
	at org.scalatest.funspec.AnyFunSpecLike.runTest$(AnyFunSpecLike.scala:513)
	at com.tribbloids.spookystuff.testutils.SpookyBaseSpec.com$tribbloids$spookystuff$testutils$SparkUISupport$$super$runTest(SpookyBaseSpec.scala:92)
	at com.tribbloids.spookystuff.testutils.SparkUISupport.$anonfun$runTest$1(SparkUISupport.scala:16)
	at ai.acyclic.prover.commons.spark.SparkContextView.withJob(SparkContextView.scala:50)
	at com.tribbloids.spookystuff.testutils.SparkUISupport.runTest(SparkUISupport.scala:16)
	at com.tribbloids.spookystuff.testutils.SparkUISupport.runTest$(SparkUISupport.scala:10)
	at com.tribbloids.spookystuff.testutils.SpookyBaseSpec.org$scalatest$BeforeAndAfterEach$$super$runTest(SpookyBaseSpec.scala:92)
	at org.scalatest.BeforeAndAfterEach.runTest(BeforeAndAfterEach.scala:234)
	at org.scalatest.BeforeAndAfterEach.runTest$(BeforeAndAfterEach.scala:227)
	at com.tribbloids.spookystuff.testutils.SpookyBaseSpec.runTest(SpookyBaseSpec.scala:92)
	at org.scalatest.funspec.AnyFunSpecLike.$anonfun$runTests$1(AnyFunSpecLike.scala:564)
	at org.scalatest.SuperEngine.$anonfun$runTestsInBranch$1(Engine.scala:413)
	at scala.collection.immutable.List.foreach(List.scala:333)
	at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
	at org.scalatest.SuperEngine.runTestsInBranch(Engine.scala:390)
	at org.scalatest.SuperEngine.$anonfun$runTestsInBranch$1(Engine.scala:427)
	at scala.collection.immutable.List.foreach(List.scala:333)
	at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
	at org.scalatest.SuperEngine.runTestsInBranch(Engine.scala:396)
	at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:475)
	at org.scalatest.funspec.AnyFunSpecLike.runTests(AnyFunSpecLike.scala:564)
	at org.scalatest.funspec.AnyFunSpecLike.runTests$(AnyFunSpecLike.scala:563)
	at org.scalatest.funspec.AnyFunSpec.runTests(AnyFunSpec.scala:1632)
	at org.scalatest.Suite.run(Suite.scala:1114)
	at org.scalatest.Suite.run$(Suite.scala:1096)
	at org.scalatest.funspec.AnyFunSpec.org$scalatest$funspec$AnyFunSpecLike$$super$run(AnyFunSpec.scala:1632)
	at org.scalatest.funspec.AnyFunSpecLike.$anonfun$run$1(AnyFunSpecLike.scala:568)
	at org.scalatest.SuperEngine.runImpl(Engine.scala:535)
	at org.scalatest.funspec.AnyFunSpecLike.run(AnyFunSpecLike.scala:568)
	at org.scalatest.funspec.AnyFunSpecLike.run$(AnyFunSpecLike.scala:567)
	at com.tribbloids.spookystuff.testutils.SpookyBaseSpec.org$scalatest$BeforeAndAfterAll$$super$run(SpookyBaseSpec.scala:92)
	at org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:213)
	at org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
	at org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
	at com.tribbloids.spookystuff.testutils.SpookyBaseSpec.run(SpookyBaseSpec.scala:92)
	at co.helmethair.scalatest.runtime.Executor.runScalatests(Executor.java:130)
	at co.helmethair.scalatest.runtime.Executor.executeSuite(Executor.java:86)
	at co.helmethair.scalatest.runtime.Executor.executeTest(Executor.java:53)
	at co.helmethair.scalatest.runtime.Executor.lambda$executeSuite$2(Executor.java:82)
	at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
	at java.base/java.util.stream.SortedOps$SizedRefSortingSink.end(SortedOps.java:357)
	at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:485)
	at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)
	at java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
	at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
	at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
	at java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:497)
	at co.helmethair.scalatest.runtime.Executor.executeSuite(Executor.java:82)
	at co.helmethair.scalatest.runtime.Executor.executeTest(Executor.java:60)
	at co.helmethair.scalatest.ScalatestEngine.execute(ScalatestEngine.java:55)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:107)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52)
	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:114)
	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:86)
	at org.junit.platform.launcher.core.DefaultLauncherSession$DelegatingLauncher.execute(DefaultLauncherSession.java:86)
	at org.gradle.api.internal.tasks.testing.junitplatform.JUnitPlatformTestClassProcessor$CollectAllTestClassesExecutor.processAllTestClasses(JUnitPlatformTestClassProcessor.java:119)
	at org.gradle.api.internal.tasks.testing.junitplatform.JUnitPlatformTestClassProcessor$CollectAllTestClassesExecutor.access$000(JUnitPlatformTestClassProcessor.java:94)
	at org.gradle.api.internal.tasks.testing.junitplatform.JUnitPlatformTestClassProcessor.stop(JUnitPlatformTestClassProcessor.java:89)
	at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.stop(SuiteTestClassProcessor.java:62)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
	at org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
	at org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
	at com.sun.proxy.$Proxy2.stop(Unknown Source)
	at org.gradle.api.internal.tasks.testing.worker.TestWorker$3.run(TestWorker.java:193)
	at org.gradle.api.internal.tasks.testing.worker.TestWorker.executeAndMaintainThreadName(TestWorker.java:129)
	at org.gradle.api.internal.tasks.testing.worker.TestWorker.execute(TestWorker.java:100)
	at org.gradle.api.internal.tasks.testing.worker.TestWorker.execute(TestWorker.java:60)
	at org.gradle.process.internal.worker.child.ActionExecutionWorker.execute(ActionExecutionWorker.java:56)
	at org.gradle.process.internal.worker.child.SystemApplicationClassLoaderWorker.call(SystemApplicationClassLoaderWorker.java:113)
	at org.gradle.process.internal.worker.child.SystemApplicationClassLoaderWorker.call(SystemApplicationClassLoaderWorker.java:65)
	at worker.org.gradle.process.internal.worker.GradleWorkerMain.run(GradleWorkerMain.java:69)
	at worker.org.gradle.process.internal.worker.GradleWorkerMain.main(GradleWorkerMain.java:74)
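
The diff above suggests an ordering regression rather than missing data: the [EXPECTED / RIGHT] side lists directory children sorted lexicographically by file name ("hivetable.csv" before "table.csv"), consistent with this PR's stated change of sorting resource metadata by key, while the [ACTUAL / LEFT] side still produced the unsorted order. A minimal Scala sketch of the assumed sorting follows; the Entry case class and its fields are illustrative placeholders, not spookystuff's actual resource-metadata API.

// Hypothetical stand-in for a resource-metadata entry; spookystuff's
// real type differs. This only demonstrates the sort order the test expects.
object SortByKeyDemo {
  final case class Entry(name: String, uri: String)

  def main(args: Array[String]): Unit = {
    val listed = Seq(
      Entry("table.csv", "file:///tmp/spookystuff/resources/testutils/dir/table.csv"),
      Entry("hivetable.csv", "file:///tmp/spookystuff/resources/testutils/dir/hivetable.csv")
    )

    // Lexicographic sort by key: "hivetable.csv" < "table.csv",
    // which matches the [EXPECTED / RIGHT] ordering above.
    val sorted = listed.sortBy(_.name)
    sorted.foreach(e => println(e.name)) // prints hivetable.csv, then table.csv
  }
}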

Check notice on line 0 in .github

github-actions / Unit Test Results

76 skipped tests found

There are 76 skipped tests, see "Raw output" for the full list of skipped tests.
Raw output
com.tribbloids.spookystuff.WaitBeforeAppExitSpike ‑ wait for closing
com.tribbloids.spookystuff.assembly.JarHellDetection ‑ jars conflict
com.tribbloids.spookystuff.doc.TestUnstructured ‑ Unstructured is serializable for td
com.tribbloids.spookystuff.doc.TestUnstructured ‑ Unstructured is serializable for tr
com.tribbloids.spookystuff.execution.ExplorePlanSpec ‑ toString
com.tribbloids.spookystuff.extractors.ScalaDynamicExtractorSuite ‑ Performance test: Java reflection should be faster than ScalaReflection
com.tribbloids.spookystuff.extractors.ScalaDynamicExtractorSuite ‑ can resolve Action.dryrun
com.tribbloids.spookystuff.extractors.ScalaDynamicExtractorSuite ‑ can resolve Array[String].length
com.tribbloids.spookystuff.extractors.ScalaDynamicExtractorSuite ‑ can resolve Doc.uri
com.tribbloids.spookystuff.extractors.ScalaDynamicExtractorSuite ‑ can resolve Fetched.timestamp
com.tribbloids.spookystuff.extractors.ScalaDynamicExtractorSuite ‑ can resolve String.concat(String)
com.tribbloids.spookystuff.extractors.ScalaDynamicExtractorSuite ‑ can resolve Unstructured.code
com.tribbloids.spookystuff.extractors.ScalaDynamicExtractorSuite ‑ can resolve a defined class method
com.tribbloids.spookystuff.extractors.ScalaDynamicExtractorSuite ‑ can resolve a defined class method that has option return type
com.tribbloids.spookystuff.extractors.ScalaDynamicExtractorSuite ‑ can resolve function of String.startsWith(String) using Java
com.tribbloids.spookystuff.extractors.ScalaDynamicExtractorSuite ‑ can resolve type of List[String].head
com.tribbloids.spookystuff.extractors.ScalaDynamicExtractorSuite ‑ can resolve type of Seq[String].head
com.tribbloids.spookystuff.extractors.ScalaDynamicExtractorSuite ‑ cannot resolve function when arg type is NULL
com.tribbloids.spookystuff.extractors.ScalaDynamicExtractorSuite ‑ cannot resolve function when base type is NULL
com.tribbloids.spookystuff.extractors.ScalaReflectionSpike ‑ getMethodByJava should return None if parameter Type is incorrect
com.tribbloids.spookystuff.extractors.ScalaReflectionSpike ‑ getMethodByJava should work on function with option parameter
com.tribbloids.spookystuff.extractors.ScalaReflectionSpike ‑ getMethodByScala should work on function with option parameter
com.tribbloids.spookystuff.extractors.ScalaReflectionSpike_Generic ‑ getMethodByJava should return None if parameter Type is incorrect
com.tribbloids.spookystuff.extractors.ScalaReflectionSpike_Generic ‑ getMethodByJava should work on function with option parameter
com.tribbloids.spookystuff.extractors.ScalaReflectionSpike_Generic ‑ getMethodByScala should work on function with option parameter
com.tribbloids.spookystuff.lifespan.GCCleaningSpike ‑ System.gc() can dispose unreachable object <: class with finalizer
com.tribbloids.spookystuff.lifespan.GCCleaningSpike ‑ System.gc() can dispose unreachable object registered to a cleaner
com.tribbloids.spookystuff.lifespan.GCCleaningSpike ‑ System.gc() can dispose unreachable object registered to a phantom reference cleanup thread
com.tribbloids.spookystuff.lifespan.GCCleaningSpike ‑ System.gc() can dispose unreachable object registered to a weak reference cleanup thread
com.tribbloids.spookystuff.lifespan.GCCleaningSpike ‑ System.gc() can dispose unreachable object with finalizer
com.tribbloids.spookystuff.parsing.ParsersBenchmark ‑ replace N
com.tribbloids.spookystuff.relay.RelaySuite ‑ SerializingParam[Function1] should work
com.tribbloids.spookystuff.spike.SlowRDDSpike ‑ RDD
com.tribbloids.spookystuff.spike.SlowRDDSpike ‑ is repartitioning non-blocking? dataset
com.tribbloids.spookystuff.testutils.BaseSpecSpike ‑ diff in IDE
com.tribbloids.spookystuff.utils.SpookyViewsSuite ‑ ... even if the RDD is not Serializable
com.tribbloids.spookystuff.utils.io.HDFSResolverSpike ‑ HDFSResolver can read from FTP server
com.tribbloids.spookystuff.utils.io.HDFSResolverSpike ‑ low level test case
com.tribbloids.spookystuff.utils.io.HDFSResolverSuite ‑ Lock can guarantee sequential access ... even for non existing path
com.tribbloids.spookystuff.utils.io.HDFSResolverSuite ‑ Lock can guarantee sequential read and write to non-existing file
com.tribbloids.spookystuff.utils.io.HDFSResolverSuite ‑ move different files to the same target should be sequential
com.tribbloids.spookystuff.utils.io.HDFSResolverSuite ‑ touch should be sequential
com.tribbloids.spookystuff.utils.io.LocalResolverSuite ‑ Lock can guarantee sequential access ... even for non existing path
com.tribbloids.spookystuff.utils.io.LocalResolverSuite ‑ Lock can guarantee sequential read and write to non-existing file
com.tribbloids.spookystuff.utils.io.LocalResolverSuite ‑ move different files to the same target should be sequential
com.tribbloids.spookystuff.utils.io.LocalResolverSuite ‑ touch should be sequential
com.tribbloids.spookystuff.utils.locality.LocalityImplSuite ‑ each partition of the first operand of cogroup should not move, but elements are shuffled
com.tribbloids.spookystuff.utils.serialization.AssertSerializableSpike ‑ should be Serializable with equality -
com.tribbloids.spookystuff.utils.serialization.BeforeAndAfterShippingSpec ‑ can serialize self
org.apache.spark.ml.dsl.utils.RecursiveEitherAsUnionToJSONSpike ‑ JSON <=> Union of arity 3
org.apache.spark.ml.dsl.utils.RecursiveEitherAsUnionToJSONSpike ‑ JSON <=> case class with Option[Union] of arity 3
org.apache.spark.ml.dsl.utils.RecursiveEitherAsUnionToJSONSpike ‑ JSON <=> case class with Union of arity 3
org.apache.spark.ml.dsl.utils.XMLWeakDeserializerSuite ‑ missing member to default constructor value
org.apache.spark.ml.dsl.utils.data.Json4sSpike ‑ encode/decode ListMap
org.apache.spark.ml.dsl.utils.data.Json4sSpike ‑ encode/decode Path
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSpike ‑ TypeTag from Type can be serializable
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSpike ‑ can create ClassTag for Array[T]
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSpike ‑ can get TypeTag
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSpike ‑ can get another TypeTag
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSpike ‑ can reflect anon class
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSpike ‑ can reflect lambda
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSpike ‑ scala reflection can be used to get type of Array[String].headOption
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From TypeTag[(String, Int)] ... even if created from raw Type
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From TypeTag[Array[(String, Int)]] ... even if created from raw Type
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From TypeTag[Array[Double]] ... even if created from raw Type
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From TypeTag[Array[Int]] ... even if created from raw Type
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From TypeTag[Array[String]] ... even if created from raw Type
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From TypeTag[Double] ... even if created from raw Type
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From TypeTag[Int] ... even if created from raw Type
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From TypeTag[Seq[(String, Int)]] ... even if created from raw Type
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From TypeTag[Seq[String]] ... even if created from raw Type
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From TypeTag[String] ... even if created from raw Type
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From TypeTag[com.tribbloids.spookystuff.relay.TestBeans.Multipart.type] ... even if created from raw Type
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From TypeTag[com.tribbloids.spookystuff.relay.TestBeans.Multipart] ... even if created from raw Type
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From TypeTag[com.tribbloids.spookystuff.relay.TestBeans.User] ... even if created from raw Type
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From TypeTag[java.sql.Timestamp] ... even if created from raw Type

Check notice on line 0 in .github

github-actions / Unit Test Results

744 tests found (test 1 to 613)

There are 744 tests, see "Raw output" for the list of tests 1 to 613.
Raw output
com.tribbloids.spookystuff.Python3DriverSuite ‑ CommonUtils.withDeadline can interrupt python execution that blocks indefinitely
com.tribbloids.spookystuff.Python3DriverSuite ‑ call should return None if result variable is undefined
com.tribbloids.spookystuff.Python3DriverSuite ‑ can use the correct python version
com.tribbloids.spookystuff.Python3DriverSuite ‑ clean() won't be blocked indefinitely by ongoing python execution
com.tribbloids.spookystuff.Python3DriverSuite ‑ interpret should throw an exception if interpreter raises a multi-line error
com.tribbloids.spookystuff.Python3DriverSuite ‑ interpret should throw an exception if interpreter raises a syntax error
com.tribbloids.spookystuff.Python3DriverSuite ‑ interpret should throw an exception if interpreter raises an error
com.tribbloids.spookystuff.Python3DriverSuite ‑ interpret should yield 1 row for a single print
com.tribbloids.spookystuff.Python3DriverSuite ‑ sendAndGetResult should work if interpretation triggers an error
com.tribbloids.spookystuff.Python3DriverSuite ‑ sendAndGetResult should work in multiple threads
com.tribbloids.spookystuff.Python3DriverSuite ‑ sendAndGetResult should work in single thread
com.tribbloids.spookystuff.SpookyContextSpec ‑ SpookyContext should be Serializable
com.tribbloids.spookystuff.SpookyContextSpec ‑ SpookyContext.dsl should be Serializable
com.tribbloids.spookystuff.SpookyContextSpec ‑ can create PageRow from String
com.tribbloids.spookystuff.SpookyContextSpec ‑ can create PageRow from map[Int, String]
com.tribbloids.spookystuff.SpookyContextSpec ‑ can create PageRow from map[String, String]
com.tribbloids.spookystuff.SpookyContextSpec ‑ can create PageRow from map[Symbol, String]
com.tribbloids.spookystuff.SpookyContextSpec ‑ default SpookyContext should have default dir configs
com.tribbloids.spookystuff.SpookyContextSpec ‑ derived instances of a SpookyContext should have the same configuration
com.tribbloids.spookystuff.SpookyContextSpec ‑ derived instances of a SpookyContext should have the same configuration after it has been modified
com.tribbloids.spookystuff.SpookyContextSpec ‑ each execution should have independent metrics if sharedMetrics=false
com.tribbloids.spookystuff.SpookyContextSpec ‑ each execution should have shared metrics if sharedMetrics=true
com.tribbloids.spookystuff.SpookyContextSpec ‑ when sharedMetrics=false, new SpookyContext created from default SpookyConf should have default dir configs
com.tribbloids.spookystuff.SpookyContextSpec ‑ when sharedMetrics=true, new SpookyContext created from default SpookyConf should have default dir configs
com.tribbloids.spookystuff.SpookyExceptionSuite ‑ DFSReadException .getMessage contains causes
com.tribbloids.spookystuff.WaitBeforeAppExitSpike ‑ wait for closing
com.tribbloids.spookystuff.actions.ActionSuite ‑ Timed mixin can terminate execution if it takes too long
com.tribbloids.spookystuff.actions.ActionSuite ‑ Wget -> JSON
com.tribbloids.spookystuff.actions.ActionSuite ‑ Wget -> treeText
com.tribbloids.spookystuff.actions.ActionSuite ‑ interpolate should not change name
com.tribbloids.spookystuff.actions.ActionSuite ‑ interpolate should not change timeout
com.tribbloids.spookystuff.actions.BlockSpec ‑ Try(Wget) can failsafe on malformed uri
com.tribbloids.spookystuff.actions.BlockSpec ‑ loop without export won't need driver
com.tribbloids.spookystuff.actions.BlockSpec ‑ try without export won't need driver
com.tribbloids.spookystuff.actions.BlockSpec ‑ wayback time of loop should be identical to its last child supporting wayback
com.tribbloids.spookystuff.actions.WgetOAuthSpec ‑ output of wget should not include session's backtrace
com.tribbloids.spookystuff.actions.WgetOAuthSpec ‑ wget should encode malformed url
com.tribbloids.spookystuff.actions.WgetOAuthSpec ‑ wget.interpolate should not overwrite each other
com.tribbloids.spookystuff.actions.WgetSpec ‑ output of wget should not include session's backtrace
com.tribbloids.spookystuff.actions.WgetSpec ‑ wget should encode malformed url
com.tribbloids.spookystuff.actions.WgetSpec ‑ wget.interpolate should not overwrite each other
com.tribbloids.spookystuff.assembly.JarHellDetection ‑ jars conflict
com.tribbloids.spookystuff.conf.SpookyConfSuite ‑ DirConf.import can read from SparkConf
com.tribbloids.spookystuff.conf.SpookyConfSuite ‑ DirConf.import is serializable
com.tribbloids.spookystuff.conf.SpookyConfSuite ‑ SpookyConf is serializable
com.tribbloids.spookystuff.conf.SpookyConfSuite ‑ getProperty() can load property from spark property
com.tribbloids.spookystuff.conf.SpookyConfSuite ‑ getProperty() can load property from system property
com.tribbloids.spookystuff.doc.TestPageFromAbsoluteFile ‑ childrenWithSiblings
com.tribbloids.spookystuff.doc.TestPageFromAbsoluteFile ‑ childrenWithSiblings with overlapping elimiation
com.tribbloids.spookystuff.doc.TestPageFromAbsoluteFile ‑ wget csv, save and load
com.tribbloids.spookystuff.doc.TestPageFromAbsoluteFile ‑ wget dir, save and load
com.tribbloids.spookystuff.doc.TestPageFromAbsoluteFile ‑ wget html, save and load
com.tribbloids.spookystuff.doc.TestPageFromAbsoluteFile ‑ wget image, save and load
com.tribbloids.spookystuff.doc.TestPageFromAbsoluteFile ‑ wget json, save and load
com.tribbloids.spookystuff.doc.TestPageFromAbsoluteFile ‑ wget pdf, save and load
com.tribbloids.spookystuff.doc.TestPageFromAbsoluteFile ‑ wget xml, save and load
com.tribbloids.spookystuff.doc.TestPageFromFile ‑ childrenWithSiblings
com.tribbloids.spookystuff.doc.TestPageFromFile ‑ childrenWithSiblings with overlapping elimiation
com.tribbloids.spookystuff.doc.TestPageFromFile ‑ wget csv, save and load
com.tribbloids.spookystuff.doc.TestPageFromFile ‑ wget dir, save and load
com.tribbloids.spookystuff.doc.TestPageFromFile ‑ wget html, save and load
com.tribbloids.spookystuff.doc.TestPageFromFile ‑ wget image, save and load
com.tribbloids.spookystuff.doc.TestPageFromFile ‑ wget json, save and load
com.tribbloids.spookystuff.doc.TestPageFromFile ‑ wget pdf, save and load
com.tribbloids.spookystuff.doc.TestPageFromFile ‑ wget xml, save and load
com.tribbloids.spookystuff.doc.TestPageFromHttp ‑ childrenWithSiblings
com.tribbloids.spookystuff.doc.TestPageFromHttp ‑ childrenWithSiblings with overlapping elimiation
com.tribbloids.spookystuff.doc.TestPageFromHttp ‑ wget csv, save and load
com.tribbloids.spookystuff.doc.TestPageFromHttp ‑ wget html, save and load
com.tribbloids.spookystuff.doc.TestPageFromHttp ‑ wget image, save and load
com.tribbloids.spookystuff.doc.TestPageFromHttp ‑ wget json, save and load
com.tribbloids.spookystuff.doc.TestPageFromHttp ‑ wget pdf, save and load
com.tribbloids.spookystuff.doc.TestPageFromHttp ‑ wget xml, save and load
com.tribbloids.spookystuff.doc.TestUnstructured ‑ Unstructured is serializable for div
com.tribbloids.spookystuff.doc.TestUnstructured ‑ Unstructured is serializable for td
com.tribbloids.spookystuff.doc.TestUnstructured ‑ Unstructured is serializable for tr
com.tribbloids.spookystuff.doc.TestUnstructured ‑ attrs should handles empty attributes properly
com.tribbloids.spookystuff.dsl.GenPartitionerSuite ‑ DocCacheAware can co-partition 2 RDDs
com.tribbloids.spookystuff.dsl.TestDSL ‑ SpookyContext can be cast to a blank RDD with empty schema
com.tribbloids.spookystuff.dsl.TestDSL ‑ andFn
com.tribbloids.spookystuff.dsl.TestDSL ‑ andUnlift
com.tribbloids.spookystuff.dsl.TestDSL ‑ defaultAs should not rename an Alias
com.tribbloids.spookystuff.dsl.TestDSL ‑ double quotes in selector by attribute should work
com.tribbloids.spookystuff.dsl.TestDSL ‑ string interpolation
com.tribbloids.spookystuff.dsl.TestDSL ‑ symbol as Expr
com.tribbloids.spookystuff.dsl.TestDSL ‑ uri
com.tribbloids.spookystuff.execution.ExplodeDataPlanSpec ‑ FlattenPlan should work on collection
com.tribbloids.spookystuff.execution.ExplodeDataPlanSpec ‑ FlattenPlan should work on collection if not manually set alias
com.tribbloids.spookystuff.execution.ExplodeDataPlanSpec ‑ FlattenPlan should work on collection if overwriting defaultJoinField
com.tribbloids.spookystuff.execution.ExplodeDataPlanSpec ‑ FlattenPlan should work on extracted List
com.tribbloids.spookystuff.execution.ExplodeDataPlanSpec ‑ FlattenPlan should work on extracted Seq
com.tribbloids.spookystuff.execution.ExplodeDataPlanSpec ‑ FlattenPlan should work on extracted array
com.tribbloids.spookystuff.execution.ExplodeDataPlanSpec ‑ FlattenPlan should work on partial collection
com.tribbloids.spookystuff.execution.ExplodeDataPlanSpec ‑ fork is equivalent to explode if not manually set join key
com.tribbloids.spookystuff.execution.ExplorePlanSpec ‑ When using custom keyBy function, explore plan can avoid fetching traces with identical Trace and preserve keyBy in its output
com.tribbloids.spookystuff.execution.ExplorePlanSpec ‑ should avoid shuffling the latest batch to minimize repeated fetch
com.tribbloids.spookystuff.execution.ExplorePlanSpec ‑ should create a new beaconRDD if its upstream doesn't have one
com.tribbloids.spookystuff.execution.ExplorePlanSpec ‑ should inherit old beaconRDD from upstream if exists
com.tribbloids.spookystuff.execution.ExplorePlanSpec ‑ should throw an exception if OrdinalField == DepthField
com.tribbloids.spookystuff.execution.ExplorePlanSpec ‑ should work on directory, range: 0 to 2
com.tribbloids.spookystuff.execution.ExplorePlanSpec ‑ should work on directory, range: 2 to 2
com.tribbloids.spookystuff.execution.ExplorePlanSpec ‑ should work on directory, range: from -1
com.tribbloids.spookystuff.execution.ExplorePlanSpec ‑ should work on directory, range: from 0
com.tribbloids.spookystuff.execution.ExplorePlanSpec ‑ should work on directory, range: from 2
com.tribbloids.spookystuff.execution.ExplorePlanSpec ‑ toString
com.tribbloids.spookystuff.execution.SchemaContextSpec ‑ Resolver should not scramble sequence of fields
com.tribbloids.spookystuff.execution.TestExtractPlan ‑ ExtractPlan can append to old values using ~+ operator
com.tribbloids.spookystuff.execution.TestExtractPlan ‑ ExtractPlan can assign aliases to unnamed fields
com.tribbloids.spookystuff.execution.TestExtractPlan ‑ ExtractPlan can erase old values that has a different DataType using ~+ operator
com.tribbloids.spookystuff.execution.TestExtractPlan ‑ ExtractPlan can overwrite old values using ! postfix
com.tribbloids.spookystuff.execution.TestExtractPlan ‑ ExtractPlan cannot partially overwrite old values with the same field id but different DataType
com.tribbloids.spookystuff.execution.TestExtractPlan ‑ In ExtractPlan, weak values are cleaned in case of a conflict
com.tribbloids.spookystuff.execution.TestExtractPlan ‑ In ExtractPlan, weak values are not cleaned if being overwritten using ~! operator
com.tribbloids.spookystuff.execution.TestFetchPlan ‑ FetchPlan should be serializable
com.tribbloids.spookystuff.execution.TestFetchPlan ‑ FetchPlan should create a new beaconRDD if its upstream doesn't have one
com.tribbloids.spookystuff.execution.TestFetchPlan ‑ FetchPlan should inherit old beaconRDD from upstream if exists
com.tribbloids.spookystuff.execution.TestFetchPlan ‑ FetchPlan.toString should work
com.tribbloids.spookystuff.execution.TestFetchPlan ‑ fetch() + count() will fetch once
com.tribbloids.spookystuff.execution.TestFetchPlan ‑ fetch() + select() + count() will fetch once
com.tribbloids.spookystuff.extractors.ColSuite ‑ Col(Lit) ==
com.tribbloids.spookystuff.extractors.ColSuite ‑ Col(Lit).toString
com.tribbloids.spookystuff.extractors.ColSuite ‑ Col(Lit).value
com.tribbloids.spookystuff.extractors.ColSuite ‑ Col(Symbol).toString
com.tribbloids.spookystuff.extractors.ExtractorSuite ‑ Literal -> JSON
com.tribbloids.spookystuff.extractors.ExtractorSuite ‑ Literal.toString
com.tribbloids.spookystuff.extractors.GenExtractorSuite ‑ fromFn all its resolved functions are serializable
com.tribbloids.spookystuff.extractors.GenExtractorSuite ‑ fromFn apply won't execute twice
com.tribbloids.spookystuff.extractors.GenExtractorSuite ‑ fromFn applyOrElse won't execute twice
com.tribbloids.spookystuff.extractors.GenExtractorSuite ‑ fromFn isDefined won't execute twice
com.tribbloids.spookystuff.extractors.GenExtractorSuite ‑ fromFn lift.apply won't execute twice
com.tribbloids.spookystuff.extractors.GenExtractorSuite ‑ fromOptionFn all its resolved functions are serializable
com.tribbloids.spookystuff.extractors.GenExtractorSuite ‑ fromOptionFn apply won't execute twice
com.tribbloids.spookystuff.extractors.GenExtractorSuite ‑ fromOptionFn applyOrElse won't execute twice
com.tribbloids.spookystuff.extractors.GenExtractorSuite ‑ fromOptionFn isDefined won't execute twice
com.tribbloids.spookystuff.extractors.GenExtractorSuite ‑ fromOptionFn lift.apply won't execute twice
com.tribbloids.spookystuff.extractors.GenExtractorSuite ‑ spike: is serializable Some(str2Int)
com.tribbloids.spookystuff.extractors.GenExtractorSuite ‑ spike: is serializable Some(str2IntLifted)
com.tribbloids.spookystuff.extractors.ScalaDynamicExtractorSuite ‑ Performance test: Java reflection should be faster than ScalaReflection
com.tribbloids.spookystuff.extractors.ScalaDynamicExtractorSuite ‑ can resolve Action.dryrun
com.tribbloids.spookystuff.extractors.ScalaDynamicExtractorSuite ‑ can resolve Array[String].length
com.tribbloids.spookystuff.extractors.ScalaDynamicExtractorSuite ‑ can resolve Doc.uri
com.tribbloids.spookystuff.extractors.ScalaDynamicExtractorSuite ‑ can resolve Fetched.timestamp
com.tribbloids.spookystuff.extractors.ScalaDynamicExtractorSuite ‑ can resolve String.concat(String)
com.tribbloids.spookystuff.extractors.ScalaDynamicExtractorSuite ‑ can resolve Unstructured.code
com.tribbloids.spookystuff.extractors.ScalaDynamicExtractorSuite ‑ can resolve a defined class method
com.tribbloids.spookystuff.extractors.ScalaDynamicExtractorSuite ‑ can resolve a defined class method that has option return type
com.tribbloids.spookystuff.extractors.ScalaDynamicExtractorSuite ‑ can resolve function of String.startsWith(String) using Java
com.tribbloids.spookystuff.extractors.ScalaDynamicExtractorSuite ‑ can resolve type of List[String].head
com.tribbloids.spookystuff.extractors.ScalaDynamicExtractorSuite ‑ can resolve type of Seq[String].head
com.tribbloids.spookystuff.extractors.ScalaDynamicExtractorSuite ‑ cannot resolve function when arg type is NULL
com.tribbloids.spookystuff.extractors.ScalaDynamicExtractorSuite ‑ cannot resolve function when base type is NULL
com.tribbloids.spookystuff.extractors.ScalaReflectionSpike ‑ getMethodByJava should return None if parameter Type is incorrect
com.tribbloids.spookystuff.extractors.ScalaReflectionSpike ‑ getMethodByJava should work on function with option output
com.tribbloids.spookystuff.extractors.ScalaReflectionSpike ‑ getMethodByJava should work on function with option parameter
com.tribbloids.spookystuff.extractors.ScalaReflectionSpike ‑ getMethodByJava should work on operator
com.tribbloids.spookystuff.extractors.ScalaReflectionSpike ‑ getMethodByJava should work on overloaded function
com.tribbloids.spookystuff.extractors.ScalaReflectionSpike ‑ getMethodByScala should throw error if parameter Type is incorrect
com.tribbloids.spookystuff.extractors.ScalaReflectionSpike ‑ getMethodByScala should work on function with option output
com.tribbloids.spookystuff.extractors.ScalaReflectionSpike ‑ getMethodByScala should work on function with option parameter
com.tribbloids.spookystuff.extractors.ScalaReflectionSpike ‑ getMethodByScala should work on operator
com.tribbloids.spookystuff.extractors.ScalaReflectionSpike ‑ getMethodByScala should work on overloaded function
com.tribbloids.spookystuff.extractors.ScalaReflectionSpike ‑ getMethodsByName should work on case constructor parameter
com.tribbloids.spookystuff.extractors.ScalaReflectionSpike ‑ getMethodsByName should work on function with default parameters
com.tribbloids.spookystuff.extractors.ScalaReflectionSpike ‑ getMethodsByName should work on lazy val property
com.tribbloids.spookystuff.extractors.ScalaReflectionSpike ‑ getMethodsByName should work on operator
com.tribbloids.spookystuff.extractors.ScalaReflectionSpike ‑ getMethodsByName should work on overloaded function
com.tribbloids.spookystuff.extractors.ScalaReflectionSpike_Generic ‑ getMethodByJava should return None if parameter Type is incorrect
com.tribbloids.spookystuff.extractors.ScalaReflectionSpike_Generic ‑ getMethodByJava should work on function with option output
com.tribbloids.spookystuff.extractors.ScalaReflectionSpike_Generic ‑ getMethodByJava should work on function with option parameter
com.tribbloids.spookystuff.extractors.ScalaReflectionSpike_Generic ‑ getMethodByJava should work on operator
com.tribbloids.spookystuff.extractors.ScalaReflectionSpike_Generic ‑ getMethodByJava should work on overloaded function
com.tribbloids.spookystuff.extractors.ScalaReflectionSpike_Generic ‑ getMethodByScala should throw error if parameter Type is incorrect
com.tribbloids.spookystuff.extractors.ScalaReflectionSpike_Generic ‑ getMethodByScala should work on function with option output
com.tribbloids.spookystuff.extractors.ScalaReflectionSpike_Generic ‑ getMethodByScala should work on function with option parameter
com.tribbloids.spookystuff.extractors.ScalaReflectionSpike_Generic ‑ getMethodByScala should work on operator
com.tribbloids.spookystuff.extractors.ScalaReflectionSpike_Generic ‑ getMethodByScala should work on overloaded function
com.tribbloids.spookystuff.extractors.ScalaReflectionSpike_Generic ‑ getMethodsByName should work on case constructor parameter
com.tribbloids.spookystuff.extractors.ScalaReflectionSpike_Generic ‑ getMethodsByName should work on function with default parameters
com.tribbloids.spookystuff.extractors.ScalaReflectionSpike_Generic ‑ getMethodsByName should work on lazy val property
com.tribbloids.spookystuff.extractors.ScalaReflectionSpike_Generic ‑ getMethodsByName should work on operator
com.tribbloids.spookystuff.extractors.ScalaReflectionSpike_Generic ‑ getMethodsByName should work on overloaded function
com.tribbloids.spookystuff.graph.FlowLayoutSuite ‑ ... implicitly
com.tribbloids.spookystuff.graph.FlowLayoutSuite ‑ Operand from EdgeData
com.tribbloids.spookystuff.graph.FlowLayoutSuite ‑ Operand from NodeData
com.tribbloids.spookystuff.graph.FlowLayoutSuite ‑ acyclic detached edge >>> detached edge
com.tribbloids.spookystuff.graph.FlowLayoutSuite ‑ acyclic node >>> edge >>> node
com.tribbloids.spookystuff.graph.FlowLayoutSuite ‑ acyclic node >>> node
com.tribbloids.spookystuff.graph.FlowLayoutSuite ‑ bidirectional node >>> node <<< node
com.tribbloids.spookystuff.graph.FlowLayoutSuite ‑ cyclic (2 edges >- node ) >>> itself
com.tribbloids.spookystuff.graph.FlowLayoutSuite ‑ cyclic edge-node >>> itself
com.tribbloids.spookystuff.graph.FlowLayoutSuite ‑ cyclic node >>> edge >>> same node
com.tribbloids.spookystuff.graph.FlowLayoutSuite ‑ cyclic node >>> itself
com.tribbloids.spookystuff.lifespan.CleanableSuite ‑ Lifespan.JVM is serializable
com.tribbloids.spookystuff.lifespan.CleanableSuite ‑ Lifespan.JVM.batchID is serializable
com.tribbloids.spookystuff.lifespan.CleanableSuite ‑ Lifespan.Task is serializable
com.tribbloids.spookystuff.lifespan.CleanableSuite ‑ Lifespan._id should be updated after being shipped to driver
com.tribbloids.spookystuff.lifespan.CleanableSuite ‑ Lifespan.batchIDs should be updated after being shipped to a different executor
com.tribbloids.spookystuff.lifespan.CleanableSuite ‑ Lifespan.batchIDs should be updated after being shipped to a new thread created by a different executor
com.tribbloids.spookystuff.lifespan.CleanableSuite ‑ can get all created Cleanables
com.tribbloids.spookystuff.lifespan.CleanableSuite ‑ can get all created Cleanables even their hashcodes may overlap
com.tribbloids.spookystuff.lifespan.GCCleaningSpike ‑ System.gc() can dispose unreachable object <: class with finalizer
com.tribbloids.spookystuff.lifespan.GCCleaningSpike ‑ System.gc() can dispose unreachable object registered to a cleaner
com.tribbloids.spookystuff.lifespan.GCCleaningSpike ‑ System.gc() can dispose unreachable object registered to a phantom reference cleanup thread
com.tribbloids.spookystuff.lifespan.GCCleaningSpike ‑ System.gc() can dispose unreachable object registered to a weak reference cleanup thread
com.tribbloids.spookystuff.lifespan.GCCleaningSpike ‑ System.gc() can dispose unreachable object with finalizer
com.tribbloids.spookystuff.lifespan.MinimalCleanerSpike ‑ ..
com.tribbloids.spookystuff.metrics.AccSuite ‑ FromV0
com.tribbloids.spookystuff.metrics.AccSuite ‑ Simple
com.tribbloids.spookystuff.metrics.MetricsSuite ‑ can be converted to JSON
com.tribbloids.spookystuff.metrics.MetricsSuite ‑ tree can be converted to JSON
com.tribbloids.spookystuff.parsing.FSMParserDSLSuite ‑ can form linear graph
com.tribbloids.spookystuff.parsing.FSMParserDSLSuite ‑ can form loop
com.tribbloids.spookystuff.parsing.FSMParserDSLSuite ‑ can form non-linear graph :~>
com.tribbloids.spookystuff.parsing.FSMParserDSLSuite ‑ can form non-linear graph <~:
com.tribbloids.spookystuff.parsing.FSMParserDSLSuite ‑ can form self-loop
com.tribbloids.spookystuff.parsing.FSMParserDSLSuite ‑ self-loop can union with others
com.tribbloids.spookystuff.parsing.ParsersBenchmark ‑ replace N
com.tribbloids.spookystuff.parsing.ParsingRunSuite ‑ backtracking unclosed bracket
com.tribbloids.spookystuff.parsing.ParsingRunSuite ‑ conditional can parse paired brackets
com.tribbloids.spookystuff.parsing.ParsingRunSuite ‑ linear for 1 rule
com.tribbloids.spookystuff.parsing.ParsingRunSuite ‑ linear for 4 rules in 2 stages + EOS
com.tribbloids.spookystuff.parsing.ParsingRunSuite ‑ linear for 4 rules with diamond path
com.tribbloids.spookystuff.parsing.ParsingRunSuite ‑ loop escape by \
com.tribbloids.spookystuff.parsing.ParsingRunSuite ‑ loop multiple pair brackets
com.tribbloids.spookystuff.python.ref.PyRefSuite ‑ CaseExample can initialize Python instance after constructor parameter has been changed
com.tribbloids.spookystuff.python.ref.PyRefSuite ‑ CaseExample can initialize Python instance with missing constructor parameter
com.tribbloids.spookystuff.python.ref.PyRefSuite ‑ JSONInstanceRef can initialize Python instance after constructor parameter has been changed
com.tribbloids.spookystuff.python.ref.PyRefSuite ‑ JSONInstanceRef can initialize Python instance with missing constructor parameter
com.tribbloids.spookystuff.rdd.FetchedDatasetSuite ‑ .map should not run preceding transformation multiple times
com.tribbloids.spookystuff.rdd.FetchedDatasetSuite ‑ .rdd should not run preceding transformation multiple times
com.tribbloids.spookystuff.rdd.FetchedDatasetSuite ‑ explore plan can be persisted
com.tribbloids.spookystuff.rdd.FetchedDatasetSuite ‑ extract plan can be persisted
com.tribbloids.spookystuff.rdd.FetchedDatasetSuite ‑ fetch plan can be persisted
com.tribbloids.spookystuff.rdd.FetchedDatasetSuite ‑ flatten plan can be persisted
com.tribbloids.spookystuff.rdd.FetchedDatasetSuite ‑ savePage ... on persisted RDD
com.tribbloids.spookystuff.rdd.FetchedDatasetSuite ‑ savePage eagerly
com.tribbloids.spookystuff.rdd.FetchedDatasetSuite ‑ savePage lazily
com.tribbloids.spookystuff.rdd.FetchedDatasetSuite ‑ toDF can handle composite types
com.tribbloids.spookystuff.rdd.FetchedDatasetSuite ‑ toDF can handle simple types
com.tribbloids.spookystuff.rdd.FetchedDatasetSuite ‑ toDF can yield a DataFrame excluding Fields with .isSelected = false
com.tribbloids.spookystuff.rdd.FetchedDatasetSuite ‑ toDF(false) should not run preceding transformation multiple times
com.tribbloids.spookystuff.rdd.FetchedDatasetSuite ‑ toDF(true) should not run preceding transformation multiple times
com.tribbloids.spookystuff.rdd.FetchedDatasetSuite ‑ toJSON(false) should not run preceding transformation multiple times
com.tribbloids.spookystuff.rdd.FetchedDatasetSuite ‑ toJSON(true) should not run preceding transformation multiple times
com.tribbloids.spookystuff.rdd.FetchedDatasetSuite ‑ toMapRDD(false) should not run preceding transformation multiple times
com.tribbloids.spookystuff.rdd.FetchedDatasetSuite ‑ toMapRDD(true) should not run preceding transformation multiple times
com.tribbloids.spookystuff.relay.RelayRegistrySuite ‑ lookup can find Relay as companion object
com.tribbloids.spookystuff.relay.RelayRegistrySuite ‑ lookup will throw an exception if companion object is not a Relay
com.tribbloids.spookystuff.relay.RelaySuite ‑ Multipart case class JSON read should be broken
com.tribbloids.spookystuff.relay.RelaySuite ‑ Paranamer constructor lookup
com.tribbloids.spookystuff.relay.RelaySuite ‑ SerializingParam[Function1] should work
com.tribbloids.spookystuff.relay.RelaySuite ‑ can convert even less accurate timestamp
com.tribbloids.spookystuff.relay.RelaySuite ‑ can read generated timestamp
com.tribbloids.spookystuff.relay.RelaySuite ‑ can read less accurate timestamp
com.tribbloids.spookystuff.relay.RelaySuite ‑ can read lossless timestamp
com.tribbloids.spookystuff.relay.RelaySuite ‑ reading a wrapped array with misplaced value & default value should fail early
com.tribbloids.spookystuff.relay.RelaySuite ‑ reading an array with misplaced value & default value should fail early
com.tribbloids.spookystuff.relay.RelaySuite ‑ reading an array with missing value & default value should fail early
com.tribbloids.spookystuff.relay.RelaySuite ‑ reading an object from a converted string should work
com.tribbloids.spookystuff.relay.RelaySuite ‑ reading an object with default value should work
com.tribbloids.spookystuff.relay.RelaySuite ‑ reading an object with missing value & default value should fail early
com.tribbloids.spookystuff.relay.RelaySuite ‑ reading an object with provided value should work
com.tribbloids.spookystuff.relay.TreeIRSuite ‑ from/to JSON round-trip
com.tribbloids.spookystuff.relay.io.FormattedTextSuite ‑ FormattedText can print nested case classes
com.tribbloids.spookystuff.relay.io.FormattedTextSuite ‑ FormattedText can print nested map
com.tribbloids.spookystuff.relay.io.FormattedTextSuite ‑ FormattedText can print nested seq
com.tribbloids.spookystuff.relay.io.FormattedTextSuite ‑ FormattedText treeText by AutomaticRelay on seq
com.tribbloids.spookystuff.relay.io.FormattedTextSuite ‑ FormattedText treeText by AutomaticRelay on seq and map
com.tribbloids.spookystuff.relay.io.FormattedTextSuite ‑ FormattedText treeText by AutomaticRelay on value
com.tribbloids.spookystuff.relay.io.FormattedTextSuite ‑ FormattedText treeText by Relay
com.tribbloids.spookystuff.row.DataRowSuite ‑ getInt can extract java.lang.Integer type
com.tribbloids.spookystuff.row.DataRowSuite ‑ getInt can extract scala Int type
com.tribbloids.spookystuff.row.DataRowSuite ‑ getIntArray can extract from Array
com.tribbloids.spookystuff.row.DataRowSuite ‑ getIntArray can extract from Array that has different types
com.tribbloids.spookystuff.row.DataRowSuite ‑ getIntArray can extract from Iterator
com.tribbloids.spookystuff.row.DataRowSuite ‑ getIntArray can extract from Set
com.tribbloids.spookystuff.row.DataRowSuite ‑ getIntIterable can extract from Array
com.tribbloids.spookystuff.row.DataRowSuite ‑ getIntIterable can extract from Array that has different types
com.tribbloids.spookystuff.row.DataRowSuite ‑ getIntIterable can extract from Iterator
com.tribbloids.spookystuff.row.DataRowSuite ‑ getIntIterable can extract from Set
com.tribbloids.spookystuff.row.DataRowSuite ‑ getTyped can extract java.lang.Integer type
com.tribbloids.spookystuff.row.DataRowSuite ‑ getTyped can extract scala Int type
com.tribbloids.spookystuff.row.DataRowSuite ‑ getTyped should return None if type is incompatible
com.tribbloids.spookystuff.row.DataRowSuite ‑ getTypedArray can extract from Array
com.tribbloids.spookystuff.row.DataRowSuite ‑ getTypedArray can extract from Array that has different types
com.tribbloids.spookystuff.row.DataRowSuite ‑ getTypedArray can extract from Iterator
com.tribbloids.spookystuff.row.DataRowSuite ‑ getTypedArray can extract from Set
com.tribbloids.spookystuff.row.FetchedRowSuite ‑ get page
com.tribbloids.spookystuff.row.FetchedRowSuite ‑ get unstructured
com.tribbloids.spookystuff.row.SquashedRowSuite ‑ ['a 'b 'a 'b].splitByDistinctNames yields ['a 'b] ['a 'b]
com.tribbloids.spookystuff.row.SquashedRowSuite ‑ execution yields at least 1 trajectory
com.tribbloids.spookystuff.spike.SlowRDDSpike ‑ RDD
com.tribbloids.spookystuff.spike.SlowRDDSpike ‑ is repartitioning non-blocking? dataset
com.tribbloids.spookystuff.testutils.BaseSpecSpike ‑ diff in IDE
com.tribbloids.spookystuff.testutils.SpookyUtilsSuite ‑ RDDs.batchReduce yield the same results as RDDs.map(_.reduce)
com.tribbloids.spookystuff.testutils.SpookyUtilsSuite ‑ RDDs.shufflePartitions can move data into random partitions
com.tribbloids.spookystuff.testutils.SpookyUtilsSuite ‑ asArray[Int]
com.tribbloids.spookystuff.testutils.SpookyUtilsSuite ‑ asIterable[Int]
com.tribbloids.spookystuff.testutils.SpookyUtilsSuite ‑ canonizeUrn should clean ?:$&#
com.tribbloids.spookystuff.testutils.SpookyUtilsSuite ‑ withTimeout can execute heartbeat
com.tribbloids.spookystuff.testutils.SpookyUtilsSuite ‑ withTimeout can write heartbeat info into log by default
com.tribbloids.spookystuff.testutils.SpookyUtilsSuite ‑ withTimeout won't be affected by scala concurrency global ForkJoin thread pool
com.tribbloids.spookystuff.utils.BacktrackingIteratorSuite ‑ can backtrack for arbitrary times
com.tribbloids.spookystuff.utils.BacktrackingIteratorSuite ‑ can backtrack once
com.tribbloids.spookystuff.utils.EscapeSpike ‑ A
com.tribbloids.spookystuff.utils.PreemptiveLocalOpsSuite ‑ can be much faster than toLocalIterator
com.tribbloids.spookystuff.utils.RDDDisperseSuite ‑ Benchmark: can be much faster than
com.tribbloids.spookystuff.utils.RDDDisperseSuite ‑ Checkpointed: Disk Memory Deserialized 1x Replicated RDD with 1 partition
com.tribbloids.spookystuff.utils.RDDDisperseSuite ‑ Checkpointed: Disk Memory Deserialized 1x Replicated RDD with 3 partition
com.tribbloids.spookystuff.utils.RDDDisperseSuite ‑ Checkpointed: Disk Memory Deserialized 1x Replicated RDD with many partitions
com.tribbloids.spookystuff.utils.RDDDisperseSuite ‑ Checkpointed: Disk Memory Deserialized 1x Replicated RDD with skewed partitions most of which are empty
com.tribbloids.spookystuff.utils.RDDDisperseSuite ‑ Checkpointed: Memory Deserialized 1x Replicated RDD with 1 partition
com.tribbloids.spookystuff.utils.RDDDisperseSuite ‑ Checkpointed: Memory Deserialized 1x Replicated RDD with 3 partition
com.tribbloids.spookystuff.utils.RDDDisperseSuite ‑ Checkpointed: Memory Deserialized 1x Replicated RDD with many partitions
com.tribbloids.spookystuff.utils.RDDDisperseSuite ‑ Checkpointed: Memory Deserialized 1x Replicated RDD with skewed partitions most of which are empty
com.tribbloids.spookystuff.utils.RDDDisperseSuite ‑ Checkpointed: Memory Serialized 2x Replicated RDD with 1 partition
com.tribbloids.spookystuff.utils.RDDDisperseSuite ‑ Checkpointed: Memory Serialized 2x Replicated RDD with 3 partition
com.tribbloids.spookystuff.utils.RDDDisperseSuite ‑ Checkpointed: Memory Serialized 2x Replicated RDD with many partitions
com.tribbloids.spookystuff.utils.RDDDisperseSuite ‑ Checkpointed: Memory Serialized 2x Replicated RDD with skewed partitions most of which are empty
com.tribbloids.spookystuff.utils.RDDDisperseSuite ‑ Persisted: Disk Memory Deserialized 1x Replicated RDD with 1 partition
com.tribbloids.spookystuff.utils.RDDDisperseSuite ‑ Persisted: Disk Memory Deserialized 1x Replicated RDD with 3 partition
com.tribbloids.spookystuff.utils.RDDDisperseSuite ‑ Persisted: Disk Memory Deserialized 1x Replicated RDD with many partitions
com.tribbloids.spookystuff.utils.RDDDisperseSuite ‑ Persisted: Disk Memory Deserialized 1x Replicated RDD with skewed partitions most of which are empty
com.tribbloids.spookystuff.utils.RDDDisperseSuite ‑ Persisted: Memory Deserialized 1x Replicated RDD with 1 partition
com.tribbloids.spookystuff.utils.RDDDisperseSuite ‑ Persisted: Memory Deserialized 1x Replicated RDD with 3 partition
com.tribbloids.spookystuff.utils.RDDDisperseSuite ‑ Persisted: Memory Deserialized 1x Replicated RDD with many partitions
com.tribbloids.spookystuff.utils.RDDDisperseSuite ‑ Persisted: Memory Deserialized 1x Replicated RDD with skewed partitions most of which are empty
com.tribbloids.spookystuff.utils.RDDDisperseSuite ‑ Persisted: Memory Serialized 2x Replicated RDD with 1 partition
com.tribbloids.spookystuff.utils.RDDDisperseSuite ‑ Persisted: Memory Serialized 2x Replicated RDD with 3 partition
com.tribbloids.spookystuff.utils.RDDDisperseSuite ‑ Persisted: Memory Serialized 2x Replicated RDD with many partitions
com.tribbloids.spookystuff.utils.RDDDisperseSuite ‑ Persisted: Memory Serialized 2x Replicated RDD with skewed partitions most of which are empty
com.tribbloids.spookystuff.utils.RDDDisperseSuite ‑ Persisted_RDDReified: Disk Memory Deserialized 1x Replicated RDD with 1 partition
com.tribbloids.spookystuff.utils.RDDDisperseSuite ‑ Persisted_RDDReified: Disk Memory Deserialized 1x Replicated RDD with 3 partition
com.tribbloids.spookystuff.utils.RDDDisperseSuite ‑ Persisted_RDDReified: Disk Memory Deserialized 1x Replicated RDD with many partitions
com.tribbloids.spookystuff.utils.RDDDisperseSuite ‑ Persisted_RDDReified: Disk Memory Deserialized 1x Replicated RDD with skewed partitions most of which are empty
com.tribbloids.spookystuff.utils.RDDDisperseSuite ‑ Persisted_RDDReified: Memory Deserialized 1x Replicated RDD with 1 partition
com.tribbloids.spookystuff.utils.RDDDisperseSuite ‑ Persisted_RDDReified: Memory Deserialized 1x Replicated RDD with 3 partition
com.tribbloids.spookystuff.utils.RDDDisperseSuite ‑ Persisted_RDDReified: Memory Deserialized 1x Replicated RDD with many partitions
com.tribbloids.spookystuff.utils.RDDDisperseSuite ‑ Persisted_RDDReified: Memory Deserialized 1x Replicated RDD with skewed partitions most of which are empty
com.tribbloids.spookystuff.utils.RDDDisperseSuite ‑ Persisted_RDDReified: Memory Serialized 2x Replicated RDD with 1 partition
com.tribbloids.spookystuff.utils.RDDDisperseSuite ‑ Persisted_RDDReified: Memory Serialized 2x Replicated RDD with 3 partition
com.tribbloids.spookystuff.utils.RDDDisperseSuite ‑ Persisted_RDDReified: Memory Serialized 2x Replicated RDD with many partitions
com.tribbloids.spookystuff.utils.RDDDisperseSuite ‑ Persisted_RDDReified: Memory Serialized 2x Replicated RDD with skewed partitions most of which are empty
com.tribbloids.spookystuff.utils.RangeHashBenchmark ‑ RangeArg hash should be fast
com.tribbloids.spookystuff.utils.ScalaUDTSuite ‑ Action has a datatype
com.tribbloids.spookystuff.utils.ScalaUDTSuite ‑ Array[Action] has a datatype
com.tribbloids.spookystuff.utils.ScalaUDTSuite ‑ Array[Int] has a datatype
com.tribbloids.spookystuff.utils.ScalaUDTSuite ‑ DocOption has a datatype
com.tribbloids.spookystuff.utils.ScalaUDTSuite ‑ Int has a datatype
com.tribbloids.spookystuff.utils.ScalaUDTSuite ‑ Unstructured has a datatype
com.tribbloids.spookystuff.utils.SpookyViewsSuite ‑ ... even if the RDD is not Serializable
com.tribbloids.spookystuff.utils.SpookyViewsSuite ‑ ... where execution should fail
com.tribbloids.spookystuff.utils.SpookyViewsSuite ‑ :/ can handle null component
com.tribbloids.spookystuff.utils.SpookyViewsSuite ‑ Array.filterByType should work on primitive types
com.tribbloids.spookystuff.utils.SpookyViewsSuite ‑ Seq/Set.filterByType should work on primitive types
com.tribbloids.spookystuff.utils.SpookyViewsSuite ‑ \\ can handle null component
com.tribbloids.spookystuff.utils.SpookyViewsSuite ‑ injectPassthroughPartitioner should not move partitions
com.tribbloids.spookystuff.utils.SpookyViewsSuite ‑ mapOncePerCore
com.tribbloids.spookystuff.utils.SpookyViewsSuite ‑ mapOncePerWorker
com.tribbloids.spookystuff.utils.SpookyViewsSuite ‑ multiPassFlatMap should yield same result as flatMap
com.tribbloids.spookystuff.utils.SpookyViewsSuite ‑ result of allTaskLocationStrs can be used as partition's preferred location
com.tribbloids.spookystuff.utils.SpookyViewsSuite ‑ runEverywhere
com.tribbloids.spookystuff.utils.SpookyViewsSuite ‑ runEverywhere (alsoOnDriver)
com.tribbloids.spookystuff.utils.classpath.ClasspathResolverSpec ‑ copyResourceToDirectory can extract a dependency's package in a jar
com.tribbloids.spookystuff.utils.classpath.ClasspathResolverSpec ‑ copyResourceToDirectory can extract a package in file system
com.tribbloids.spookystuff.utils.io.HDFSResolverSpike ‑ HDFSResolver can read from FTP server
com.tribbloids.spookystuff.utils.io.HDFSResolverSpike ‑ low level test case
com.tribbloids.spookystuff.utils.io.HDFSResolverSuite ‑ ... on executors
com.tribbloids.spookystuff.utils.io.HDFSResolverSuite ‑ .toAbsolute is idempotent
com.tribbloids.spookystuff.utils.io.HDFSResolverSuite ‑ Lock can guarantee sequential access ... even for non existing path
com.tribbloids.spookystuff.utils.io.HDFSResolverSuite ‑ Lock can guarantee sequential access to empty directory
com.tribbloids.spookystuff.utils.io.HDFSResolverSuite ‑ Lock can guarantee sequential access to existing file
com.tribbloids.spookystuff.utils.io.HDFSResolverSuite ‑ Lock can guarantee sequential access to non-empty directory
com.tribbloids.spookystuff.utils.io.HDFSResolverSuite ‑ Lock can guarantee sequential read and write to existing file
com.tribbloids.spookystuff.utils.io.HDFSResolverSuite ‑ Lock can guarantee sequential read and write to non-existing file
com.tribbloids.spookystuff.utils.io.HDFSResolverSuite ‑ can convert absolute path of non-existing file
com.tribbloids.spookystuff.utils.io.HDFSResolverSuite ‑ can convert path with schema of non-existing file
com.tribbloids.spookystuff.utils.io.HDFSResolverSuite ‑ can convert path with schema// of non-existing file
com.tribbloids.spookystuff.utils.io.HDFSResolverSuite ‑ can convert relative path of non-existing file
com.tribbloids.spookystuff.utils.io.HDFSResolverSuite ‑ can override login UGI
com.tribbloids.spookystuff.utils.io.HDFSResolverSuite ‑ input all accessors can be mutated after creation
com.tribbloids.spookystuff.utils.io.HDFSResolverSuite ‑ input can get metadata concurrently
com.tribbloids.spookystuff.utils.io.HDFSResolverSuite ‑ move 1 file to different targets should be sequential
com.tribbloids.spookystuff.utils.io.HDFSResolverSuite ‑ move 1 file to the same target should be sequential
com.tribbloids.spookystuff.utils.io.HDFSResolverSuite ‑ move different files to the same target should be sequential
com.tribbloids.spookystuff.utils.io.HDFSResolverSuite ‑ output can automatically create missing directory
com.tribbloids.spookystuff.utils.io.HDFSResolverSuite ‑ output cannot overwrite an existing file
com.tribbloids.spookystuff.utils.io.HDFSResolverSuite ‑ output cannot grant multiple OutputStreams for 1 file
com.tribbloids.spookystuff.utils.io.HDFSResolverSuite ‑ output copyTo a new file
com.tribbloids.spookystuff.utils.io.HDFSResolverSuite ‑ output copyTo overwrite an existing file
com.tribbloids.spookystuff.utils.io.HDFSResolverSuite ‑ resolver is serializable
com.tribbloids.spookystuff.utils.io.HDFSResolverSuite ‑ touch should be sequential
com.tribbloids.spookystuff.utils.io.LocalResolverSuite ‑ .toAbsolute is idempotent
com.tribbloids.spookystuff.utils.io.LocalResolverSuite ‑ Lock can guarantee sequential access ... even for non-existing path
com.tribbloids.spookystuff.utils.io.LocalResolverSuite ‑ Lock can guarantee sequential access to empty directory
com.tribbloids.spookystuff.utils.io.LocalResolverSuite ‑ Lock can guarantee sequential access to existing file
com.tribbloids.spookystuff.utils.io.LocalResolverSuite ‑ Lock can guarantee sequential access to non-empty directory
com.tribbloids.spookystuff.utils.io.LocalResolverSuite ‑ Lock can guarantee sequential read and write to existing file
com.tribbloids.spookystuff.utils.io.LocalResolverSuite ‑ Lock can guarantee sequential read and write to non-existing file
com.tribbloids.spookystuff.utils.io.LocalResolverSuite ‑ can convert absolute path of non-existing file
com.tribbloids.spookystuff.utils.io.LocalResolverSuite ‑ can convert relative path of non-existing file
com.tribbloids.spookystuff.utils.io.LocalResolverSuite ‑ input all accessors can be mutated after creation
com.tribbloids.spookystuff.utils.io.LocalResolverSuite ‑ input can get metadata concurrently
com.tribbloids.spookystuff.utils.io.LocalResolverSuite ‑ move 1 file to different targets should be sequential
com.tribbloids.spookystuff.utils.io.LocalResolverSuite ‑ move 1 file to the same target should be sequential
com.tribbloids.spookystuff.utils.io.LocalResolverSuite ‑ move different files to the same target should be sequential
com.tribbloids.spookystuff.utils.io.LocalResolverSuite ‑ output can automatically create missing directory
com.tribbloids.spookystuff.utils.io.LocalResolverSuite ‑ output cannot overwrite an existing file
com.tribbloids.spookystuff.utils.io.LocalResolverSuite ‑ output cannot grant multiple OutputStreams for 1 file
com.tribbloids.spookystuff.utils.io.LocalResolverSuite ‑ output copyTo a new file
com.tribbloids.spookystuff.utils.io.LocalResolverSuite ‑ output copyTo overwrite an existing file
com.tribbloids.spookystuff.utils.io.LocalResolverSuite ‑ resolver is serializable
com.tribbloids.spookystuff.utils.io.LocalResolverSuite ‑ touch should be sequential
com.tribbloids.spookystuff.utils.locality.LocalityImplSuite ‑ ... even if keys overlap partially BroadcastLocalityImpl
com.tribbloids.spookystuff.utils.locality.LocalityImplSuite ‑ ... even if keys overlap partially BroadcastLocalityImpl.cogroupBase() is always left-outer
com.tribbloids.spookystuff.utils.locality.LocalityImplSuite ‑ ... even if keys overlap partially IndexingLocalityImpl
com.tribbloids.spookystuff.utils.locality.LocalityImplSuite ‑ ... even if keys overlap partially IndexingLocalityImpl.cogroupBase() is always left-outer
com.tribbloids.spookystuff.utils.locality.LocalityImplSuite ‑ ... even if keys overlap partially SortingLocalityImpl
com.tribbloids.spookystuff.utils.locality.LocalityImplSuite ‑ ... even if keys overlap partially SortingLocalityImpl.cogroupBase() is always left-outer
com.tribbloids.spookystuff.utils.locality.LocalityImplSuite ‑ ... even if the first operand is not serializable BroadcastLocalityImpl
com.tribbloids.spookystuff.utils.locality.LocalityImplSuite ‑ ... even if the first operand is not serializable IndexingLocalityImpl
com.tribbloids.spookystuff.utils.locality.LocalityImplSuite ‑ ... even if the first operand is not serializable SortingLocalityImpl
com.tribbloids.spookystuff.utils.locality.LocalityImplSuite ‑ Spike: when 2 RDDs are cogrouped the first operand containing unserializable objects will not trigger an exception if it has a partitioner
com.tribbloids.spookystuff.utils.locality.LocalityImplSuite ‑ Spike: when 2 RDDs are cogrouped the first operand will NOT move if it has a partitioner
com.tribbloids.spookystuff.utils.locality.LocalityImplSuite ‑ Spike: when 2 RDDs are cogrouped the second operand containing unserializable objects will not trigger an exception if it has a partitioner
com.tribbloids.spookystuff.utils.locality.LocalityImplSuite ‑ Spike: when 2 RDDs are cogrouped the second operand will NOT move if it has a partitioner
com.tribbloids.spookystuff.utils.locality.LocalityImplSuite ‑ cogroupBase() can preserve both locality and in-partition orders BroadcastLocalityImpl
com.tribbloids.spookystuff.utils.locality.LocalityImplSuite ‑ cogroupBase() can preserve both locality and in-partition orders IndexingLocalityImpl
com.tribbloids.spookystuff.utils.locality.LocalityImplSuite ‑ cogroupBase() can preserve both locality and in-partition orders SortingLocalityImpl
com.tribbloids.spookystuff.utils.locality.LocalityImplSuite ‑ each partition of the first operand of cogroup should not move, but elements are shuffled
com.tribbloids.spookystuff.utils.serialization.AssertSerializableSpike ‑ should be Serializable with equality -
com.tribbloids.spookystuff.utils.serialization.AssertSerializableSpike ‑ should be WeaklySerializable - class java.lang.Throwable
com.tribbloids.spookystuff.utils.serialization.AssertSerializableSpike ‑ should be WeaklySerializable - inner closure of an object that is not serializable Fn by conversion
com.tribbloids.spookystuff.utils.serialization.AssertSerializableSpike ‑ should be WeaklySerializable - inner closure of an object that is not serializable Fn by single method interface
com.tribbloids.spookystuff.utils.serialization.AssertSerializableSpike ‑ should be WeaklySerializable - inner closure of an object that is not serializable vanilla function
com.tribbloids.spookystuff.utils.serialization.BeforeAndAfterShippingSpec ‑ can serialize container
com.tribbloids.spookystuff.utils.serialization.BeforeAndAfterShippingSpec ‑ can serialize self
com.tribbloids.spookystuff.utils.serialization.NOTSerializableSpec ‑ base class is serializable
com.tribbloids.spookystuff.utils.serialization.NOTSerializableSpec ‑ mixin will trigger a runtime error in closure cleaning
com.tribbloids.spookystuff.utils.serialization.NOTSerializableSpec ‑ when using JavaSerializer mixin will trigger a runtime error
com.tribbloids.spookystuff.utils.serialization.NOTSerializableSpec ‑ when using JavaSerializer subclass of a class that inherits mixin will trigger a runtime error
com.tribbloids.spookystuff.utils.serialization.NOTSerializableSpec ‑ when using JavaSerializer subclass of a trait that inherits mixin will trigger a runtime error
com.tribbloids.spookystuff.utils.serialization.NOTSerializableSpec ‑ when using KryoSerializer mixin will trigger a runtime error
com.tribbloids.spookystuff.utils.serialization.NOTSerializableSpec ‑ when using KryoSerializer subclass of a class that inherits mixin will trigger a runtime error
com.tribbloids.spookystuff.utils.serialization.NOTSerializableSpec ‑ when using KryoSerializer subclass of a trait that inherits mixin will trigger a runtime error
org.apache.spark.ml.dsl.AppendSuite ‑ :-> Source is cast to union
org.apache.spark.ml.dsl.AppendSuite ‑ :-> Stage is cast to rebase
org.apache.spark.ml.dsl.AppendSuite ‑ <-: Source is cast to union
org.apache.spark.ml.dsl.AppendSuite ‑ <-: Stage is cast to rebase
org.apache.spark.ml.dsl.AppendSuite ‑ A :-> B :-> Source is associative
org.apache.spark.ml.dsl.AppendSuite ‑ A :-> B :-> detached Stage is associative
org.apache.spark.ml.dsl.AppendSuite ‑ A <-: B <-: Source is associative
org.apache.spark.ml.dsl.AppendSuite ‑ A <-: B <-: detached Stage is associative
org.apache.spark.ml.dsl.AppendSuite ‑ can automatically generate names
org.apache.spark.ml.dsl.AppendSuite ‑ pincer topology can be defined by A :-> B <-: A
org.apache.spark.ml.dsl.AppendSuite_PruneDownPath ‑ :-> Source is cast to union
org.apache.spark.ml.dsl.AppendSuite_PruneDownPath ‑ :-> Stage is cast to rebase
org.apache.spark.ml.dsl.AppendSuite_PruneDownPath ‑ <-: Source is cast to union
org.apache.spark.ml.dsl.AppendSuite_PruneDownPath ‑ <-: Stage is cast to rebase
org.apache.spark.ml.dsl.AppendSuite_PruneDownPath ‑ A :-> B :-> Source is associative
org.apache.spark.ml.dsl.AppendSuite_PruneDownPath ‑ A :-> B :-> detached Stage is associative
org.apache.spark.ml.dsl.AppendSuite_PruneDownPath ‑ A <-: B <-: Source is associative
org.apache.spark.ml.dsl.AppendSuite_PruneDownPath ‑ A <-: B <-: detached Stage is associative
org.apache.spark.ml.dsl.AppendSuite_PruneDownPath ‑ can automatically generate names
org.apache.spark.ml.dsl.AppendSuite_PruneDownPath ‑ pincer topology can be defined by A :-> B <-: A
org.apache.spark.ml.dsl.AppendSuite_PruneDownPathKeepRoot ‑ :-> Source is cast to union
org.apache.spark.ml.dsl.AppendSuite_PruneDownPathKeepRoot ‑ :-> Stage is cast to rebase
org.apache.spark.ml.dsl.AppendSuite_PruneDownPathKeepRoot ‑ <-: Source is cast to union
org.apache.spark.ml.dsl.AppendSuite_PruneDownPathKeepRoot ‑ <-: Stage is cast to rebase
org.apache.spark.ml.dsl.AppendSuite_PruneDownPathKeepRoot ‑ A :-> B :-> Source is associative
org.apache.spark.ml.dsl.AppendSuite_PruneDownPathKeepRoot ‑ A :-> B :-> detached Stage is associative
org.apache.spark.ml.dsl.AppendSuite_PruneDownPathKeepRoot ‑ A <-: B <-: Source is associative
org.apache.spark.ml.dsl.AppendSuite_PruneDownPathKeepRoot ‑ A <-: B <-: detached Stage is associative
org.apache.spark.ml.dsl.AppendSuite_PruneDownPathKeepRoot ‑ can automatically generate names
org.apache.spark.ml.dsl.AppendSuite_PruneDownPathKeepRoot ‑ pincer topology can be defined by A :-> B <-: A
org.apache.spark.ml.dsl.CompactionSuite ‑ DoNotCompact should work on Case1  ...
org.apache.spark.ml.dsl.CompactionSuite ‑ DoNotCompact should work on Case2  ...
org.apache.spark.ml.dsl.CompactionSuite ‑ PruneDownPath should work on Case1  ...
org.apache.spark.ml.dsl.CompactionSuite ‑ PruneDownPath should work on Case2  ...
org.apache.spark.ml.dsl.CompactionSuite ‑ PruneDownPathKeepRoot should work on Case1  ...
org.apache.spark.ml.dsl.CompactionSuite ‑ PruneDownPathKeepRoot should work on Case2  ...
org.apache.spark.ml.dsl.ComposeSuite ‑ A compose_> (PASSTHROUGH || Stage) rebase_> B is associative
org.apache.spark.ml.dsl.ComposeSuite ‑ Compose throws an exception when operand2 is type inconsistent with output of operand1 as a Flow
org.apache.spark.ml.dsl.ComposeSuite ‑ Compose throws an exception when operand2 is type inconsistent with output of operand1 as a Source
org.apache.spark.ml.dsl.ComposeSuite ‑ Compose works when operand2 is type consistent
org.apache.spark.ml.dsl.ComposeSuite ‑ PASSTHROUGH compose_> Stage doesn't change the flow
org.apache.spark.ml.dsl.ComposeSuite ‑ Union throws an exception when a stage in result has incompatible number of inputCols
org.apache.spark.ml.dsl.ComposeSuite ‑ Union throws an exception when a stage in result is type inconsistent
org.apache.spark.ml.dsl.ComposeSuite ‑ compose_< Source doesn't work
org.apache.spark.ml.dsl.ComposeSuite ‑ compose_< can append a stage to 2 heads
org.apache.spark.ml.dsl.ComposeSuite ‑ compose_< can append a stage to 2 heads from 1 tail
org.apache.spark.ml.dsl.ComposeSuite ‑ compose_< can append a stage to merged heads
org.apache.spark.ml.dsl.ComposeSuite ‑ compose_< can bypass Source of downstream
org.apache.spark.ml.dsl.ComposeSuite ‑ compose_> (PASSTHROUGH || Stage) generates 2 heads
org.apache.spark.ml.dsl.ComposeSuite ‑ compose_> PASSTHROUGH doesn't change the flow
org.apache.spark.ml.dsl.ComposeSuite ‑ compose_> Source doesn't work
org.apache.spark.ml.dsl.ComposeSuite ‑ compose_> can append a stage to 2 heads
org.apache.spark.ml.dsl.ComposeSuite ‑ compose_> can append a stage to 2 heads from 1 tail
org.apache.spark.ml.dsl.ComposeSuite ‑ compose_> can append a stage to merged heads
org.apache.spark.ml.dsl.ComposeSuite ‑ compose_> can bypass Source of downstream
org.apache.spark.ml.dsl.ComposeSuite ‑ declare API is equally effective
org.apache.spark.ml.dsl.ComposeSuite ‑ result of compose_< can be the first operand of compose_>
org.apache.spark.ml.dsl.ComposeSuite ‑ result of compose_> can be the first operand of compose_<
org.apache.spark.ml.dsl.ComposeSuite_PruneDownPath ‑ A compose_> (PASSTHROUGH || Stage) rebase_> B is associative
org.apache.spark.ml.dsl.ComposeSuite_PruneDownPath ‑ Compose throws an exception when operand2 is type inconsistent with output of operand1 as a Flow
org.apache.spark.ml.dsl.ComposeSuite_PruneDownPath ‑ Compose throws an exception when operand2 is type inconsistent with output of operand1 as a Source
org.apache.spark.ml.dsl.ComposeSuite_PruneDownPath ‑ Compose works when operand2 is type consistent
org.apache.spark.ml.dsl.ComposeSuite_PruneDownPath ‑ PASSTHROUGH compose_> Stage doesn't change the flow
org.apache.spark.ml.dsl.ComposeSuite_PruneDownPath ‑ Union throws an exception when a stage in result has incompatible number of inputCols
org.apache.spark.ml.dsl.ComposeSuite_PruneDownPath ‑ Union throws an exception when a stage in result is type inconsistent
org.apache.spark.ml.dsl.ComposeSuite_PruneDownPath ‑ compose_< Source doesn't work
org.apache.spark.ml.dsl.ComposeSuite_PruneDownPath ‑ compose_< can append a stage to 2 heads
org.apache.spark.ml.dsl.ComposeSuite_PruneDownPath ‑ compose_< can append a stage to 2 heads from 1 tail
org.apache.spark.ml.dsl.ComposeSuite_PruneDownPath ‑ compose_< can append a stage to merged heads
org.apache.spark.ml.dsl.ComposeSuite_PruneDownPath ‑ compose_< can bypass Source of downstream
org.apache.spark.ml.dsl.ComposeSuite_PruneDownPath ‑ compose_> (PASSTHROUGH || Stage) generates 2 heads
org.apache.spark.ml.dsl.ComposeSuite_PruneDownPath ‑ compose_> PASSTHROUGH doesn't change the flow
org.apache.spark.ml.dsl.ComposeSuite_PruneDownPath ‑ compose_> Source doesn't work
org.apache.spark.ml.dsl.ComposeSuite_PruneDownPath ‑ compose_> can append a stage to 2 heads
org.apache.spark.ml.dsl.ComposeSuite_PruneDownPath ‑ compose_> can append a stage to 2 heads from 1 tail
org.apache.spark.ml.dsl.ComposeSuite_PruneDownPath ‑ compose_> can append a stage to merged heads
org.apache.spark.ml.dsl.ComposeSuite_PruneDownPath ‑ compose_> can bypass Source of downstream
org.apache.spark.ml.dsl.ComposeSuite_PruneDownPath ‑ declare API is equally effective
org.apache.spark.ml.dsl.ComposeSuite_PruneDownPath ‑ result of compose_< can be the first operand of compose_>
org.apache.spark.ml.dsl.ComposeSuite_PruneDownPath ‑ result of compose_> can be the first operand of compose_<
org.apache.spark.ml.dsl.ComposeSuite_PruneDownPathKeepRoot ‑ A compose_> (PASSTHROUGH || Stage) rebase_> B is associative
org.apache.spark.ml.dsl.ComposeSuite_PruneDownPathKeepRoot ‑ Compose throws an exception when operand2 is type inconsistent with output of operand1 as a Flow
org.apache.spark.ml.dsl.ComposeSuite_PruneDownPathKeepRoot ‑ Compose throws an exception when operand2 is type inconsistent with output of operand1 as a Source
org.apache.spark.ml.dsl.ComposeSuite_PruneDownPathKeepRoot ‑ Compose works when operand2 is type consistent
org.apache.spark.ml.dsl.ComposeSuite_PruneDownPathKeepRoot ‑ PASSTHROUGH compose_> Stage doesn't change the flow
org.apache.spark.ml.dsl.ComposeSuite_PruneDownPathKeepRoot ‑ Union throws an exception when a stage in result has incompatible number of inputCols
org.apache.spark.ml.dsl.ComposeSuite_PruneDownPathKeepRoot ‑ Union throws an exception when a stage in result is type inconsistent
org.apache.spark.ml.dsl.ComposeSuite_PruneDownPathKeepRoot ‑ compose_< Source doesn't work
org.apache.spark.ml.dsl.ComposeSuite_PruneDownPathKeepRoot ‑ compose_< can append a stage to 2 heads
org.apache.spark.ml.dsl.ComposeSuite_PruneDownPathKeepRoot ‑ compose_< can append a stage to 2 heads from 1 tail
org.apache.spark.ml.dsl.ComposeSuite_PruneDownPathKeepRoot ‑ compose_< can append a stage to merged heads
org.apache.spark.ml.dsl.ComposeSuite_PruneDownPathKeepRoot ‑ compose_< can bypass Source of downstream
org.apache.spark.ml.dsl.ComposeSuite_PruneDownPathKeepRoot ‑ compose_> (PASSTHROUGH || Stage) generates 2 heads
org.apache.spark.ml.dsl.ComposeSuite_PruneDownPathKeepRoot ‑ compose_> PASSTHROUGH doesn't change the flow
org.apache.spark.ml.dsl.ComposeSuite_PruneDownPathKeepRoot ‑ compose_> Source doesn't work
org.apache.spark.ml.dsl.ComposeSuite_PruneDownPathKeepRoot ‑ compose_> can append a stage to 2 heads
org.apache.spark.ml.dsl.ComposeSuite_PruneDownPathKeepRoot ‑ compose_> can append a stage to 2 heads from 1 tail
org.apache.spark.ml.dsl.ComposeSuite_PruneDownPathKeepRoot ‑ compose_> can append a stage to merged heads
org.apache.spark.ml.dsl.ComposeSuite_PruneDownPathKeepRoot ‑ compose_> can bypass Source of downstream
org.apache.spark.ml.dsl.ComposeSuite_PruneDownPathKeepRoot ‑ declare API is equally effective
org.apache.spark.ml.dsl.ComposeSuite_PruneDownPathKeepRoot ‑ result of compose_< can be the first operand of compose_>
org.apache.spark.ml.dsl.ComposeSuite_PruneDownPathKeepRoot ‑ result of compose_> can be the first operand of compose_<
org.apache.spark.ml.dsl.DFDReadWriteSuite ‑ Flow can be serialized into JSON and back
org.apache.spark.ml.dsl.DFDReadWriteSuite ‑ Flow can be serialized into XML and back
org.apache.spark.ml.dsl.DFDReadWriteSuite ‑ Pipeline can be saved and loaded
org.apache.spark.ml.dsl.DFDReadWriteSuite ‑ PipelineModel can be saved and loaded
org.apache.spark.ml.dsl.DFDSuite ‑ Flow can build Pipeline
org.apache.spark.ml.dsl.DFDSuite ‑ Flow can build PipelineModel
org.apache.spark.ml.dsl.DFDSuite ‑ If adaptation = FailFast_TypeUnsafe, Flow can still build a full pipeline when some of the sources have inconsistent type
org.apache.spark.ml.dsl.DFDSuite ‑ If adaptation = Force, Flow can still build a full pipeline when some of the sources are missing
org.apache.spark.ml.dsl.DFDSuite ‑ If adaptation = Force, Flow can still build a full pipeline when some of the sources have inconsistent type
org.apache.spark.ml.dsl.DFDSuite ‑ If adaptation = IgnoreIrrelevant, Flow can build a full pipeline given a valid schema evidence
org.apache.spark.ml.dsl.DFDSuite ‑ If adaptation = IgnoreIrrelevant, Flow can build an incomplete pipeline when some of the sources are missing
org.apache.spark.ml.dsl.DFDSuite ‑ If adaptation = IgnoreIrrelevant, Flow can build an incomplete pipeline when some of the sources have inconsistent type
org.apache.spark.ml.dsl.DFDSuite ‑ If adaptation = IgnoreIrrelevant_TypeUnsafe, Flow can still build a full pipeline when some of the sources have inconsistent type
org.apache.spark.ml.dsl.DFDSuite ‑ If adaptation = FailFast, throw an exception when some of the sources are missing
org.apache.spark.ml.dsl.DFDSuite ‑ If adaptation = FailFast, throw an exception when some of the sources have inconsistent type
org.apache.spark.ml.dsl.DFDSuite ‑ If adaptation = IgnoreIrrelevant_ValidateSchema, Flow can build an incomplete pipeline when some of the sources are missing
org.apache.spark.ml.dsl.DFDSuite ‑ If adaptation = IgnoreIrrelevant_ValidateSchema, throw an exception when some of the sources have inconsistent type
org.apache.spark.ml.dsl.DFDSuite ‑ Pipeline can be visualized as ASCII art
org.apache.spark.ml.dsl.DFDSuite ‑ Pipeline can be visualized as ASCII art backwards
org.apache.spark.ml.dsl.MapHeadSuite ‑ mapHead_< Source doesn't work
org.apache.spark.ml.dsl.MapHeadSuite ‑ mapHead_< can append to 2 heads
org.apache.spark.ml.dsl.MapHeadSuite ‑ mapHead_< can generate 2 stage replicas and append to 2 heads
org.apache.spark.ml.dsl.MapHeadSuite ‑ mapHead_< can generate 2 stage replicas and append to 2 selected
org.apache.spark.ml.dsl.MapHeadSuite ‑ mapHead_< won't remove Source of downstream if it's in tails of both sides
org.apache.spark.ml.dsl.MapHeadSuite ‑ mapHead_> Source doesn't work
org.apache.spark.ml.dsl.MapHeadSuite ‑ mapHead_> can append to 2 heads
org.apache.spark.ml.dsl.MapHeadSuite ‑ mapHead_> can generate 2 stage replicas and append to 2 heads
org.apache.spark.ml.dsl.MapHeadSuite ‑ mapHead_> can generate 2 stage replicas and append to 2 selected
org.apache.spark.ml.dsl.MapHeadSuite ‑ mapHead_> won't remove Source of downstream if it's in tails of both sides
org.apache.spark.ml.dsl.MapHeadSuite_PruneDownPath ‑ mapHead_< Source doesn't work
org.apache.spark.ml.dsl.MapHeadSuite_PruneDownPath ‑ mapHead_< can append to 2 heads
org.apache.spark.ml.dsl.MapHeadSuite_PruneDownPath ‑ mapHead_< can generate 2 stage replicas and append to 2 heads
org.apache.spark.ml.dsl.MapHeadSuite_PruneDownPath ‑ mapHead_< can generate 2 stage replicas and append to 2 selected
org.apache.spark.ml.dsl.MapHeadSuite_PruneDownPath ‑ mapHead_< won't remove Source of downstream if it's in tails of both sides
org.apache.spark.ml.dsl.MapHeadSuite_PruneDownPath ‑ mapHead_> Source doesn't work
org.apache.spark.ml.dsl.MapHeadSuite_PruneDownPath ‑ mapHead_> can append to 2 heads
org.apache.spark.ml.dsl.MapHeadSuite_PruneDownPath ‑ mapHead_> can generate 2 stage replicas and append to 2 heads
org.apache.spark.ml.dsl.MapHeadSuite_PruneDownPath ‑ mapHead_> can generate 2 stage replicas and append to 2 selected
org.apache.spark.ml.dsl.MapHeadSuite_PruneDownPath ‑ mapHead_> won't remove Source of downstream if it's in tails of both sides
org.apache.spark.ml.dsl.MapHeadSuite_PruneDownPathKeepRoot ‑ mapHead_< Source doesn't work
org.apache.spark.ml.dsl.MapHeadSuite_PruneDownPathKeepRoot ‑ mapHead_< can append to 2 heads
org.apache.spark.ml.dsl.MapHeadSuite_PruneDownPathKeepRoot ‑ mapHead_< can generate 2 stage replicas and append to 2 heads
org.apache.spark.ml.dsl.MapHeadSuite_PruneDownPathKeepRoot ‑ mapHead_< can generate 2 stage replicas and append to 2 selected
org.apache.spark.ml.dsl.MapHeadSuite_PruneDownPathKeepRoot ‑ mapHead_< won't remove Source of downstream if it's in tails of both sides
org.apache.spark.ml.dsl.MapHeadSuite_PruneDownPathKeepRoot ‑ mapHead_> Source doesn't work
org.apache.spark.ml.dsl.MapHeadSuite_PruneDownPathKeepRoot ‑ mapHead_> can append to 2 heads
org.apache.spark.ml.dsl.MapHeadSuite_PruneDownPathKeepRoot ‑ mapHead_> can generate 2 stage replicas and append to 2 heads
org.apache.spark.ml.dsl.MapHeadSuite_PruneDownPathKeepRoot ‑ mapHead_> can generate 2 stage replicas and append to 2 selected
org.apache.spark.ml.dsl.MapHeadSuite_PruneDownPathKeepRoot ‑ mapHead_> won't remove Source of downstream if it's in tails of both sides
org.apache.spark.ml.dsl.SchemaAdaptationSuite ‑ cartesianProduct should work on empty list
org.apache.spark.ml.dsl.SchemaAdaptationSuite ‑ cartesianProduct should work on list
org.apache.spark.ml.dsl.SchemaAdaptationSuite ‑ cartesianProduct should work on list of empty sets
org.apache.spark.ml.dsl.TrieNodeSuite ‑ compact can merge single child parents
org.apache.spark.ml.dsl.TrieNodeSuite ‑ pruneUp can rename single children
org.apache.spark.ml.dsl.TrieNodeSuite ‑ reversed compact can minimize repeated names
org.apache.spark.ml.dsl.TrieNodeSuite ‑ reversed compact can minimize some names
org.apache.spark.ml.dsl.TrieNodeSuite ‑ reversed pruneUp can minimize names
org.apache.spark.ml.dsl.UDFTransformerSuite ‑ transformer can add new column
org.apache.spark.ml.dsl.UDFTransformerSuite ‑ transformer has consistent schema
org.apache.spark.ml.dsl.utils.DSLUtilsSuite ‑ methodName should return caller's name
org.apache.spark.ml.dsl.utils.NullSafetySuite ‑ CannotBeNull can only be converted from Some
org.apache.spark.ml.dsl.utils.NullSafetySuite ‑ String ? Var supports mutation
org.apache.spark.ml.dsl.utils.NullSafetySuite ‑ can be converted from option
org.apache.spark.ml.dsl.utils.NullSafetySuite ‑ can be converted from value
org.apache.spark.ml.dsl.utils.RecursiveEitherAsUnionToJSONSpike ‑ JSON <=> Union of arity 3
org.apache.spark.ml.dsl.utils.RecursiveEitherAsUnionToJSONSpike ‑ JSON <=> case class with Option[Union] of arity 3

Check notice on line 0 in .github

See this annotation in the file changed.

@github-actions github-actions / Unit Test Results

744 tests found (test 614 to 744)

There are 744 tests, see "Raw output" for the list of tests 614 to 744.
Raw output
org.apache.spark.ml.dsl.utils.RecursiveEitherAsUnionToJSONSpike ‑ JSON <=> case class with Union of arity 3
org.apache.spark.ml.dsl.utils.ScalaNameMixinSuite ‑ can process anonymous function dependent object
org.apache.spark.ml.dsl.utils.XMLWeakDeserializerSuite ‑ double to int
org.apache.spark.ml.dsl.utils.XMLWeakDeserializerSuite ‑ empty string to Object
org.apache.spark.ml.dsl.utils.XMLWeakDeserializerSuite ‑ empty string to Option[Map]
org.apache.spark.ml.dsl.utils.XMLWeakDeserializerSuite ‑ empty string to default constructor value
org.apache.spark.ml.dsl.utils.XMLWeakDeserializerSuite ‑ int array to int array
org.apache.spark.ml.dsl.utils.XMLWeakDeserializerSuite ‑ int to String
org.apache.spark.ml.dsl.utils.XMLWeakDeserializerSuite ‑ int to int array
org.apache.spark.ml.dsl.utils.XMLWeakDeserializerSuite ‑ int to int seq
org.apache.spark.ml.dsl.utils.XMLWeakDeserializerSuite ‑ int to int set
org.apache.spark.ml.dsl.utils.XMLWeakDeserializerSuite ‑ missing member to default constructor value
org.apache.spark.ml.dsl.utils.XMLWeakDeserializerSuite ‑ sanity test
org.apache.spark.ml.dsl.utils.XMLWeakDeserializerSuite ‑ string to int
org.apache.spark.ml.dsl.utils.XMLWeakDeserializerSuite ‑ string to int array
org.apache.spark.ml.dsl.utils.XMLWeakDeserializerSuite ‑ string to int seq
org.apache.spark.ml.dsl.utils.XMLWeakDeserializerSuite ‑ string to int set
org.apache.spark.ml.dsl.utils.data.EAVSuite ‑ nested <=> JSON
org.apache.spark.ml.dsl.utils.data.EAVSuite ‑ tryGetEnum can convert String to Enumeration
org.apache.spark.ml.dsl.utils.data.EAVSuite ‑ wellformed <=> JSON
org.apache.spark.ml.dsl.utils.data.EAVSuite ‑ withNull <=> JSON
org.apache.spark.ml.dsl.utils.data.Json4sSpike ‑ encode/decode ListMap
org.apache.spark.ml.dsl.utils.data.Json4sSpike ‑ encode/decode Path
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSpike ‑ TypeTag from Type can be serializable
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSpike ‑ can create ClassTag for Array[T]
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSpike ‑ can get TypeTag
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSpike ‑ can get another TypeTag
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSpike ‑ can reflect anon class
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSpike ‑ can reflect lambda
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSpike ‑ scala reflection can be used to get type of Array[String].headOption
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From Array[scala.Tuple2] has a mirror
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From Array[scala.Tuple2] is Serializable
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From TypeTag[(String, Int)] ... even if created from raw Type
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From TypeTag[(String, Int)] has a mirror
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From TypeTag[(String, Int)] is Serializable
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From TypeTag[Array[(String, Int)]] ... even if created from raw Type
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From TypeTag[Array[(String, Int)]] has a mirror
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From TypeTag[Array[(String, Int)]] is Serializable
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From TypeTag[Array[Double]] ... even if created from raw Type
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From TypeTag[Array[Double]] has a mirror
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From TypeTag[Array[Double]] is Serializable
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From TypeTag[Array[Int]] ... even if created from raw Type
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From TypeTag[Array[Int]] has a mirror
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From TypeTag[Array[Int]] is Serializable
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From TypeTag[Array[String]] ... even if created from raw Type
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From TypeTag[Array[String]] has a mirror
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From TypeTag[Array[String]] is Serializable
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From TypeTag[Double] ... even if created from raw Type
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From TypeTag[Double] has a mirror
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From TypeTag[Double] is Serializable
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From TypeTag[Int] ... even if created from raw Type
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From TypeTag[Int] has a mirror
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From TypeTag[Int] is Serializable
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From TypeTag[Seq[(String, Int)]] ... even if created from raw Type
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From TypeTag[Seq[(String, Int)]] has a mirror
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From TypeTag[Seq[(String, Int)]] is Serializable
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From TypeTag[Seq[String]] ... even if created from raw Type
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From TypeTag[Seq[String]] has a mirror
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From TypeTag[Seq[String]] is Serializable
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From TypeTag[String] ... even if created from raw Type
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From TypeTag[String] has a mirror
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From TypeTag[String] is Serializable
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From TypeTag[com.tribbloids.spookystuff.relay.TestBeans.Multipart.type] ... even if created from raw Type
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From TypeTag[com.tribbloids.spookystuff.relay.TestBeans.Multipart.type] has a mirror
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From TypeTag[com.tribbloids.spookystuff.relay.TestBeans.Multipart.type] is Serializable
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From TypeTag[com.tribbloids.spookystuff.relay.TestBeans.Multipart] ... even if created from raw Type
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From TypeTag[com.tribbloids.spookystuff.relay.TestBeans.Multipart] has a mirror
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From TypeTag[com.tribbloids.spookystuff.relay.TestBeans.Multipart] is Serializable
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From TypeTag[com.tribbloids.spookystuff.relay.TestBeans.User] ... even if created from raw Type
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From TypeTag[com.tribbloids.spookystuff.relay.TestBeans.User] has a mirror
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From TypeTag[com.tribbloids.spookystuff.relay.TestBeans.User] is Serializable
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From TypeTag[java.sql.Timestamp] ... even if created from raw Type
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From TypeTag[java.sql.Timestamp] has a mirror
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From TypeTag[java.sql.Timestamp] is Serializable
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From scala.Tuple2 has a mirror
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From scala.Tuple2 is Serializable
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From scala.collection.immutable.Seq has a mirror
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ From scala.collection.immutable.Seq is Serializable
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ [(String, Int) <: TypeTag] ==> Class ==> ?
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ [(String, Int) <: TypeTag] ==> ClassTag ==> ?
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ [(String, Int) <: TypeTag] ==> DataType ==> ?
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ [ArrayType(DoubleType,false) <: DataType] <==> TypeTag
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ [ArrayType(IntegerType,false) <: DataType] <==> TypeTag
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ [ArrayType(StringType,true) <: DataType] <==> TypeTag
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ [Array[(String, Int)] <: TypeTag] ==> Class ==> ?
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ [Array[(String, Int)] <: TypeTag] ==> ClassTag ==> ?
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ [Array[(String, Int)] <: TypeTag] ==> DataType ==> ?
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ [Array[Double] <: TypeTag] <==> DataType
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ [Array[Double] <: TypeTag] ==> Class ==> ?
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ [Array[Double] <: TypeTag] ==> ClassTag ==> ?
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ [Array[Int] <: TypeTag] <==> DataType
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ [Array[Int] <: TypeTag] ==> Class ==> ?
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ [Array[Int] <: TypeTag] ==> ClassTag ==> ?
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ [Array[String] <: TypeTag] <==> DataType
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ [Array[String] <: TypeTag] ==> Class ==> ?
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ [Array[String] <: TypeTag] ==> ClassTag ==> ?
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ [Double <: TypeTag] <==> Class
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ [Double <: TypeTag] <==> ClassTag
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ [Double <: TypeTag] <==> DataType
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ [DoubleType <: DataType] <==> TypeTag
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ [Int <: TypeTag] <==> Class
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ [Int <: TypeTag] <==> ClassTag
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ [Int <: TypeTag] <==> DataType
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ [IntegerType <: DataType] <==> TypeTag
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ [Seq[(String, Int)] <: TypeTag] ==> Class ==> ?
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ [Seq[(String, Int)] <: TypeTag] ==> ClassTag ==> ?
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ [Seq[(String, Int)] <: TypeTag] ==> DataType ==> ?
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ [Seq[String] <: TypeTag] ==> Class ==> ?
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ [Seq[String] <: TypeTag] ==> ClassTag ==> ?
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ [Seq[String] <: TypeTag] ==> DataType ==> ?
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ [String <: TypeTag] <==> Class
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ [String <: TypeTag] <==> ClassTag
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ [String <: TypeTag] <==> DataType
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ [StringType <: DataType] <==> TypeTag
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ [TimestampType <: DataType] <==> TypeTag
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ [com.tribbloids.spookystuff.relay.TestBeans.Multipart <: TypeTag] <==> Class
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ [com.tribbloids.spookystuff.relay.TestBeans.Multipart <: TypeTag] <==> ClassTag
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ [com.tribbloids.spookystuff.relay.TestBeans.Multipart.type <: TypeTag] <==> Class
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ [com.tribbloids.spookystuff.relay.TestBeans.Multipart.type <: TypeTag] <==> ClassTag
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ [com.tribbloids.spookystuff.relay.TestBeans.User <: TypeTag] <==> Class
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ [com.tribbloids.spookystuff.relay.TestBeans.User <: TypeTag] <==> ClassTag
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ [java.sql.Timestamp <: TypeTag] <==> Class
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ [java.sql.Timestamp <: TypeTag] <==> ClassTag
org.apache.spark.ml.dsl.utils.refl.TypeMagnetSuite ‑ [java.sql.Timestamp <: TypeTag] <==> DataType
org.apache.spark.ml.dsl.utils.refl.TypeSpike ‑ List type equality
org.apache.spark.ml.dsl.utils.refl.TypeSpike ‑ Map type equality
org.apache.spark.ml.dsl.utils.refl.UnReifiedObjectTypeSuite ‑ toString
org.apache.spark.rdd.spookystuff.FallbackIteratorSuite ‑ can consume from 1 iterator
org.apache.spark.rdd.spookystuff.FallbackIteratorSuite ‑ can consume from 2 iterators
org.apache.spark.rdd.spookystuff.ScalaTestJUnitRunnerSpike$$anon$1 ‑ test 1
org.apache.spark.rdd.spookystuff.ScalaTestJUnitRunnerSpike$$anon$2 ‑ test 1