java.lang.ClassCastException: cannot assign instance of scala.collection.immutable.List$SerializationProxy to field org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$dependencies_ of type scala.collection.Seq in instance of org.apache.spark.rdd.MapPartitionsRDD #674

Open
ghost opened this issue Jun 29, 2017 · 19 comments

ghost commented Jun 29, 2017

A java.lang.ClassCastException is thrown when I run the mapPartitions() job a second time.

environment:

snappydata 0.9

4 machine:
node1: locator
node2: locator
node3: server
node4: server, lead

config:
spark.closure.serializer=org.apache.spark.serializer.JavaSerializer
spark.serializer=org.apache.spark.serializer.JavaSerializer

description:

my code:

object SnappyJobSelectAndSearchSimpleToGithub extends SnappySQLJob {

    override def isValidJob(sc: SnappySession, config: Config): SnappyJobValidation = {
        SnappyJobValid()
    }

    override def runSnappyJob(snSession: SnappySession, conf: Config): Any = {
        val table = "app.test_table_60m_from_text_03"
        import snSession.sqlContext.implicits._
        // query the table, extract the uuid of each row per partition,
        // and trigger the job with first()
        val rs = snSession.sql("select * from " + table)
        val topKs2 = rs.as[ImageRow].mapPartitions(p => {
            p.map(r => {
                println(r)
                r.uuid
            })
        }).first()

        ...
    }
}

I posted a new Snappy job jar to the snappy-jobserver (via the REST API) and ran it the first time; it worked fine.

create context:
POST node76:8090/contexts/context07?contextFactory=org.apache.spark.sql.SnappySessionFactory&dependent-jar-uris=/path/to/jars

log:

17/06/28 15:00:37.853 CST Executor task launch worker-2<tid=0x9a> INFO SnappyExecutor: Fetching spark://172.20.2.76:60963/jars/jar_niko_48-2017-06-28T15_00_25.148+08_00.jar with timestamp 1498633237666
17/06/28 15:00:37.889 CST Executor task launch worker-2<tid=0x9a> INFO TransportClientFactory: Found inactive connection to /172.20.2.76:60963, creating a new one.
17/06/28 15:00:37.894 CST Executor task launch worker-2<tid=0x9a> INFO TransportClientFactory: Successfully created connection to /172.20.2.76:60963 after 4 ms (0 ms spent in bootstraps)
17/06/28 15:00:37.894 CST Executor task launch worker-2<tid=0x9a> INFO Utils: Fetching spark://172.20.2.76:60963/jars/jar_niko_48-2017-06-28T15_00_25.148+08_00.jar to /tmp/spark-637bf360-bf37-42eb-9c7a-4b5da1eea954/fetchFileTemp4994428655346397391.tmp
17/06/28 15:00:37.916 CST Executor task launch worker-2<tid=0x9a> INFO Utils: Copying /tmp/spark-637bf360-bf37-42eb-9c7a-4b5da1eea954/9963592201498633237666_cache to /foo/foo2/snappydata/snappydata-0.9-data/server/./jar_niko_48-2017-06-28T15_00_25.148+08_00.jar
17/06/28 15:00:37.921 CST Executor task launch worker-2<tid=0x9a> INFO SnappyExecutor: Adding file:/foo/foo2/snappydata/snappydata-0.9-data/server/./jar_niko_48-2017-06-28T15_00_25.148+08_00.jar to class loader

But when I ran the job again, a java.lang.ClassCastException was thrown, and SnappyExecutor logged "As some of the Jars have been deleted, setting up a new ClassLoader for subsequent Threads". (I guess this may be related to the ClassLoader.)

snappy-server log:

17/06/28 15:03:48.265 CST Executor task launch worker-3<tid=0x9a> INFO SnappyMutableURLClassLoader: Removing jar_niko_48-2017-06-28T15_00_25.148+08_00.jar from Spark root directory
17/06/28 15:03:48.267 CST Executor task launch worker-3<tid=0x9a> INFO SnappyExecutor: As some of the Jars have been deleted, setting up a new ClassLoader for subsequent Threads
17/06/28 15:03:48.267 CST Executor task launch worker-3<tid=0x9a> INFO SnappyExecutor: removed jar jar_niko_48-2017-06-28T15_00_25.148+08_00.jar
17/06/28 15:03:48.277 CST Executor task launch worker-3<tid=0x9a> ERROR SnappyExecutor: Exception in task 0.0 in stage 5.0 (TID 14)
java.lang.ClassCastException: cannot assign instance of scala.collection.immutable.List$SerializationProxy to field org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$dependencies_ of type scala.collection.Seq in instance of org.apache.spark.rdd.MapPartitionsRDD
        at java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2133)
        at java.io.ObjectStreamClass.setObjFieldValues(ObjectStreamClass.java:1305)
        at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2024)
        at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
        at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2018)
        at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
        at java.io.ObjectInputStream.readObject(ObjectInputStream.java:373)
        at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
        at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:75)
        at org.apache.spark.scheduler.Task.run(Task.scala:107)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:277)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
17/06/28 15:03:48.292 CST Executor task launch worker-3<tid=0x9a> ERROR SnappyExecutor: Exception in task 0.1 in stage 5.0 (TID 15)
java.lang.ClassCastException: cannot assign instance of scala.collection.immutable.List$SerializationProxy to field org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$dependencies_ of type scala.collection.Seq in instance of org.apache.spark.rdd.MapPartitionsRDD
        (same stack trace as above)
17/06/28 15:03:48.301 CST Executor task launch worker-3<tid=0x9a> ERROR SnappyExecutor: Exception in task 0.2 in stage 5.0 (TID 16)
java.lang.ClassCastException: cannot assign instance of scala.collection.immutable.List$SerializationProxy to field org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$dependencies_ of type scala.collection.Seq in instance of org.apache.spark.rdd.MapPartitionsRDD
        (same stack trace as above)



snappy-leader log:

17/06/28 15:03:48.321 CST pool-23-thread-5<tid=0x19> ERROR JobManagerActor: Exception from job 173b83cc-98e0-476a-ab34-fe24eecc6a5b:
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 5.0 failed 4 times, most recent failure: Lost task 0.3 in stage 5.0 (TID 17, node75.st): java.lang.ClassCastException: cannot assign instance of scala.collection.immutable.List$SerializationProxy to field org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$dependencies_ of type scala.collection.Seq in instance of org.apache.spark.rdd.MapPartitionsRDD
        at java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2133)
        at java.io.ObjectStreamClass.setObjFieldValues(ObjectStreamClass.java:1305)
        at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2024)
        at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
        at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2018)
        at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
        at java.io.ObjectInputStream.readObject(ObjectInputStream.java:373)
        at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
        at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:75)
        at org.apache.spark.scheduler.Task.run(Task.scala:107)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:277)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1469)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1457)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1456)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1456)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:809)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:809)
        at scala.Option.foreach(Option.scala:257)
        at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:809)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1682)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1637)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1626)
        at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
        at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:630)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1899)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1912)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1925)
        at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:347)
        at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:39)
        at org.apache.spark.sql.execution.CodegenSparkFallback$$anonfun$executeCollect$1.apply(CodegenSparkFallback.scala:97)
        at org.apache.spark.sql.execution.CodegenSparkFallback$$anonfun$executeCollect$1.apply(CodegenSparkFallback.scala:97)
        at org.apache.spark.sql.execution.CodegenSparkFallback.executeWithFallback(CodegenSparkFallback.scala:38)
        at org.apache.spark.sql.execution.CodegenSparkFallback.executeCollect(CodegenSparkFallback.scala:97)
        at org.apache.spark.sql.Dataset$$anonfun$org$apache$spark$sql$Dataset$$execute$1$1.apply(Dataset.scala:2193)
        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
        at org.apache.spark.sql.Dataset.withNewExecutionId(Dataset.scala:2546)
        at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$execute$1(Dataset.scala:2192)
        at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collect(Dataset.scala:2199)
        at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:1935)
        at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:1934)
        at org.apache.spark.sql.Dataset.withTypedCallback(Dataset.scala:2576)
        at org.apache.spark.sql.Dataset.head(Dataset.scala:1934)
        at org.apache.spark.sql.Dataset.head(Dataset.scala:1943)
        at org.apache.spark.sql.Dataset.first(Dataset.scala:1950)
        at search.SnappyJobSelectAndSearchSimple05$.foo(SnappyJobSelectAndSearchSimple05.scala:67)
        at search.SnappyJobSelectAndSearchSimple05$.runSnappyJob(SnappyJobSelectAndSearchSimple05.scala:89)
        at org.apache.spark.sql.SnappySQLJob$class.runJob(SnappySessionFactory.scala:75)
        at search.SnappyJobSelectAndSearchSimple05$.runJob(SnappyJobSelectAndSearchSimple05.scala:28)
        at spark.jobserver.JobManagerActor$$anonfun$spark$jobserver$JobManagerActor$$getJobFuture$4.apply(JobManagerActor.scala:327)
        at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
        at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ClassCastException: cannot assign instance of scala.collection.immutable.List$SerializationProxy to field org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$dependencies_ of type scala.collection.Seq in instance of org.apache.spark.rdd.MapPartitionsRDD
        at java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2133)
        at java.io.ObjectStreamClass.setObjFieldValues(ObjectStreamClass.java:1305)
        at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2024)
        at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
        at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2018)
        at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
        at java.io.ObjectInputStream.readObject(ObjectInputStream.java:373)
        at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
        at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:75)
        at org.apache.spark.scheduler.Task.run(Task.scala:107)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:277)
        ... 3 more
rishitesh (Contributor) commented Jun 29, 2017

Have you tried with the default serializers, i.e. without specifying JavaSerializer in the config? I will check with JavaSerializer and get back to you.

rishitesh self-assigned this Jun 29, 2017
jeromeheng commented Jun 29, 2017

@rishitesh
We used the default serializer (PooledKryoSerializer), but it was worse: the very first run already failed because classes could not be found (closure inner classes and so on), throwing "java.lang.ClassNotFoundException: search.SnappyJobSelectAndSearch02$$anonfun$2".
So we tried a different approach and changed the default: spark.closure.serializer=org.apache.spark.serializer.JavaSerializer

jeromeheng commented Jun 29, 2017

By the way, the exception looked like this:
17/06/26 21:55:43.228 CST pool-23-thread-2<tid=0xdf> ERROR JobManagerActor: Exception from job 092b3105-ee99-46bc-a7bc-b41edd3f1347:
org.apache.spark.SparkException: Job aborted due to stage failure: Task 16 in stage 0.0 failed 4 times, most recent failure: Lost task 16.3 in stage 0.0 (TID 41, node76.st): com.esotericsoftware.kryo.KryoException: Unable to find class: search.SnappyJobSelectAndSearch02$$anonfun$2
Serialization trace:
func (org.apache.spark.sql.execution.MapPartitionsExec)
$outer (org.apache.spark.sql.execution.MapPartitionsExec$$anonfun$6)
f$22 (org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1)
$outer (org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24)
f (org.apache.spark.rdd.MapPartitionsRDD)
$outer (org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1)
$outer (org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24)
f (org.apache.spark.rdd.MapPartitionsRDD)
at com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:160)
at com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:133)
at com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:693)
at com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:118)
at com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:540)
at com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:731)
at com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:125)
at com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:540)
at com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:731)
at com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:125)
at com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:540)
at com.esotericsoftware.kryo.Kryo.readObjectOrNull(Kryo.java:782)
at com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:132)
at com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:540)
at com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:731)
at com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:125)
at com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:540)
at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:813)
at org.apache.spark.sql.execution.WholeStageCodegenRDD.read(WholeStageCodegenExec.scala:581)
at com.esotericsoftware.kryo.serializers.DefaultSerializers$KryoSerializableSerializer.read(DefaultSerializers.java:514)
at com.esotericsoftware.kryo.serializers.DefaultSerializers$KryoSerializableSerializer.read(DefaultSerializers.java:506)
at com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:731)
at com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:125)
at com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:540)
at com.esotericsoftware.kryo.Kryo.readObjectOrNull(Kryo.java:782)
at com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:132)
at com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:540)
at com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:731)
at com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:125)
at com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:540)
at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:813)
at com.twitter.chill.Tuple2Serializer.read(TupleSerializers.scala:41)
at com.twitter.chill.Tuple2Serializer.read(TupleSerializers.scala:33)
at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:813)
at org.apache.spark.serializer.PooledKryoSerializerInstance.deserialize(PooledKryoSerializer.scala:291)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:75)
at org.apache.spark.scheduler.Task.run(Task.scala:107)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:277)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: search.SnappyJobSelectAndSearch02$$anonfun$2
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at org.apache.spark.executor.SnappyMutableURLClassLoader.loadClassFromJobJar(SnappyExecutor.scala:156)
at org.apache.spark.executor.SnappyMutableURLClassLoader$$anonfun$loadClass$2$$anonfun$apply$3$$anonfun$apply$4.apply(SnappyExecutor.scala:140)
at org.apache.spark.executor.SnappyMutableURLClassLoader$$anonfun$loadClass$2$$anonfun$apply$3$$anonfun$apply$4.apply(SnappyExecutor.scala:140)
at org.apache.spark.executor.SnappyMutableURLClassLoader.loadJar(SnappyExecutor.scala:145)
at org.apache.spark.executor.SnappyMutableURLClassLoader$$anonfun$loadClass$2$$anonfun$apply$3.apply(SnappyExecutor.scala:140)
at org.apache.spark.executor.SnappyMutableURLClassLoader$$anonfun$loadClass$2$$anonfun$apply$3.apply(SnappyExecutor.scala:140)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.executor.SnappyMutableURLClassLoader$$anonfun$loadClass$2.apply(SnappyExecutor.scala:140)
at org.apache.spark.executor.SnappyMutableURLClassLoader$$anonfun$loadClass$2.apply(SnappyExecutor.scala:140)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.executor.SnappyMutableURLClassLoader.loadClass(SnappyExecutor.scala:139)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:154)
... 40 more

Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1469)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1457)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1456)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1456)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:809)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:809)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:809)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1682)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1637)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1626)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:630)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1899)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1912)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1925)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1939)
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:913)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:359)
at org.apache.spark.rdd.RDD.collect(RDD.scala:912)
at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:290)
at org.apache.spark.sql.execution.CodegenSparkFallback$$anonfun$executeCollect$1.apply(CodegenSparkFallback.scala:97)
at org.apache.spark.sql.execution.CodegenSparkFallback$$anonfun$executeCollect$1.apply(CodegenSparkFallback.scala:97)
at org.apache.spark.sql.execution.CodegenSparkFallback.executeWithFallback(CodegenSparkFallback.scala:38)
at org.apache.spark.sql.execution.CodegenSparkFallback.executeCollect(CodegenSparkFallback.scala:97)
at org.apache.spark.sql.Dataset$$anonfun$org$apache$spark$sql$Dataset$$execute$1$1.apply(Dataset.scala:2193)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
at org.apache.spark.sql.Dataset.withNewExecutionId(Dataset.scala:2546)
at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$execute$1(Dataset.scala:2192)
at org.apache.spark.sql.Dataset$$anonfun$org$apache$spark$sql$Dataset$$collect$1.apply(Dataset.scala:2197)
at org.apache.spark.sql.Dataset$$anonfun$org$apache$spark$sql$Dataset$$collect$1.apply(Dataset.scala:2197)
at org.apache.spark.sql.Dataset.withCallback(Dataset.scala:2559)
at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collect(Dataset.scala:2197)
at org.apache.spark.sql.Dataset.collect(Dataset.scala:2173)
at search.SnappyJobSelectAndSearch02$.foo(SnappyJobSelectAndSearch02.scala:102)
at search.SnappyJobSelectAndSearch02$.runSnappyJob(SnappyJobSelectAndSearch02.scala:133)
at org.apache.spark.sql.SnappySQLJob$class.runJob(SnappySessionFactory.scala:75)
at search.SnappyJobSelectAndSearch02$.runJob(SnappyJobSelectAndSearch02.scala:28)
at spark.jobserver.JobManagerActor$$anonfun$spark$jobserver$JobManagerActor$$getJobFuture$4.apply(JobManagerActor.scala:327)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.esotericsoftware.kryo.KryoException: Unable to find class: search.SnappyJobSelectAndSearch02$$anonfun$2
        (same serialization trace and stack frames as in the first occurrence above)
Caused by: java.lang.ClassNotFoundException: search.SnappyJobSelectAndSearch02$$anonfun$2
        (same stack frames as above)

rishitesh (Contributor) commented Jun 29, 2017

Each time a job is submitted, the previous job jar is removed from the classpath so that users can safely deploy new changes. Hence the job jar has to be supplied with every submission; this is a known limitation. Could you let me know how you are submitting the job the second time?

jeromeheng commented Jun 29, 2017

We ran the same job again, and on the second run it threw the exception.

rishitesh (Contributor) commented Jun 29, 2017

OK, will check. I am assuming you provide the --app-jar option the second time as well.

jeromeheng commented Jun 29, 2017

@rishitesh
Thanks for your support, I'll wait for your update :)

rishitesh (Contributor) commented Jun 29, 2017

@jeromeheng Sorry, I missed the sentence where you mentioned that you are submitting jobs via the REST API. We have not tested the REST API way of submitting jobs much; we have wrapped this mechanism in the snappy-job command, e.g.

$SNAPPY_HOME/bin/snappy-job.sh submit --app-name CreatePartitionedRowTable --class org.apache.spark.examples.snappydata.CreatePartitionedRowTable --app-jar examples/jars/quickstart.jar --lead hostNameOfLead:8090

You should use this command to submit the jobs. Please see http://snappydatainc.github.io/snappydata/programming_guide/#submitting-jobs for details.

Let me know if I misunderstood anything.

ghost commented Jun 29, 2017

Without the dependent-jar-uris support of the jobserver, I have to build an assembly jar. But the fat jar is too large (almost 50 MB), which makes the jobserver GC frequently and run slowly (tested on jobserver 0.7). So I need the dependent-jar-uris support of the jobserver.
Do you have any advice for this scenario on SnappyData, other than building a fat jar?
Thanks.

rishitesh (Contributor) commented:

@niko2014 You can use our built-in install-jar commands to load dependent jars onto the Snappy servers.
The job jar then only needs to contain the job logic.
For details see http://snappydatainc.github.io/snappydata/programming_guide/#managing-jar-files
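
(For illustration, a minimal sketch of issuing such an install-jar call from Scala over JDBC. The connection URL, jar path, and jar alias below are placeholders rather than values from this issue; the same call can also be run from the snappy SQL shell, as shown further down in this thread.)

import java.sql.DriverManager

// Hypothetical sketch: deploy a dependency jar to the cluster via SQLJ.INSTALL_JAR.
// Assumes the SnappyData JDBC client endpoint on a locator (default client port 1527);
// host, jar path, and alias are illustrative only.
object InstallDependencyJar {
  def main(args: Array[String]): Unit = {
    val conn = DriverManager.getConnection("jdbc:snappydata://locator-host:1527/")
    try {
      val stmt = conn.createStatement()
      stmt.execute(
        "call sqlj.install_jar('/path/to/dependency.jar', 'APP.my_dependency_jar', 0)")
      stmt.close()
    } finally {
      conn.close()
    }
  }
}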

ghost commented Jul 3, 2017

@rishitesh
I put some dependent jars into the $SNAPPY_HOME/jars directory, and the job worked fine.
But now we need to connect to an hbase-1.1.2 cluster from our Snappy job, while snappydata-0.9 ships with hbase-0.98.17.
Will there be a conflict if I install the hbase 1.1.2 jars into SnappyData with SQLJ.INSTALL_JAR()? Does the SnappyData classloader support that?
Thanks.

rishitesh (Contributor) commented Jul 3, 2017

@niko2014 Yes, that might cause a problem as the root classloader will have hbase classes from hbase-0.98.17.

sumwale (Contributor) commented Jul 3, 2017

@niko2014 I think it should work, so try it out. The GemFire layer uses HBase only for custom HDFS overflow, which is not available in SnappyData, so those classes should never get loaded otherwise.

The requirement to support "dependent-jar-uris" and an equivalent of the "--packages" option is being tracked here: https://jira.snappydata.io/browse/SNAP-1529
@rishitesh

rishitesh (Contributor) commented Jul 3, 2017

@niko2014 Yes, SNAP-1529 is work in progress and not available in 0.9.
Regarding HBase, yes, it's worth a try.

ghost commented Jul 3, 2017

@rishitesh

I installed the hbase-client-1.1.2 jar with call sqlj.install_jar('/path/hbase-client-1.1.2.jar', 'APP.custom_procs_hbase_client', 0);. But java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/client/ConnectionFactory was thrown when running the Snappy job. Am I installing the jar incorrectly?

By the way: org/apache/hadoop/hbase/client/ConnectionFactory only exists in hbase 0.99 and above, while org.apache.hadoop.hbase.util.Writables exists in hbase 0.98.

I printed the Writables class info with println("classOf[Writables] => " + classOf[Writables]), and the log line classOf[Writables] => class org.apache.hadoop.hbase.util.Writables shows that hbase-client-0.98.17-hadoop2.jar was loaded, while org/apache/hadoop/hbase/client/ConnectionFactory from hbase-client-1.1.2 cannot be found.
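
(A small debugging sketch, using only standard JDK APIs, that could be dropped into runSnappyJob to print which jar each of the two HBase classes above is actually resolved from; nothing here is SnappyData-specific.)

// Resolve each class by name against the context classloader and print the jar
// (code source) it was loaded from, to see which hbase-client version is visible.
def jarOf(cls: Class[_]): String = {
  val src = cls.getProtectionDomain.getCodeSource
  if (src != null) src.getLocation.toString else "(no code source)"
}

for (name <- Seq("org.apache.hadoop.hbase.util.Writables",
                 "org.apache.hadoop.hbase.client.ConnectionFactory")) {
  try {
    val cls = Class.forName(name, false, Thread.currentThread().getContextClassLoader)
    println(s"$name loaded from ${jarOf(cls)}")
  } catch {
    case e: ClassNotFoundException => println(s"$name not visible: $e")
  }
}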

stack trace:

17/07/03 16:36:36.590 HKT SnappyLeadJobServer-akka.actor.default-dispatcher-18<tid=0xf> INFO RemoteActorRefProvider$RemoteDeadLetterActorRef: Message [spark.jobserver.JobInfoActor$JobConfigStored$] from Actor[akka://SnappyLeadJobServer/user/job-info#-1542592448] to Actor[akka://SnappyLeadJobServer/deadLetters] was not delivered. [3] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.

classOf[Writables] => class org.apache.hadoop.hbase.util.Writables

17/07/03 16:36:36.593 HKT pool-24-thread-1<tid=0xf> INFO SparkContext: Removing jar spark://172.20.12.249:42449/jars/test126-2017-07-03T16_36_36.553+08_00.jar from SparkContext list
17/07/03 16:36:36.593 HKT pool-24-thread-1<tid=0xf> ERROR JobManagerActor: Got Fatal Exception:
java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/client/ConnectionFactory
        at deps.hbase.HBaseConnectionManagerJ.getConnFromPool(HBaseConnectionManagerJ.java:26)
        at deps.hbase.FeatureStoreService$.getFeatureBase64Bytes(FeatureStoreService.scala:19)
        at search26.local.SnappyJobHbaseSimpleTest$.foo(SnappyJobHbaseSimpleTest.scala:71)
        at search26.local.SnappyJobHbaseSimpleTest$.runSnappyJob(SnappyJobHbaseSimpleTest.scala:78)
        at org.apache.spark.sql.SnappySQLJob$class.runJob(SnappySessionFactory.scala:75)
        at search26.local.SnappyJobHbaseSimpleTest$.runJob(SnappyJobHbaseSimpleTest.scala:29)
        at spark.jobserver.JobManagerActor$$anonfun$spark$jobserver$JobManagerActor$$getJobFuture$4.apply(JobManagerActor.scala:327)
        at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
        at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
17/07/03 16:36:36.594 HKT pool-24-thread-2<tid=0xf> ERROR JobManagerActor: Exception from job 6adf2f40-19af-4cb8-8d4d-8ab0b54a3e8f:
java.lang.RuntimeException: java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/client/ConnectionFactory
        at deps.hbase.HBaseConnectionManagerJ.getConnFromPool(HBaseConnectionManagerJ.java:26)
        at deps.hbase.FeatureStoreService$.getFeatureBase64Bytes(FeatureStoreService.scala:19)
        at search26.local.SnappyJobHbaseSimpleTest$.foo(SnappyJobHbaseSimpleTest.scala:71)
        at search26.local.SnappyJobHbaseSimpleTest$.runSnappyJob(SnappyJobHbaseSimpleTest.scala:78)
        at org.apache.spark.sql.SnappySQLJob$class.runJob(SnappySessionFactory.scala:75)
        at search26.local.SnappyJobHbaseSimpleTest$.runJob(SnappyJobHbaseSimpleTest.scala:29)
        at spark.jobserver.JobManagerActor$$anonfun$spark$jobserver$JobManagerActor$$getJobFuture$4.apply(JobManagerActor.scala:327)
        at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
        at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
17/07/03 16:36:36.594 HKT SnappyLeadJobServer-akka.actor.default-dispatcher-18<tid=0xf> INFO JobStatusActor: Job 6adf2f40-19af-4cb8-8d4d-8ab0b54a3e8f finished with an error

sumwale (Contributor) commented Jul 3, 2017

@niko2014 Yeah, install-jar with a conflicting product jar will not work. Try putting it in the jars directory and removing the existing version.

jeromeheng commented Jul 3, 2017

@sumwale
Could you provide us a patch as a temporary solution? If not, when will a release with a fix for this issue be available?
Thanks :)

sumwale (Contributor) commented Sep 19, 2017

@rishitesh What's the plan to address such issues? Shouldn't we remove the hbase jars from the product jars, since HBase is not supported in SnappyData in any case (only rowstore builds should have it)? @ashetkar @kneeraj

pradeepcheers commented:

Facing the same error unfortunately. Couldn't find any solution to fix this issue so far!! :(
