java.lang.ClassCastException: cannot assign instance of scala.collection.immutable.List$SerializationProxy to field org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$dependencies_ of type scala.collection.Seq in instance of org.apache.spark.rdd.MapPartitionsRDD #674
Comments
Have you tried with the default serializers, i.e. without specifying JavaSerializer in the config? I will check with JavaSerializer and revert back.
@rishitesh
By the way, the exception looks like this: Driver stacktrace:
Each time we submit a job, the previous job jar is removed from the classpath, so that users can safely put in some new changes. Hence every time we have to submit the job with the job jar. This is a known limitation. Please let me know how you are submitting the job the second time.
We run the same job again, and the second time the job throws the exception.
Ok. Will check. I am assuming you provide the --app-jar option the 2nd time as well.
@rishitesh
@jeromeheng Sorry, I missed the sentence where you mentioned that you are submitting jobs via the REST APIs. We have not tested much with the REST API way of submitting jobs. We have wrapped this mechanism in the snappy-job command, e.g. $SNAPPY_HOME/bin/snappy-job.sh submit --app-name CreatePartitionedRowTable --class org.apache.spark.examples.snappydata.CreatePartitionedRowTable --app-jar examples/jars/quickstart.jar --lead hostNameOfLead:8090. You should use this command to submit jobs. Please see http://snappydatainc.github.io/snappydata/programming_guide/#submitting-jobs for details. Let me know if I misunderstood anything.
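For readability, here is the submission command from the comment above as a shell sketch. SNAPPY_HOME and the lead hostname are placeholders you must adjust for your installation; the script only builds and prints the command, so nothing is executed against a real cluster:

```shell
# Build the supported job-submission command (flags taken from the comment above;
# SNAPPY_HOME and the lead host are assumptions/placeholders for your install).
SNAPPY_HOME="${SNAPPY_HOME:-/opt/snappydata}"
LEAD_HOST="hostNameOfLead:8090"

SUBMIT_CMD="$SNAPPY_HOME/bin/snappy-job.sh submit \
  --app-name CreatePartitionedRowTable \
  --class org.apache.spark.examples.snappydata.CreatePartitionedRowTable \
  --app-jar examples/jars/quickstart.jar \
  --lead $LEAD_HOST"

# Print rather than execute, since this is only a sketch.
echo "$SUBMIT_CMD"
```

Note that per the comments above, --app-jar must be passed on every submission, including resubmissions of the same job, because the previous job jar is removed from the classpath each time.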
Without the |
@niko2014 You can use our built-in install jar commands to load dependent jars to the Snappy servers.
@rishitesh
@niko2014 Yes, that might cause a problem, as the root classloader will have hbase classes from hbase-0.98.17.
@niko2014 I think it should work, so try it out. The GemFire layer uses hbase only for custom HDFS overflow, which is not available in SnappyData, so those classes should never get loaded otherwise. The requirement to provide "dependent-jar-uris" and an equivalent of the "--package" option is being tracked here: https://jira.snappydata.io/browse/SNAP-1529
@niko2014 Yes, SNAP-1529 is work in progress and not available in 0.9.
I installed the hbase-client-1.1.2 jar. By the way, I printed the stacktrace:
@niko2014 Yeah, install-jar with a conflicting product jar will not work. Try putting it in jars directory and removing the existing version. |
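A minimal sketch of that suggestion, simulated in a throwaway temp directory so it is safe to run as-is. The jar names mirror the thread, but the real product paths and exact jar file names are assumptions and will differ on an actual install:

```shell
# Simulate swapping the bundled hbase jar for the version you need.
# Everything happens in a temp directory; adapt the paths to your install.
WORK="$(mktemp -d)"
JARS_DIR="$WORK/jars"                      # stands in for the product's jars directory
mkdir -p "$JARS_DIR"
touch "$JARS_DIR/hbase-0.98.17.jar"        # stand-in for the conflicting bundled jar
touch "$WORK/hbase-client-1.1.2.jar"       # stand-in for the jar you actually need

rm "$JARS_DIR"/hbase-*.jar                 # remove the existing (conflicting) version
cp "$WORK/hbase-client-1.1.2.jar" "$JARS_DIR"/   # put your version in its place

ls "$JARS_DIR"                             # only the replacement jar remains
```

The point is that a conflicting jar must be replaced in the product's own jars directory rather than loaded on top via install-jar, so the root classloader never sees both versions.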
@sumwale
@rishitesh What's the plan to address such issues? Shouldn't we remove hbase jars from product jars since it is not supported in SnappyData in any case (and only rowstore builds should have it). @ashetkar @kneeraj |
Facing the same error unfortunately. Couldn't find any solution to fix this issue so far!! :( |
ClassCastException thrown when I run mapPartitions() again.
environment:
snappydata 0.9
4 machines:
node1: locator
node2: locator
node3: server
node4: server, lead
config:
spark.closure.serializer=org.apache.spark.serializer.JavaSerializer
spark.serializer=org.apache.spark.serializer.JavaSerializer
description:
my code:
I posted a new SnappyJob jar to the snappy-jobserver (via the REST API) and ran it the first time; it was fine.
create context:
POST node76:8090/contexts/context07?contextFactory=org.apache.spark.sql.SnappySessionFactory&dependent-jar-uris=/path/to/jars
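For context, the jobserver REST flow sketched as curl commands. The context-creation URL is taken from the report above; the job-submission POST is an assumption based on the spark-jobserver API that SnappyData's jobserver derives from, and the app/class names in it are placeholders. The commands are only printed here, not executed, since there is no live cluster:

```shell
# Host, context name, and dependent-jar path come from the report above.
LEAD="node76:8090"
CTX="context07"

# Step 1: create a long-lived context backed by SnappySessionFactory.
CREATE_CTX="curl -X POST '$LEAD/contexts/$CTX?contextFactory=org.apache.spark.sql.SnappySessionFactory&dependent-jar-uris=/path/to/jars'"

# Step 2 (assumption: standard spark-jobserver job API; appName and classPath
# are hypothetical placeholders, not from the report).
RUN_JOB="curl -X POST '$LEAD/jobs?appName=myApp&classPath=com.example.MyJob&context=$CTX'"

# Printed rather than executed, since this is only a sketch.
echo "$CREATE_CTX"
echo "$RUN_JOB"
```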
log:
But when I ran the job again, a java.lang.ClassCastException was thrown, and the SnappyExecutor said:
As some of the Jars have been deleted, setting up a new ClassLoader for subsequent Threads
(Maybe it is related to the ClassLoader, I guess.)
snappy-server log:
snappy-leader log: