
Problem loading SNPeff annotated VCF #1390

ddemaeyer opened this issue Feb 14, 2017 · 3 comments

@ddemaeyer ddemaeyer commented Feb 14, 2017

Trying to load a SnpEff-annotated VCF into ADAM 0.21.0 using Scala 2.10. Loading VCFs works in ADAM 0.19.0 (also using Kryo 2.24), but with 0.21.0 I get the Kryo exception below:

org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 6,
at org.apache.spark.serializer.KryoDeserializationStream.readObject(KryoSerializer.scala:232)
at org.apache.spark.broadcast.TorrentBroadcast$.unBlockifyObject(TorrentBroadcast.scala:217)
at org.apache.spark.broadcast.TorrentBroadcast$$anonfun$readBroadcastBlock$1.apply(TorrentBroadcast.scala:178)
at org.apache.spark.util.Utils$.tryOrIOException(Utils.scala:1220)
at org.apache.spark.broadcast.TorrentBroadcast.readBroadcastBlock(TorrentBroadcast.scala:165)
at org.apache.spark.broadcast.TorrentBroadcast._value$lzycompute(TorrentBroadcast.scala:64)
at org.apache.spark.broadcast.TorrentBroadcast._value(TorrentBroadcast.scala:64)
at org.apache.spark.broadcast.TorrentBroadcast.getValue(TorrentBroadcast.scala:88)
at org.apache.spark.broadcast.Broadcast.value(Broadcast.scala:70)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
at org.apache.spark.executor.Executor$
at java.util.concurrent.ThreadPoolExecutor.runWorker(
at java.util.concurrent.ThreadPoolExecutor$
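For context, a minimal driver that exercises this code path might look like the sketch below. This is an assumption-laden reconstruction, not the reporter's actual code: the input path and app name are placeholders, and it assumes the ADAM 0.21.0 `ADAMContext` implicits with the standard Kryo registrator configuration from the ADAM docs.

```scala
import org.apache.spark.{ SparkConf, SparkContext }
import org.bdgenomics.adam.rdd.ADAMContext._

// Hypothetical minimal driver; path and app name are placeholders.
val conf = new SparkConf()
  .setAppName("load-snpeff-vcf")
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .set("spark.kryo.registrator",
       "org.bdgenomics.adam.serialization.ADAMKryoRegistrator")
val sc = new SparkContext(conf)

// loadVcf returns the variant contexts from the annotated VCF; forcing an
// action (e.g., count) is what would surface the deserialization failure
// on the executors, as in the stack trace above.
val variants = sc.loadVcf("annotated.snpeff.vcf")
println(variants.rdd.count())
```

Note that the failure appears during broadcast deserialization on the executor side, so a driver-side `loadVcf` call alone may succeed until an action triggers task execution.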


@fnothaft fnothaft commented Feb 18, 2017

Hi @ddemaeyer! Sorry for the slow reply; a few of us have been out of the office sporadically this week. Which version of Spark are you running, and how are you deploying it (e.g., on Mesos, on YARN)? Is this the only error you are seeing? Sometimes serialization fails and two errors are thrown: one where the serializer failed, and one where deserialization failed (the latter would be this error message).


@heuermh heuermh commented Apr 19, 2017

Ping @ddemaeyer. Have you tried reading the same file with ADAM version 0.22.0 or git HEAD?


@fnothaft fnothaft commented May 12, 2017

Ping @ddemaeyer. Are you still seeing this issue?

@fnothaft fnothaft closed this Jun 22, 2017
@heuermh heuermh modified the milestone: 0.23.0 Jul 22, 2017
@heuermh heuermh added this to Completed in Release 0.23.0 Jan 4, 2018