ADAM to BAM conversion failing on 1000G file #1013

Closed
jpdna opened this Issue Apr 24, 2016 · 2 comments

jpdna (Member) commented Apr 24, 2016

Starting with file:

http://ftp.1000genomes.ebi.ac.uk/vol1/ftp/technical/pilot2_high_cov_GRCh37_bams/data/NA12878/alignment/NA12878.chrom22.ILLUMINA.bwa.CEU.high_coverage.20100311.bam

I was able to convert it to ADAM format successfully using adam-submit transform.
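
(That first conversion used a command of the same general form as the one below; paths here are illustrative, inferred from the failing command:)

../adam/bin/adam-submit -- transform NA12878.chrom22.ILLUMINA.bwa.CEU.high_coverage.20100311.bam /home/jp/Berk/work/issues/check_BAM_creation/adamtobamtest/adam4/NA12878.chr22.adam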

However, when I run adam-submit transform again to produce a BAM file from that ADAM file:

../adam/bin/adam-submit --executor-cores=6 --total-executor-cores=24 --driver-cores=4 --executor-memory=8g --driver-memory=4g --  transform /home/jp/Berk/work/issues/check_BAM_creation/adamtobamtest/adam4/NA12878.chr22.adam /home/jp/Berk/work/issues/check_BAM_creation/adamtobamtest/adamtobam4/outbam.bam -sort_reads -single

I get the error below:

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1283)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1271)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1270)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1270)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:697)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1496)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1458)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1447)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:567)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1824)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1837)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1914)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply$mcV$sp(PairRDDFunctions.scala:1055)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:998)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:998)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:310)
    at org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopDataset(PairRDDFunctions.scala:998)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$2.apply$mcV$sp(PairRDDFunctions.scala:938)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$2.apply(PairRDDFunctions.scala:930)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$2.apply(PairRDDFunctions.scala:930)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:310)
    at org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopFile(PairRDDFunctions.scala:930)
    at org.apache.spark.rdd.InstrumentedPairRDDFunctions.saveAsNewAPIHadoopFile(InstrumentedPairRDDFunctions.scala:487)
    at org.bdgenomics.adam.rdd.read.AlignmentRecordRDDFunctions$$anonfun$saveAsSam$1.apply(AlignmentRecordRDDFunctions.scala:417)
    at scala.Option.fold(Option.scala:157)
    at org.apache.spark.rdd.Timer.time(Timer.scala:48)
    at org.bdgenomics.adam.rdd.read.AlignmentRecordRDDFunctions.saveAsSam(AlignmentRecordRDDFunctions.scala:273)
    at org.bdgenomics.adam.rdd.read.AlignmentRecordRDDFunctions.maybeSaveBam(AlignmentRecordRDDFunctions.scala:93)
    at org.bdgenomics.adam.rdd.read.AlignmentRecordRDDFunctions.save(AlignmentRecordRDDFunctions.scala:217)
    at org.bdgenomics.adam.cli.Transform.run(Transform.scala:344)
    at org.bdgenomics.utils.cli.BDGSparkCommand$class.run(BDGCommand.scala:54)
    at org.bdgenomics.adam.cli.Transform.run(Transform.scala:119)
    at org.bdgenomics.adam.cli.ADAMMain.apply(ADAMMain.scala:133)
    at org.bdgenomics.adam.cli.ADAMMain$.main(ADAMMain.scala:73)
    at org.bdgenomics.adam.cli.ADAMMain.main(ADAMMain.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:674)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:120)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: scala.MatchError: Z:I, (of class java.lang.String)
    at org.bdgenomics.adam.util.AttributeUtils$.createAttribute(AttributeUtils.scala:92)
    at org.bdgenomics.adam.util.AttributeUtils$.parseAttribute(AttributeUtils.scala:74)
    at org.bdgenomics.adam.util.AttributeUtils$$anonfun$parseAttributes$2.apply(AttributeUtils.scala:61)
    at org.bdgenomics.adam.util.AttributeUtils$$anonfun$parseAttributes$2.apply(AttributeUtils.scala:61)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
    at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
    at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
    at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:108)
    at org.bdgenomics.adam.util.AttributeUtils$.parseAttributes(AttributeUtils.scala:61)
    at org.bdgenomics.adam.rich.RichAlignmentRecord.tags$lzycompute(RichAlignmentRecord.scala:79)
    at org.bdgenomics.adam.rich.RichAlignmentRecord.tags(RichAlignmentRecord.scala:79)
    at org.bdgenomics.adam.converters.AlignmentRecordConverter$$anonfun$convert$1.apply(AlignmentRecordConverter.scala:188)
    at org.bdgenomics.adam.converters.AlignmentRecordConverter$$anonfun$convert$1.apply(AlignmentRecordConverter.scala:98)
    at scala.Option.fold(Option.scala:157)
    at org.apache.spark.rdd.Timer.time(Timer.scala:48)
    at org.bdgenomics.adam.converters.AlignmentRecordConverter.convert(AlignmentRecordConverter.scala:98)
    at org.bdgenomics.adam.rdd.read.AlignmentRecordRDDFunctions$$anonfun$convertToSam$1$$anonfun$8.apply(AlignmentRecordRDDFunctions.scala:590)
    at org.bdgenomics.adam.rdd.read.AlignmentRecordRDDFunctions$$anonfun$convertToSam$1$$anonfun$8.apply(AlignmentRecordRDDFunctions.scala:587)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12$$anonfun$apply$4.apply$mcV$sp(PairRDDFunctions.scala:1035)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12$$anonfun$apply$4.apply(PairRDDFunctions.scala:1034)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12$$anonfun$apply$4.apply(PairRDDFunctions.scala:1034)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1206)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12.apply(PairRDDFunctions.scala:1042)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12.apply(PairRDDFunctions.scala:1014)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:88)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
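
The proximate failure is the scala.MatchError raised in AttributeUtils.createAttribute: the type token parsed from a serialized "TAG:TYPE:VALUE" attribute string was "Z:I,", which no case of the match covers. Below is a minimal, hypothetical sketch of that failure mode; it is not ADAM's actual AttributeUtils code, just an illustration of how a non-exhaustive String match produces exactly this kind of runtime error:

// Hypothetical sketch, not ADAM's AttributeUtils: a non-exhaustive
// match over the TYPE token of a "TAG:TYPE:VALUE" attribute string
// throws scala.MatchError at runtime on any token it does not cover.
object AttributeSketch {
  def createAttribute(typeToken: String, value: String): Any =
    typeToken match {
      case "i" => value.toInt   // 32-bit signed integer
      case "f" => value.toFloat // single-precision float
      case "Z" => value         // printable string
      case "A" => value.head    // single character
      // SAM array tags (type B) carry a subtype and comma-separated
      // values, e.g. "ZC:B:I,1,2,3". If splitting the serialized
      // string on ':' ever yields a compound token such as "B:I" or
      // "Z:I," (as in the trace above), no case matches and Scala
      // throws: scala.MatchError: Z:I, (of class java.lang.String)
    }

  def main(args: Array[String]): Unit = {
    println(createAttribute("i", "0"))      // prints 0
    println(createAttribute("Z:I,", "1,2")) // throws scala.MatchError: Z:I,
  }
}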

heuermh (Member) commented Jul 6, 2016

Could you update this issue with enough details to reproduce? I assume you were running with git HEAD at the time?

fnothaft (Member) commented Jul 17, 2016

This is a duplicate of #1061. Closing. I'll have a PR for #1061 today.

fnothaft closed this Jul 17, 2016
