broadinstitute/picard
java.lang.OutOfMemoryError: Java heap space #1221
Comments
Try lowering the MAX_RECORDS_IN_RAM argument.
On Tue, Aug 28, 2018 at 10:34 PM, renny817 wrote:
Bug Report
Affected tool(s)
Mark Duplicates
Affected version(s)
Picard version: 2.18.11
Using Java 8u151
Description
The error I keep getting is:
INFO 2018-08-29 11:58:12 MarkDuplicates Tracking 2186445 as yet unmatched pairs. 117336 records in RAM.
INFO 2018-08-29 11:58:21 MarkDuplicates Read 489,000,000 records. Elapsed time: 00:58:09s. Time for last 1,000,000: 8s. Last read position: chr8:15,392,138
INFO 2018-08-29 11:58:21 MarkDuplicates Tracking 2186162 as yet unmatched pairs. 114912 records in RAM.
[Wed Aug 29 11:58:39 AEST 2018] picard.sam.markduplicates.MarkDuplicates done. Elapsed time: 58.48 minutes.
Runtime.totalMemory()=954728448
To get help, see http://broadinstitute.github.io/picard/index.html#GettingHelp
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at java.lang.reflect.Array.newArray(Native Method)
at java.lang.reflect.Array.newInstance(Array.java:75)
at java.util.Arrays.parallelSort(Arrays.java:1178)
at htsjdk.samtools.util.SortingCollection.spillToDisk(SortingCollection.java:248)
at htsjdk.samtools.util.SortingCollection.add(SortingCollection.java:183)
at picard.sam.markduplicates.MarkDuplicates.buildSortedReadEndLists(MarkDuplicates.java:590)
at picard.sam.markduplicates.MarkDuplicates.doWork(MarkDuplicates.java:232)
at picard.cmdline.CommandLineProgram.instanceMain(CommandLineProgram.java:277)
at picard.cmdline.PicardCommandLine.instanceMain(PicardCommandLine.java:103)
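For context on the top frames of that trace: when given a large object array, java.util.Arrays.parallelSort performs a parallel merge sort that allocates a working array of the same length, so peak memory during a spill is roughly double the buffered records. This is a general property of the JDK method, not a diagnosis specific to this report. A minimal illustration of the call itself:

```java
import java.util.Arrays;

public class ParallelSortDemo {
    public static void main(String[] args) {
        // For large object arrays, parallelSort's merge step allocates a
        // working array of the same length -- the allocation that fails in
        // the trace above. Tiny arrays like this one are sorted sequentially.
        Integer[] data = {5, 3, 1, 4, 2};
        Arrays.parallelSort(data);
        System.out.println(Arrays.toString(data)); // prints [1, 2, 3, 4, 5]
    }
}
```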
Steps to reproduce
I am using the command:
java -Xmx60g -jar picard.jar MarkDuplicates \
INPUT=$BAM_DIR$TUMOR$SORTED \
OUTPUT=$BAM_DIR$TUMOR$SUFFIX \
METRICS_FILE=$BAM_DIR$TUMOR$SUFFIX.metrics.txt \
VALIDATION_STRINGENCY=LENIENT \
TMP_DIR=$TMP_DIR
Expected behavior
I ran this exact BAM file a month ago with 8 GB, the same parameters, and no problems.
Actual behavior
As above
Thank you for your help!
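As a sketch of that suggestion, the reporter's command with an explicit MAX_RECORDS_IN_RAM might look like the following. The file paths and the record count are illustrative placeholders, not values from this thread; Picard's default for MAX_RECORDS_IN_RAM is 500000, and lowering it makes each in-memory sort batch smaller before it spills to disk.

```shell
java -Xmx60g -jar picard.jar MarkDuplicates \
    INPUT=tumor.sorted.bam \
    OUTPUT=tumor.dedup.bam \
    METRICS_FILE=tumor.dedup.metrics.txt \
    MAX_RECORDS_IN_RAM=200000 \
    VALIDATION_STRINGENCY=LENIENT \
    TMP_DIR=/path/to/tmp
```

The trade-off is more temporary spill files in TMP_DIR in exchange for a lower peak heap.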
Done, thank you!
Hi YONG,
You can use the Java argument -Xmx to increase the amount of heap memory, e.g. -Xmx2g. I use -Xmx100g on a system with 1 TB of RAM. Make sure you know how much memory your system has available.
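To confirm that an -Xmx setting actually took effect, a small standalone check like this can help (a generic sketch using the standard Runtime API, not part of Picard; the class name is made up):

```java
// Run with e.g. `java -Xmx2g HeapCheck` and compare the printed value
// against the -Xmx flag you passed.
public class HeapCheck {
    public static void main(String[] args) {
        // maxMemory() reports the largest heap the JVM will attempt to use.
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.println("Max heap: " + (maxBytes / (1024 * 1024)) + " MB");
    }
}
```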
This issue was closed.