splitNCigar needs Xmx argument #6
Comments
@denriquez what do you think? Do you know how much RAM is available on regular nodes?
The standard nodes have 48 GB per 16-core node. @tgenahmet Which script is this exactly?
pegasus_splitNCigarReads.pbs would need the -Xmx parameter.
Note to self: Testing on old project TGRB_0009_ps201612251310 with job id 3497089. Trying a standard node with -Xmx40G. Forgot the .bai file. Job ID now: 3498973
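For reference, the missing .bai index can be regenerated with samtools before resubmitting — a minimal sketch, with the BAM file name being hypothetical:

```shell
# Index the coordinate-sorted BAM so GATK can seek by region;
# writes sample.bam.bai next to the input (file name hypothetical)
samtools index sample.bam
```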
Unfortunately I have to re-open this issue. Regardless of the -Xmx parameter, some splitNCigar jobs are failing due to insufficient memory. I have even tried -maxInMemory 10000 (the default is 150K). Still the same outcome. We can go back to using hmem nodes, but this whole thread started because jobs were failing on hmem nodes as well.
The funny thing is that all of our samples also fail at the chr12:66 million position, just like the example above.
I tried excluding this region using the -XL option in GATK. It worked on one of the samples. I'm going to try it on about 8 other samples failing at the exact same spot. I don't know what's the deal with chr12 position 66 million... This is the option I'm using: -XL 12:65000000-67000000
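Put together with the heap setting discussed above, the exclusion might look like the following GATK 3 invocation — a sketch only; the reference, input, and output file names are hypothetical:

```shell
# -Xmx must come before -jar (it is a JVM argument, not a GATK one);
# -XL skips the problematic chr12 interval (file names hypothetical)
java -Xmx40g -jar GenomeAnalysisTK.jar \
  -T SplitNCigarReads \
  -R reference.fa \
  -I sample.bam \
  -o sample.split.bam \
  -XL 12:65000000-67000000
```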
Doesn’t look too special for any specific reason. Do samples always fail, or is it periodic?
— Jonathan Keats, Director of Bioinformatics & Assistant Professor, Translational Genomics Research Institute
Only some samples fail, but always at that spot. Skipping that section of chr12 seems to work so far.
This PBS script needs to define the -Xmx argument. Maybe then it wouldn't need to be on a highmem node?
" An error occurred because you did not provide enough memory to run this program. You can use the -Xmx argument (before the -jar argument) to adjust the maximum heap size provided to Java. Note that this is a JVM argument, not a GATK argument."
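Applying that advice to the PBS script could look like the fragment below — a sketch assuming the 48 GB / 16-core standard nodes mentioned earlier; resource directives and file names are hypothetical, not taken from pegasus_splitNCigarReads.pbs itself:

```shell
#PBS -l nodes=1:ppn=16,mem=48gb

# Cap the Java heap below the node's 48 GB to leave headroom for
# JVM and OS overhead; -Xmx goes before -jar, per the GATK error
# message above (tool paths and file names hypothetical)
java -Xmx40g -jar GenomeAnalysisTK.jar \
  -T SplitNCigarReads \
  -R reference.fa \
  -I sample.bam \
  -o sample.split.bam
```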