Added doubleMem functionality for LSF jobs that die due to reaching the imposed memory limit #3313
In genomics pipelines, especially during post-processing, the memory requirements of programs are often not well-defined and can depend on the data. This PR adds a `--doubleMem` command-line flag for the LSF scheduler: when a job dies because it reached the LSF-imposed memory limit, the job is resubmitted with double the memory. This functionality is especially useful on on-prem clusters. This addresses #3269, an issue I opened.
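The resubmission logic described above can be sketched roughly as follows. This is a minimal illustration, not the PR's actual implementation: the function names and the `cluster_max_mib` cap are hypothetical, though `TERM_MEMLIMIT` is the termination reason LSF reports (e.g. in `bjobs -l` output) when it kills a job for exceeding its memory limit.

```python
"""Hypothetical sketch of the --doubleMem retry logic for LSF."""

# Termination reason LSF reports when a job is killed for exceeding
# its memory limit (visible in e.g. `bjobs -l` output).
LSF_MEMKILL_REASON = "TERM_MEMLIMIT"


def killed_for_memory(exit_reason: str) -> bool:
    """True if LSF's reported termination reason indicates a memory-limit kill."""
    return LSF_MEMKILL_REASON in exit_reason


def doubled_memory(requested_mib: int, cluster_max_mib: int) -> int:
    """Double the memory request, capped at a per-job maximum (hypothetical cap)."""
    return min(requested_mib * 2, cluster_max_mib)


def resubmit_memory(exit_reason: str, requested_mib: int,
                    cluster_max_mib: int = 512_000,
                    double_mem: bool = True):
    """Return the memory (MiB) to resubmit the job with, or None to give up.

    Only memory-limit kills trigger a resubmission, and only while the
    request can still grow toward the cap.
    """
    if (double_mem
            and killed_for_memory(exit_reason)
            and requested_mib < cluster_max_mib):
        return doubled_memory(requested_mib, cluster_max_mib)
    return None  # any other failure: do not resubmit with more memory
```

A job that failed with any other reason (e.g. a runtime limit) is left alone, so the doubling only kicks in for the data-dependent memory blowups the flag is meant to handle.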