
memory usage issue #7

Open
bioinfograd opened this issue Apr 17, 2020 · 1 comment

@bioinfograd

Hello, I have 11 samples from 1 patient, so I am trying to run an nd analysis. I ran this successfully on 3 samples, but when I increase the number of samples, the vector created during "Estimating density for all MCMC iterations..." becomes gigantic. This is being run on 300 mutations. Is there any way to reduce this or bypass this? I would love to get the mutation clusters and the CCF of each cluster in each sample for the 1 patient.

Running all 11 samples error message:
Error in array(NA, c(gridsize, length(sampledIters))) :
vector is too large
Calls: RunDP ... multiDimensionalClustering -> Gibbs.subclone.density.est.nd -> array

Running all 10 samples error message:
Error: cannot allocate vector of size 13301026.9 Gb

Running all 9 samples error message:
Error: cannot allocate vector of size 738945.9 Gb
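
For what it's worth, the requested allocation grows by roughly a factor of 18 per additional sample (738945.9 Gb for 9 samples vs 13301026.9 Gb for 10), which looks consistent with a dense grid that keeps a fixed number of points along every sample dimension. A back-of-the-envelope sketch of the array size, using made-up numbers for the per-dimension resolution and the number of retained MCMC iterations (I have not checked the actual values inside Gibbs.subclone.density.est.nd):

```r
# Rough estimate of the density array DPClust tries to allocate.
# All numbers below are assumptions for illustration only.
points_per_dim <- 20     # assumed grid points per sample dimension
n_samples      <- 11     # samples for this patient
sampled_iters  <- 500    # assumed number of retained MCMC iterations

grid_cells <- points_per_dim ^ n_samples          # cells in the n-dimensional grid
bytes      <- grid_cells * sampled_iters * 8      # 8 bytes per double
cat(sprintf("~%.0f Gb\n", bytes / 1024^3))        # grows exponentially with n_samples
```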

Any advice you can give would be greatly appreciated.

@bioinfograd
Author

After going through it more, it would be helpful to add arguments for resolution and maxburden that can be set at the start when running dpclust_pipeline.R.
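
Something along these lines is what I had in mind (just a sketch; it assumes the pipeline script parses its options with optparse, and the option names --resolution and --maxburden, as well as how they would be threaded through, are my own guesses rather than existing DPClust parameters):

```r
# Hypothetical additions to dpclust_pipeline.R's option list.
library(optparse)

option_list <- list(
  make_option("--resolution", type = "integer", default = 100,
              help = "Density grid points per sample dimension [default %default]"),
  make_option("--maxburden", type = "numeric", default = 1.0,
              help = "Maximum mutation burden (CCF) covered by the grid [default %default]")
)

opts <- parse_args(OptionParser(option_list = option_list))

# These values would then need to be passed through to the density estimation
# step (e.g. the grid size used in Gibbs.subclone.density.est.nd), so that a
# coarser grid or smaller burden range keeps the n-dimensional array small.
```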
