error in optimize #1
Comments
I encountered a similar error message just a few minutes after starting the algorithm. I used a very sparse dataset with ~10000 cells.
Help appreciated. Thanks! |
Sorry for the late response. I will take a look and will let you know when it is fixed. |
SAVER v0.1.3 should be able to solve the issue. Please let me know if this same problem occurs. |
Thanks very much for addressing the issue. I tried to update the package, but whatever I did, packageVersion("SAVER") always returned 0.1.2. I am running Microsoft R Open 3.4.0. I tried the following:
none of it worked. EDIT: I found that the version number is not updated in the DESCRIPTION file, so the installation probably worked but didn't solve the problem. I am still getting:
I sent you my dataset via email to make it easier to address the issue. Best |
Hi Max, Apologies. I forgot to update the version number in the DESCRIPTION file. However, it is still concerning that the issue is not resolved. I will try to run it on your dataset and will let you know how it goes. Mo |
Hi Max, Sorry for the lengthy turnaround. I updated the package to version 0.2.0, which I was able to run without errors on your dataset. Let me know if you're able to run it as well. Mo |
Hey Mo,
thank you so much for addressing the issue. I can't quite get it to work
yet. I ran the following commands:
library(doParallel)
library(SAVER)
my.data <- read.delim("star_gene_exon_tagged.dge.10681cells.txt.gz",
header = TRUE)
rownames(my.data) <- my.data[,1]
my.data <- my.data[,2:ncol(my.data)]
registerDoParallel(cores = 8)
my.data.normalized <- saver(my.data, parallel = TRUE)
It returns the following:
Removing 3 cells with zero expression. Calculating predictions...
number of observations in y (1) not equal to the number of rows of x (10678)
argument is not numeric or logical: returning NA
number of observations in y (1) not equal to the number of rows of x (10678)
argument is not numeric or logical: returning NA
[the two messages above repeat three more times]
Error in if (var(mu) == 0) { : missing value where TRUE/FALSE needed
I run Microsoft R Open 3.4.0. Since it worked in your environment, maybe
there is an issue with the way I import the data?
Best regards
Max
|
Hi Max, Try running the saver function on as.matrix(my.data), i.e., my.data.normalized <- saver(as.matrix(my.data), parallel = TRUE). Mo |
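The fix works because read.delim() returns a data.frame, while saver() needs a purely numeric matrix. A minimal sketch of the coercion, using made-up data in place of the actual file:

```r
# Sketch with made-up data: read.delim() yields a data.frame whose first
# column holds gene names; saver() expects a numeric genes-by-cells matrix.
df <- data.frame(GENE = c("g1", "g2"), cell1 = c(0L, 3L), cell2 = c(5L, 0L))
rownames(df) <- df[, 1]
df <- df[, 2:ncol(df)]        # drop the gene-name column, as in Max's commands
mat <- as.matrix(df)          # data.frame -> numeric matrix for saver(mat, ...)
stopifnot(is.matrix(mat), is.numeric(mat))
```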
Thanks for pointing that out! I must have changed that somewhere along the
way while playing around with the function. Now it's running without
throwing an error so far.
|
How long did my dataset run on your machine? It has been running for 40h
now on 8 cores. Thanks!
|
Thanks! Our initial tests indicate that the problem is resolved. @MaxKman FYI, it takes approximately 24h on 64 cores with ~8k detected genes. |
Hi Max, I only ran it on 100 genes on 10 cores, which took about 3 hours (although the posterior calculation was performed on all ~20,000 genes). I would guess it might take around 60-80 hours running on 8 cores. I'm currently working on ways to speed up the program so look out for improvements in the coming versions! |
Thank you for all the help so far. I let saver run over the weekend on 10 cores. When I returned to work, I found the following error:
Help appreciated! Best |
Hi Max, Sorry for the inconvenience. Could you provide the command that you used to run saver and the version? Thanks, |
Hey Mo, I used saver 0.2.0 and the following commands:
The output was the following:
|
Hi Mo, I'm a collaborator of OP, and we ran into another error with SAVER. We got it to work on one dataset fine a couple weeks ago, but when we tried to use it on another dataset, we got the following error:
Any insights on how to fix this? Thanks! Nicole |
Hi Max, I ran SAVER 0.2.1 on a subset of the dataset and was able to get it to run without any errors. Could you try running it on a subset using your current version SAVER 0.2.0 to see if you get the same error and then try updating to SAVER 0.2.1? Sorry again for the repeated issues. Mo |
Hi Nicole, It appears that you're using an older version of SAVER. Try reinstalling SAVER and run it on a subset of the dataset to see if it works, and if it works then try running it on the full dataset. Please let me know if you are still getting an error. Mo |
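Running on a subset, as suggested above, needs no special SAVER option; slicing the genes-by-cells matrix is enough. A sketch with simulated counts (the saver() call itself is left commented out, since it is the expensive step):

```r
set.seed(1)
# Simulated genes-by-cells count matrix standing in for the real data
mat <- matrix(rpois(200 * 50, lambda = 2), nrow = 200,
              dimnames = list(paste0("g", 1:200), paste0("c", 1:50)))
sub <- mat[sample(nrow(mat), 100), ]   # random subset of 100 genes
stopifnot(nrow(sub) == 100, ncol(sub) == 50)
# out <- saver(sub, parallel = TRUE)   # scale up to 'mat' once this succeeds
```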
Hi Mo, I ran a small subset of my dataset with SAVER 0.2.1 and it went through fine. After that I tried the whole dataset; it ran for 5 days on 10 cores before finally returning an error. See below:
The commands I used are the same as posted above. |
Hi Max, Sorry for the repeated errors. I will try running it on the entire dataset and will get back to you when it's finished. Mo |
Great, thank you! |
Hi Max, I updated SAVER to version 0.2.2 and was able to run it on your dataset without any problems. Hopefully it will finally work for you. Mo |
Hi Mo, thanks very much! I will attempt another run tomorrow. Did you by any chance save the results from your run, and could you send them to me? Best |
Hi Max, Sure, I will email you a link. Mo |
Hi Mo, this time everything worked without throwing an error. I had to set nzero = 50 like you did for it to work though. Thank you for your help! Best |
Hi Max, Thanks for the response. Did it not work when nzero was not specified for SAVER version 0.2.2? Thanks, |
Exactly. Unfortunately I didn't log the error message this time, but from what I remember it was similar to the one before.
Thanks Max for bringing this to my attention. I'll try and see what the problem is. |
Hi Mo, I got the same error as Max even though I set nzero (10 to match my other analyses). I verified that I was using version 0.2.2.
|
Hi Nicole, Sorry for the error. Do you mind sharing the dataset so that I can try to diagnose the problem? Thanks, |
Hi Mo, I emailed you yesterday with the dataset. Please let me know if you got it. Thanks! |
Hi Nicole, Thanks for emailing me the dataset! I'm currently running it and will hopefully identify the error soon. Mo |
Those types of errors are commonly seen in parallel computation when jobs die unexpectedly (ie due to lack of memory). Perhaps reducing the number of cores down from 64 would help? |
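Lowering the worker count is a one-line change where the cluster is registered; a sketch assuming doParallel, as in the commands Max posted earlier:

```r
library(doParallel)
cl <- makeCluster(4)            # fewer workers -> lower peak memory use
registerDoParallel(cl)
stopifnot(getDoParWorkers() == 4)
# my.data.normalized <- saver(mat, parallel = TRUE)
stopCluster(cl)
```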
@fanli-gcb Thanks for the suggestion! I'll try that now, hopefully that will fix the problem. |
@fanli-gcb Thanks for pointing this out. Indeed, this seemed to be where the bottleneck was, since Reduce was being used to combine the list of lists, which is computationally intensive. I have updated the combine function to use unlist instead, which is much faster. The changes can be found in SAVER version 0.3.0. @nicolee-mctp I ran your dataset with SAVER version 0.3.0 and was able to get results without any issues. I sent you an email with a link to the results. |
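The Reduce-to-unlist change Mo describes can be illustrated in isolation. A sketch with illustrative names (not SAVER internals) showing that the two approaches produce identical flattened results, while the single-pass unlist avoids repeatedly copying a growing accumulator:

```r
# 100 parallel "chunks", each a list of 50 per-gene result vectors
chunks <- lapply(1:100, function(j) lapply(1:50, function(i) rep(j, 3)))

# Repeated concatenation: the accumulator is copied and regrown each step
slow <- Reduce(c, chunks)

# Single pass: flatten exactly one level of nesting in one call
fast <- unlist(chunks, recursive = FALSE)

stopifnot(identical(slow, fast), length(fast) == 100 * 50)
```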
I tried running SAVER on a dataset of ~1500 cells. After approx. 24h on 64 cores the program crashed with the following message:
Thanks!