Any ideas about the runtime? #2
2.5 mil is quite a lot more than what I tested it with (up to 10k). But it tells you that 1722860 genes are empty, so you're actually "just" testing 800k genes.
By extrapolation I would say it should take around 40 min to run the t-test and Wilcoxon test on 800k genes with R=1 and 1 core. Increase it to 2 cores and the runtime is about half. Once you have confirmed that it runs within an acceptable time, you can add other tests. If it's RNA-seq data, try adding the edgeR methods ("ere", "ere2", "erq", "erq2"). Cheers
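The suggestion above could be sketched roughly as below, assuming DAtest is installed and `counts` (a features-by-samples matrix) and `group` (the sample grouping) are placeholders for your own data:

```r
# Sketch only: `counts` and `group` stand in for your own data.
library(DAtest)

# Start with the fast tests, plus the edgeR methods for RNA-seq:
res <- testDA(counts,
              predictor = as.factor(group),
              tests = c("ttt", "wil", "ere", "ere2", "erq", "erq2"),
              R = 1,        # a single spike-in run, for speed
              cores = 2)    # roughly halves the wall-clock time
summary(res)
```

Once this completes in acceptable time, `R` and the `tests` vector can be widened.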
Hi again, I have added a new function, runtimeDA, to estimate the runtime of the different methods. This will tell you whether some methods are simply too slow on your dataset. You can then use the tests argument to specify the methods that you want to include. Cheers,
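A minimal sketch of that workflow, again assuming `counts` and `group` are placeholders for your own data:

```r
# Sketch: estimate per-method runtime on subsets before committing
# to a full run on a very large dataset.
library(DAtest)

rt <- runtimeDA(counts, predictor = as.factor(group))
rt  # inspect the estimates, then pass only the fast methods to testDA's `tests`
```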
Thanks :). (If it doesn't stop within the next week, I'll try with more cores; I just need to monitor the RAM usage to get an estimate.)
FYI: It ran for 28 days on 10 cores, with a maximum of 112 GB RAM. And then I made an error in my R script and the whole thing aborted; stupid me.
Wow, well at least it finished. An AUC of 0.6 is a bit low. You might want to spike some more features when the dataset is that large. Also, you can subset to only the fast methods. This for example: That being said, if you have the plot it should be fine, so long as it treated your predictor as categorical. If the code is as in your first post it would be wrong; you should wrap the predictor in as.factor(). Also, if you run it again, install DAtest again to get the latest version. Cheers
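The as.factor() point matters because a numeric 0/1 predictor would be treated as a quantitative variable rather than two groups. A sketch, with `counts` and the group labels as placeholder data:

```r
library(DAtest)

# Hypothetical 8-sample two-group design; numeric labels must be
# converted to a factor, or they are modeled as a continuous covariate.
group <- as.factor(c(0, 0, 0, 0, 1, 1, 1, 1))

res <- testDA(counts, predictor = group,
              tests = c("ttt", "wil"), R = 1)
```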
Thanks :). I changed the predictor before I ran the test, because of what you said. The output is:
My R is not brilliant, but I can at least tell that the syntax itself is right, and I also checked testDA.Rd; there don't seem to be any issues.
Ah, it's because you have to enter "y" (without quotation marks) after you run the testDA function: when you're running with many cores (>10), you have to confirm the run. It's just an extra check to ensure that a server isn't accidentally overloaded with parallel workers.
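In other words, the interactive session looks roughly like this (a sketch; `counts` and `group` are placeholder data, and the exact prompt wording may differ):

```r
library(DAtest)

# With more than 10 cores, testDA pauses and asks for confirmation
# before spawning the parallel workers.
res <- testDA(counts, predictor = as.factor(group), cores = 12)
# The console then prints a confirmation question; type y and press
# Enter to let the run proceed.
```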
Aaaah, thanks. It has now finished. The AUC is still at most 0.6, and since I now also got the table, I see that the spike.detect.rate is constantly 0, except for one case, where it's 0.0333.
You can just run like this: You should not expect to get a very high AUC or spike.detect.rate for data of this size. You can try k=c(500,500,500) and effectSize = 5. As long as the spike.detect.rate is above zero for the method that has FPR < 0.05 and the highest AUC, I would go with that.
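Those settings could be sketched as follows (`counts` and `group` again stand in for the real data):

```r
library(DAtest)

# Sketch: for a very large dataset, spike in more features and use a
# larger effect size so the spiked features are actually detectable.
# k gives the number of spike-ins in each of three abundance segments;
# effectSize is the fold change applied to them.
res <- testDA(counts,
              predictor = as.factor(group),
              k = c(500, 500, 500),
              effectSize = 5,
              tests = c("ttt", "wil"),  # restrict to fast methods
              R = 1,
              cores = 2)
summary(res)
```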
Hi everyone,
yes, this is not really an issue. I just wanted to ask whether you have an idea what the runtime of a normal analysis could be.
I started a test run (first time I'm trying this tool, so I'm not sure what I'm doing) with admittedly pretty big input data (2.5 million genes, 8 samples), and it is currently still running (22 h).
Is this normal/possible, or should I be worried?
The code and output are currently this, and according to top my R instance is still running:
Unrelated: Greetings to Martin M., from whose poster I saw the link :).