Multithreading for scanpy.tl.rank_genes_group? #2390
Hey, in principle this sounds good, but I'd like to hear a little bit more about the use case. For context on our side, there are some other paths for speeding up DE available (probably some form of calculating statistics via scverse/anndata#564). There's also increased momentum on more featureful DE in the scverse ecosystem. If you are specifically looking for faster scanpy DE, this makes sense, though there may be some easier paths forward (at least to me). If you need anything fancier or even just different, it could be good to check in with other efforts, e.g. pertpy.
Hi @ivirshup,

Thanks for the help. In terms of the use cases here:

(1) Any user doing data processing or interactive analysis could benefit from multithreading here. Consider the two big for-loops which iterate through all of the genes being compared between the samples, and the for-loop which automatically does this for each "group" in the scanpy object. I'm a bit confused why Seurat or scanpy never did this... but then I realized that Pagoda2 didn't either: https://github.com/kharchenkolab/pagoda2/blob/main/R/Pagoda2.R#L900 (there's a bit of multithreading there at the end). Given the file sizes nowadays and the number of "groups", this is getting fairly computationally intensive. It's one of those simple things your biologists will love ("this is so fast now!").

(2) In terms of our use case, an interactive way to run DE via the client is too slow. We've just started to implement the above ourselves.

RE: pertpy, how does this relate to @davidsebfischer and diffxpy?

Best, Evan
I agree it doesn't hurt to have
Diffxpy is currently being reimplemented. Once it is released, it would likely be included in pertpy as an additional method. I.e., pertpy is more general and strives to provide a consistent interface to multiple methods.
My concern is that there will be issues if you keep the current calculations but parallelize over the groups. Within that loop, I believe large amounts of memory can be allocated; if it's "group vs rest", at least one large allocation is made per group. If you parallelize over groups, the max memory usage can now scale with the number of groups being processed concurrently.

(embedded permalink: scanpy/scanpy/tools/_rank_genes_groups.py, lines 164 to 178 in d26be44)
Another memory-related concern comes from ... So while I think we can absolutely make use of more processing power here, I think we need to consider the approach.
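To make the memory concern concrete, here is a back-of-envelope estimate. All numbers (dataset shape, dtype, degree of parallelism) are illustrative assumptions, not measurements of scanpy internals; the point is only that any per-group dense allocation gets multiplied by the number of groups running concurrently.

```python
# Rough peak-memory estimate if each group's "group vs rest" computation
# densifies an (n_obs, n_genes) slice. Numbers are illustrative assumptions.
n_obs, n_genes = 500_000, 20_000
bytes_per_value = 8  # float64

per_group_gb = n_obs * n_genes * bytes_per_value / 1e9
print(f"one dense slice: ~{per_group_gb:.0f} GB")
print(f"8 groups in parallel: ~{8 * per_group_gb:.0f} GB peak")
```

Under these assumptions, naively running 8 groups at once raises peak memory roughly eightfold, which is why the approach (chunking, shared read-only arrays, or streaming statistics) matters as much as the thread count.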
What is the interface here? Scanpy computes results for all groups at once, but in most interfaces I've used you can only really "ask" for one comparison at a time. This could also be much faster, if you can just reduce total computation.
Partially, I'm not sure what comparisons are actually being run. I was also wondering if you'd benefit from something fancy like a covariate.
As a heads up, I'm unaware of a timeline here.
Hi ScanPy team
I emailed @ivirshup but others should be involved I think.
This function would be useful if we could specify the number of threads to use: https://scanpy.readthedocs.io/en/stable/generated/scanpy.tl.rank_genes_groups.html
Based on the number of items in the "groupby" field, we could use a basic split-merge approach here: each thread would take several of these items (the calculations are entirely independent of one another), and when each is completed we would join and concatenate the results.
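A minimal sketch of that split-merge idea, using a Wilcoxon rank-sum test as the per-group statistic. The function names (`rank_one_group`, `rank_genes_parallel`) and the `n_jobs` parameter are illustrative assumptions, not scanpy API; note also that for pure-Python loops a process pool may be needed for real speedups, since threads contend on the GIL outside of NumPy/SciPy calls.

```python
# Sketch only: parallel "group vs rest" DE with a split-merge pattern.
# Each group's test is independent, so groups can be farmed out to a pool
# and the per-group results merged at the end.
from concurrent.futures import ThreadPoolExecutor

import numpy as np
from scipy import stats


def rank_one_group(X, labels, group):
    """Wilcoxon rank-sum statistic of `group` cells vs the rest, per gene."""
    in_group = labels == group
    scores = np.array(
        [
            stats.ranksums(X[in_group, g], X[~in_group, g]).statistic
            for g in range(X.shape[1])
        ]
    )
    return group, scores


def rank_genes_parallel(X, labels, n_jobs=4):
    """Split groups across a thread pool, then merge the results."""
    groups = np.unique(labels)
    with ThreadPoolExecutor(max_workers=n_jobs) as pool:
        results = dict(pool.map(lambda g: rank_one_group(X, labels, g), groups))
    return results
```

Usage would look like `rank_genes_parallel(adata.X, adata.obs["groupby_field"].to_numpy(), n_jobs=8)`, with the caveat about per-group memory raised elsewhere in this thread.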
I'm happy to help write up a PR (or participate), but I'd like to hear if this is something you'd be willing to prioritize. (It's related to a project where Fabian is the PI.)
Best, Evan