Dataset not integrating when increasing features. #25
Hello, in general I would be very careful of using more than 2000-3000 genes as variable features. It is true that increasing this number will include more lowly expressed genes, but it will also include many uninformative genes, resulting in a noisy selection of variable genes. If your rare genes do not appear in the top 2000-3000 most variable genes, it is unlikely they will contribute significantly to defining the integrated space. That said, if you are interested in obtaining corrected counts for all genes, as per your question, you could try the following:
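Based on the pipeline posted later in this thread, the two-step approach being suggested presumably looks something like the sketch below: compute anchors on a few thousand variable features, then request corrected counts for the full gene set via features.to.integrate. Here `obj`, the split column "Dataset", and `all.genes` are placeholders, not verbatim from the maintainer:

```r
# Sketch only: anchors are computed on the top 2000 variable features,
# while corrected counts are returned for every gene in the object.
library(Seurat)
library(STACAS)

all.genes <- rownames(obj)  # placeholder: the full gene set

obj_integrated <- obj %>%
  SplitObject(split.by = "Dataset") %>%
  FindAnchors.STACAS(anchor.features = 2000) %>%
  IntegrateData.STACAS(dims = 1:20, features.to.integrate = all.genes)
```

Note that integrating all genes returns a dense corrected matrix, which is much larger in memory than the sparse raw counts.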
I hope this helps,
Hi there, I adjusted the code a bit. I don't get an error when I run it (at least initially):

obj_integrated <- obj %>% SplitObject(split.by = "Dataset") %>%

Here `v` is a character vector containing all of the gene symbols. But when I run:
I would have liked to run it semi-supervised. I see in the documentation that there is a way to run the function semi-supervised by setting 'semisupervised = TRUE', but for IntegrateData.STACAS() there isn't a way to specify a metadata column to help with integration? The problem with this dataset is that it is an atlas-like one with many different cell types, so the corrected counts for 2000 anchor features might capture only a fraction of the genes differentially expressed between cell types.
Hi,

obj_integrated <- obj %>% SplitObject(split.by = "Dataset") %>%
  FindAnchors.STACAS(cell.labels = "integration_col2", anchor.features = 3000) %>%
  IntegrateData.STACAS(dims = 1:20, features.to.integrate = v, semisupervised = TRUE) %>%
  RunPCA(npcs = 20) %>%
  RunUMAP(dims = 1:20)

The idea is that cell labels are used to calculate integration anchors between cells of the same type (or without labels); the resulting anchors are then used by the integration function to calculate the joint embedding.
Thanks a lot for your help. Your instructions were clear.
Dear STACAS developers,

The integration method works great, at least on datasets that aren't too big.

I am now trying to integrate a large dataset of 450K cells. The difficulty with this dataset is the expression of lowly expressed genes in a smaller disease-specific population. If I use anchor.features = 5000, the integration works, but I don't capture the lowly expressed genes. I would like to go up to 15K features to make sure I capture them. But when integrating like this:
Idents(object = obj) <- "integration_col2"
# DefaultAssay(object = obj) <- "RNA"
obj_integrated <- obj %>% SplitObject(split.by = "Dataset") %>%
  Run.STACAS(dims = 1:20, anchor.features = 7500, cell.labels = "integration_col2") %>%
  RunUMAP(dims = 1:10)
I get the following error message:
Finding integration vector weights
0% 10 20 30 40 50 60 70 80 90 100%
[----|----|----|----|----|----|----|----|----|----|
**************************************************|
Integrating data
Error in h(simpleError(msg, call)) :
error in evaluating the argument 'x' in selecting a method for function 't': Cholmod error 'problem too large' at file ../Core/cholmod_sparse.c, line 89
In addition: Warning messages:
1: In asMethod(object) :
sparse->dense coercion: allocating vector of size 8.4 GiB
2: In asMethod(object) :
sparse->dense coercion: allocating vector of size 3.3 GiB
3: In asMethod(object) :
sparse->dense coercion: allocating vector of size 1.4 GiB
4: In asMethod(object) :
sparse->dense coercion: allocating vector of size 3.7 GiB
5: In asMethod(object) :
sparse->dense coercion: allocating vector of size 2.0 GiB
6: In asMethod(object) :
sparse->dense coercion: allocating vector of size 1.3 GiB
7: In asMethod(object) :
sparse->dense coercion: allocating vector of size 1.4 GiB
8: In asMethod(object) :
sparse->dense coercion: allocating vector of size 2.3 GiB
It's a memory issue. I am using an instance like this one:

unibi highmem 2xlarge: 56 vCPUs - 933 GB RAM - 50 GB root disk
Image: RStudio-ubuntu20.04 de.NBI (2023-08-17)

This is the largest instance available to me.

Is there a way to process further while saving on RAM? I noticed that the new integrated counts matrix holds numeric values with several decimal places. I wonder if something can be done at this step, or at any other step, to reduce memory use. Basically, any tips would be great, up to the point where I can see those rarer genes appearing in the corrected integrated counts matrix.
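For context, rough arithmetic shows why the dense coercion becomes so large at this scale (illustrative numbers only, not STACAS output):

```r
# Back-of-the-envelope: memory for one dense double-precision copy
# of the corrected matrix at the target feature count.
n_genes <- 15000    # desired number of features to integrate
n_cells <- 450000   # dataset size quoted above
bytes_per_double <- 8

gib <- n_genes * n_cells * bytes_per_double / 2^30
gib  # ~50.3 GiB per dense copy; intermediate copies multiply this
```

Since corrected values are fractional, the integrated matrix cannot stay sparse the way raw counts do, which is why memory grows so quickly with the number of integrated features.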
Thanks very much.