Dataset not integrating when increasing features. #25

Closed
kasumaz opened this issue Aug 20, 2023 · 4 comments

Comments

@kasumaz

kasumaz commented Aug 20, 2023

Dear Stacas developers,

The integration method works great, at least on a dataset that isn't too big.

I am trying to integrate a large dataset with 450K cells. The problem with this dataset is the expression of lowly expressed genes in a smaller, disease-specific population. If I use 5000 anchor features, it works, but I don't capture the genes that are lowly expressed. I would like to try going up to 15K features to make sure I capture them. But when I integrate like this:

Idents(object = obj) <- "integration_col2"
# DefaultAssay(object = obj) <- "RNA"
obj_integrated <- obj %>% SplitObject(split.by = "Dataset") %>%
  Run.STACAS(dims = 1:20, anchor.features = 7500, cell.labels = "integration_col2") %>%
  RunUMAP(dims = 1:10)

I get the following error message:

Finding integration vector weights
0% 10 20 30 40 50 60 70 80 90 100%
[----|----|----|----|----|----|----|----|----|----|
**************************************************|
Integrating data
Error in h(simpleError(msg, call)) :
error in evaluating the argument 'x' in selecting a method for function 't': Cholmod error 'problem too large' at file ../Core/cholmod_sparse.c, line 89
In addition: Warning messages:
1: In asMethod(object) :
sparse->dense coercion: allocating vector of size 8.4 GiB
2: In asMethod(object) :
sparse->dense coercion: allocating vector of size 3.3 GiB
3: In asMethod(object) :
sparse->dense coercion: allocating vector of size 1.4 GiB
4: In asMethod(object) :
sparse->dense coercion: allocating vector of size 3.7 GiB
5: In asMethod(object) :
sparse->dense coercion: allocating vector of size 2.0 GiB
6: In asMethod(object) :
sparse->dense coercion: allocating vector of size 1.3 GiB
7: In asMethod(object) :
sparse->dense coercion: allocating vector of size 1.4 GiB
8: In asMethod(object) :
sparse->dense coercion: allocating vector of size 2.3 GiB

It's a memory issue. I am using an instance like this one:

unibi highmem 2xlarge: 56 VCPUs - 933 GB RAM - 50 GB root disk
Image: RStudio-ubuntu20.04 de.NBI (2023-08-17)

This is the maximum it can go to.

Is there a way to process this further in order to save on RAM? I noticed that the new integrated counts matrix contains numeric values with several decimal places. I wonder if it's possible to do something at this step, or at any other step, to save on RAM. Basically, any tips would be great, to the point where I can see those rarer genes appearing in the corrected integrated counts matrix.

Thanks very much.

@mass-a
Member

mass-a commented Aug 21, 2023

Hello,

In general I would be very careful about using more than 2000-3000 genes as variable features. It is true that increasing this number will include more lowly expressed genes, but it will also include many uninformative genes, resulting in a noisy selection of variable genes. If your rare genes do not appear in the top 2000-3000 most variable genes, it is unlikely they will contribute significantly to defining the integrated space.

That said, if you are interested in obtaining corrected counts for all genes, as per your question, you could try the following:

  1. Run FindAnchors.STACAS() with a reasonable number of variable genes (e.g. 2000);
  2. Run IntegrateData.STACAS() by specifying features.to.integrate = rownames(unintegrated.object), i.e. you ask to calculate corrected counts for all genes in the original object. Or, if this still runs you out of memory, you may simply add the genes you are interested in to the default feature list (see the sketch after this list).
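
For concreteness, here is a minimal sketch of option 2 with a manually extended gene list, following the two steps above. Here genes.of.interest is a hypothetical character vector of your rare genes, and the anchor.features slot of the returned anchor set is an assumption (it mirrors Seurat's AnchorSet class); the function arguments themselves are the same as in the calls shown elsewhere in this thread.

# Minimal sketch: correct counts for the anchor features plus a few extra genes,
# rather than for all of rownames(obj), to keep memory usage down.
genes.of.interest <- c("GENE1", "GENE2")   # hypothetical list of rare genes

obj.list <- SplitObject(obj, split.by = "Dataset")
anchors <- FindAnchors.STACAS(obj.list,
                              anchor.features = 2000,
                              cell.labels = "integration_col2")

# anchor.features slot assumed here, as in Seurat's AnchorSet class
features.keep <- union(anchors@anchor.features, genes.of.interest)

obj_integrated <- IntegrateData.STACAS(anchors,
                                       dims = 1:20,
                                       semisupervised = TRUE,
                                       features.to.integrate = features.keep)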

I hope this helps,
-m

@kasumaz
Author

kasumaz commented Aug 21, 2023

Hi there,
Thanks a lot for your fast reply,

I adjusted the code a bit. I don't get an error when I run this (at least initially):

obj_integrated <- obj %>% SplitObject(split.by = "Dataset") %>%
  IntegrateData.STACAS(dims = 1:20, features.to.integrate = v) %>%
  RunUMAP(dims = 1:10)

v is a character vector containing all of the gene symbols.
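
(For reference, a vector like this can be built directly from the unsplit object, following the rownames() suggestion above; this is a sketch assuming obj still holds the original, unintegrated data.)

# All gene symbols of the unintegrated object
v <- rownames(obj)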

But when I run:

obj_integrated <- obj %>% SplitObject(split.by = "Dataset") %>%
  IntegrateData.STACAS(dims = 1:20, features.to.integrate = v, cell.labels = "integration_col2") %>%
  RunUMAP(dims = 1:10)

Error in IntegrateData.STACAS(., dims = 1:20, features.to.integrate = v, :
  unused argument (cell.labels = "integration_col2")

I would have liked to run it semi-supervised. I see in the documentation for the function that there is a way to run it semi-supervised by setting semisupervised = TRUE, but for IntegrateData.STACAS() there isn't a way to specify a metadata column to help with integration?

The problem with this dataset is that it's like an atlas, with many different cell types, so the corrected counts for 2000 anchor features would end up covering maybe only a fraction of the possible differentially expressed genes between cell types.

@mass-a
Member

mass-a commented Aug 21, 2023

Hi,
as I mentioned in my previous comment, you need to first calculate the integration anchors before running IntegrateData.STACAS(). Using your code:

obj_integrated <- obj %>% SplitObject(split.by = "Dataset") %>%
  FindAnchors.STACAS(cell.labels = "integration_col2", anchor.features = 3000) %>%
  IntegrateData.STACAS(dims = 1:20, features.to.integrate = v, semisupervised = TRUE) %>%
  RunPCA(npcs = 20) %>%
  RunUMAP(dims = 1:20)

The idea is that cell labels are used to calculate integration anchors between cells of the same type (or without labels); the resulting anchors are then used by the integration function to calculate the joint embedding.

(Run.STACAS() is a wrapper for several of these commands, but if you want more control over the results you can run them separately, as above.)

@kasumaz
Author

kasumaz commented Aug 22, 2023

Thanks a lot for your help. Your instructions were clear.
Yeah, the rarer, lowly expressed genes in the transient disease population I am working with can only be observed if I include them in the list of variable genes. They do show up in bulk-seq work on similar cells. This is the best I can do for now with the cloud resources I have. Otherwise, I look forward to seeing more tools from your lab.

@mass-a mass-a closed this as completed Sep 26, 2023