
Lots of memory usage when running evol_indices with many sequences #10

Closed
brycejoh16 opened this issue Jan 16, 2023 · 1 comment

@brycejoh16
Hi EVE team,

I'm running compute_evol_indices.py on a dataset with many variants in a single CSV file (>400k variants, specifically UniProt ID SPG1_STRSG_Olson_2014).

When I try to compute evolutionary indices for these variants, it requires over 100 GB of memory and my job stalls out. I suspect PyTorch may be keeping previously computed batches in memory, because a single batch only requires roughly 1 GB.

It's easy to work around this by breaking up the dataset, but that's rather inconvenient, so it would be great if this issue could be fixed.
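For reference, the workaround I have in mind is roughly the following (the file name and chunk size are illustrative):

```python
# Split a large mutations CSV into smaller chunks that can be scored separately
# with compute_evol_indices.py. File name and chunk size are illustrative.
import pandas as pd

chunk_size = 50_000
mutations = pd.read_csv("SPG1_STRSG_Olson_2014.csv")

for i, start in enumerate(range(0, len(mutations), chunk_size)):
    mutations.iloc[start:start + chunk_size].to_csv(
        f"SPG1_STRSG_Olson_2014_chunk{i}.csv", index=False
    )
```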

Let me know if this issue makes sense, and if it is reproducible.

Take care,
Bryce

@pascalnotin
Collaborator

Dear Bryce,

Computing the evolutionary indices requires creating a prediction_matrix whose size directly depends on a) the number of mutants you want to compute scores for and b) the number of samples from the approximate posterior of the VAE. Additionally, since we populate this matrix batch by batch, the batch_size parameter also plays a role in the total memory used during scoring.
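As a rough back-of-the-envelope illustration (the sample count and dtype below are assumptions for the sake of the example, not necessarily what you ran with):

```python
# Rough estimate of the prediction_matrix footprint (illustrative numbers only).
num_mutants = 400_000   # roughly the size of the SPG1_STRSG_Olson_2014 assay
num_samples = 20_000    # assumed number of samples from the approximate posterior
bytes_per_value = 4     # assuming float32

matrix_gb = num_mutants * num_samples * bytes_per_value / 1e9
print(f"prediction_matrix alone: ~{matrix_gb:.0f} GB")  # ~32 GB with these values
```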

Based on your note, it seems that batch_size is less of an issue; rather, it is the sheer size of the prediction_matrix that drives the memory usage, in particular due to the very large number of mutants in the SPG1_STRSG_Olson_2014 assay.
If you do not care about the standard deviation of scores across samples (for which having access to the full matrix is handy), there is a very simple fix: use a vector of size num_mutants (instead of a prediction_matrix of size num_mutants * num_samples) and sum the scores across samples, rather than persisting every score value across samples in the matrix. This should significantly reduce the memory footprint, will have no impact on the average scores per mutant, and will not require you to break up the dataset. You would, however, lose the ability to easily compute the standard deviation across samples, which is why we coded things that way.
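A minimal sketch of that change, assuming the scoring loop fills the matrix batch by batch (names and values below are illustrative, not the exact code in the repo):

```python
import torch

num_mutants, num_samples, batch_size = 400_000, 20, 2048  # illustrative values

def batch_scores(batch_start, batch_end, sample_idx):
    # Placeholder for the per-mutant scores of one posterior sample
    # (in EVE these would come from the VAE's ELBO computation).
    return torch.randn(batch_end - batch_start)

# Keep a running sum of size num_mutants instead of a
# (num_mutants x num_samples) prediction_matrix.
score_sum = torch.zeros(num_mutants)
for sample_idx in range(num_samples):
    for batch_start in range(0, num_mutants, batch_size):
        batch_end = min(batch_start + batch_size, num_mutants)
        score_sum[batch_start:batch_end] += batch_scores(batch_start, batch_end, sample_idx)

# Same per-mutant averages as before; the per-sample std is no longer available.
avg_scores = score_sum / num_samples
```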

Kind regards,
Pascal
