
Run time of chess sim module #20

Closed
magnitov opened this issue Nov 19, 2020 · 7 comments

@magnitov

Hi,

I wonder what the approximate run time of the sim module would be at different matrix resolutions, window steps, and chromosomes (human genome). Have you provided this data anywhere? I haven't found it in the paper itself.

Thanks,
Mikhail

@nickmachnik
Collaborator

Hi Misha,
I don't think we have ever systematically checked this. @kaukrise @sgalan, please correct me if I am wrong.

Of the factors you mentioned, matrix resolution has the strongest influence on runtime, simply because the amount of data grows quadratically as the bin size decreases. Runtime should depend approximately linearly on the number of windows, which is controlled by the step size and the genome size. I am less sure about the effect of the window size: I think it mostly influences the runtime of the comparison step, which subsets the data already in memory and computes the similarity score. The time there probably grows quadratically with the window size, but that step is very fast, so the overall runtime might not change much because of it.
@kaukrise, do you have anything more helpful on this?
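
To make those scalings concrete, here is a rough back-of-envelope (the chromosome length and parameters below are hypothetical, not a benchmark):

```python
# Rough scaling illustration (hypothetical numbers, not a benchmark):
# matrix entries grow quadratically as the bin size shrinks, while the
# number of windows grows linearly as the step size shrinks.
def n_bins(chrom_len, bin_size):
    return chrom_len // bin_size

def n_windows(chrom_len, window, step):
    return (chrom_len - window) // step + 1

L = 100_000_000  # hypothetical 100 Mb chromosome

# halving the bin size ~quadruples the number of possible matrix entries
print(n_bins(L, 5_000) ** 2 / n_bins(L, 10_000) ** 2)  # 4.0

# halving the step ~doubles the number of windows
print(n_windows(L, 1_000_000, 250_000) / n_windows(L, 1_000_000, 500_000))  # ~2.0
```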

@nickmachnik
Collaborator

nickmachnik commented Nov 19, 2020

In the paper we have one example, in Extended Data Figure 6: chess sim took 100 seconds to process human chr19 on a single CPU (Intel Xeon W, 3 GHz, 128 GB RAM), with a 1 Mb window and a 500 kb step size.
The data was from Bonev et al. 2017, binned at 5 kb.
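
Plugging those parameters into the scaling argument above gives a rough sense of the problem size (approximate figures; chr19 is about 59 Mb):

```python
# Approximate problem size for the Extended Data Figure 6 example:
chrom_len = 59_000_000              # human chr19, ~59 Mb
bin_size = 5_000                    # 5 kb bins
window, step = 1_000_000, 500_000   # 1 Mb window, 500 kb step

n_bins = chrom_len // bin_size                 # 11,800 bins
n_pixels = n_bins * (n_bins + 1) // 2          # ~70M possible cis entries
n_windows = (chrom_len - window) // step + 1   # 117 window positions
print(n_bins, n_pixels, n_windows)
```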

@kaukrise
Collaborator

I don't have too much to add, except that the O/E calculation is by far the most time-consuming step. If you want to run many CHESS calculations on the same dataset, it is strongly recommended to convert your data beforehand: either to FAN-C format (you could use fanc from-cooler or fanc from-txt) or to the very flexible Juicer format, or, if you are working with sparse TXT matrices, by pre-computing O/E with the chess oe command.
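
For illustration, a one-time conversion could look something like the sketch below. The file names are hypothetical and the argument order is an assumption on my part, so check fanc from-cooler --help and chess oe --help for the exact signatures:

```python
# Hypothetical one-time conversion so that repeated chess runs skip the
# slow loading / O-E step. The argument order is an assumption -- check
# `fanc from-cooler --help` and `chess oe --help` for the real signatures.
import subprocess

# cooler -> FAN-C format
subprocess.run(["fanc", "from-cooler", "sample.cool", "sample.fanc"], check=True)

# sparse TXT -> pre-computed O/E with chess itself
subprocess.run(["chess", "oe", "sample_sparse.txt", "sample_oe.txt"], check=True)
```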

@magnitov
Author

Ok, thanks.

I am currently trying this at 20 and 50 kb resolution for the whole human genome, so let's see. What I have noticed so far is that the longest part is loading the reference/query contact matrices.

@kaukrise, yes, I had to switch to .hic format, since .cool is indeed very time-consuming. What I can suggest is to support pre-computed obs/exp matrices for the .cool format, which can easily be generated with cooltools: https://github.com/open2c/cooltools. Maybe you can consider this in the future.
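
For reference, deriving O/E from a .cool with cooltools could look roughly like this (a sketch assuming cooltools >= 0.5; function and column names vary between versions):

```python
# Sketch: deriving a dense O/E matrix from a .cool with cooltools.
# Assumes cooltools >= 0.5; the column names ("dist", "balanced.avg")
# may differ in other versions.
import cooler
import cooltools
import numpy as np

clr = cooler.Cooler("sample.mcool::resolutions/20000")

# per-diagonal (distance-dependent) expected for each chromosome
exp = cooltools.expected_cis(clr)

# observed (balanced) matrix for one chromosome
obs = clr.matrix(balance=True).fetch("chr19")

# expected value for each bin pair, looked up by diagonal distance
exp19 = (
    exp.loc[exp["region1"] == "chr19"]
    .set_index("dist")["balanced.avg"]
    .reindex(range(obs.shape[0]))
    .to_numpy()
)
idx = np.arange(obs.shape[0])
oe = obs / exp19[np.abs(idx[:, None] - idx[None, :])]
```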

@kaukrise
Collaborator

If you already have O/E matrices, even in .cool format, you should be able to use the --oe-input flag to skip the O/E conversion. Isn't that right, @nickmachnik?

@magnitov
Author

@kaukrise, as far as I know, the .cool format does not store obs/exp counts, only the raw/corrected matrix itself. It is therefore rather hard to get around this and create a new .cool with obs/exp counts only.

@kaukrise
Collaborator

Okay, then I misunderstood your comment. I thought cooltools allowed you to generate a .cool file that only contains O/E values (as matrix entries).
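
For what it's worth, writing such a file by hand does look feasible with cooler's Python API. A rough, untested sketch, using a crude expected computed from the stored pixels only (a proper expected, e.g. from cooltools, would also count the empty bin pairs):

```python
# Rough sketch (untested): pack O/E values into the "count" column of a
# new .cool. The per-diagonal expected below averages only the stored
# (nonzero) pixels -- a proper expected would also count empty bin pairs.
import cooler
import numpy as np

clr = cooler.Cooler("sample.cool")
bins = clr.bins()[:][["chrom", "start", "end"]]
pix = clr.pixels()[:]

# keep cis pixels only
chrom1 = bins["chrom"].to_numpy()[pix["bin1_id"].to_numpy()]
chrom2 = bins["chrom"].to_numpy()[pix["bin2_id"].to_numpy()]
pix = pix[chrom1 == chrom2].copy()
pix["chrom"] = bins["chrom"].to_numpy()[pix["bin1_id"].to_numpy()]
pix["diag"] = pix["bin2_id"] - pix["bin1_id"]

# crude per-chromosome expected: mean count at each diagonal
expected = pix.groupby(["chrom", "diag"])["count"].transform("mean")
pix["count"] = pix["count"] / expected  # observed / expected

cooler.create_cooler(
    "sample_oe.cool",
    bins=bins,
    pixels=pix[["bin1_id", "bin2_id", "count"]],
    dtypes={"count": np.float64},  # O/E values are floats
)
```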
