
Include config file for large raw files? #169

Open
thorellk opened this issue May 19, 2021 · 11 comments

@thorellk
Collaborator

I just bumped into the issue of having excessively large raw fastq files per sample, which causes fastp to time out with the current allocations in the rackham config. Since this is quite an unusual situation, I don't think it is necessary to change the default config, but it would be good to have an additional one for "fat" datasets. What do you think?

@boulund
Member

boulund commented May 19, 2021

It would be nice if we could decouple the environment specifications from the time specifications somehow. We would encounter the same issue in the gandalf config as well if we had such large input files, and it feels like it could quickly get messy if we need to keep multiple versions of each system profile.
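
One way this could work (the file name and process selectors below are just illustrative) is to keep the time settings in a small, optional overlay config that users add on top of their system profile with an extra -c:

    // conf/time_fat.config: hypothetical overlay for very large datasets,
    // loaded with: nextflow run ... -profile rackham -c conf/time_fat.config
    process {
        withName: 'FASTP|SHOVILL|KRAKEN2' {
            time = 6.h
        }
    }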

@abhi18av
Collaborator

Hello team,

I am not sure how the cluster environments work, but perhaps we could explore computing the time limit dynamically?

Something like (pseudocode):

time = 20.m * task.attempt
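
For task.attempt to grow past 1, the process would also need to retry on failure. A minimal sketch of what I have in mind, assuming a config profile and an illustrative process name:

    process {
        withName: 'FASTP' {
            errorStrategy = 'retry'
            maxRetries    = 3
            time          = { 20.m * task.attempt }   // 20.m, 40.m, 60.m, ...
        }
    }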

@thorellk
Collaborator Author

Yeah, that would be an alternative @abhi18av, if one could benchmark how much time is needed per 100 MB of input file or something. It should "only" be fastp, shovill and Kraken that are affected by input file size...

@boulund
Member

boulund commented May 21, 2021

I think it should be possible to implement it so it actually reads the size of the input file for the process and computes the time allocation based on that. That, perhaps in combination with extending the allocation on failures, would make sense and be more efficient. The retry-only alternative proposed above would unnecessarily spend core hours on failed attempts for users with mainly (too) large samples, since the allocation would only be increased after a failure.
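
Roughly what I mean, assuming the process input is a path tuple called reads and using a made-up 500 MB step that would need benchmarking:

    // Base allocation scales with input size; task.attempt extends it
    // further on retries (requires errorStrategy 'retry')
    time { 20.m * (reads[0].size().intdiv(500_000_000) + 1) * task.attempt }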

@abhi18av
Collaborator

abhi18av commented Jun 9, 2021

Hi team,

I came across this possible solution elsewhere; perhaps we could explore it here:

  time { 20.m * sample.size() }

@boulund
Member

boulund commented Jun 9, 2021

Neat. I wasn't aware of that functionality!

I guess we would need some simple calculation based on the sample size to come up with a good multiplier for the time, perhaps quantised to whole steps so we don't end up with weird allocation requests. There should also be a minimum allocation, I think :). Can you guys help me come up with something that would make sense? I'm thinking a "normal" sample would get a time allocation of 20.m (i.e. a multiplier of 1), while larger samples would increase it in whole integer steps depending on the size of the sample file: 20.m * 2 (medium-sized file), 20.m * 3 (large file), etc.

Not sure if sample.size() would work in our context, as there is no object in the FASTP process definition called sample. We might have to see if it works with path objects instead of file (i.e. reads[0].size()), or consider rewriting the process definition slightly to use file instead of path.

@abhi18av
Collaborator

abhi18av commented Jun 9, 2021

We might have to see if it works with path objects instead of file (i.e. reads[0].size())

We could rely on the toFile method of a Java Path object.

Though I think it'd be best to encapsulate this functionality in a Groovy function or closure.

The final solution would look something like:

  time { computeTime(reads[0]) }

@boulund
Member

boulund commented Jun 10, 2021

Would someone have time to prototype something around this?

@abhi18av
Collaborator

abhi18av commented Jun 10, 2021

I think the function should look something like this; however, I can't think of a way to test it on my infra.

def computeTime(inputPathObject) {
    // Input size in bytes (Groovy adds size() to java.io.File)
    def fileSize = inputPathObject.toFile().size()

    // One whole-integer step per 500 MB of input; the 500 MB threshold is
    // a placeholder that needs calibrating against real runs
    def factor = fileSize.intdiv(500_000_000) + 1

    // factor is at least 1, so 20.m remains the minimum allocation
    return 20.m * factor
}

This function might need to be adapted based on the test runs.
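
Hooking it into the process would then look roughly like this; the input declaration below is a guess at the actual FASTP definition:

    process FASTP {
        // Scale the time limit by the size of the first read file
        time { computeTime(reads[0]) }

        input:
        tuple val(sample_id), path(reads)

        // ...
    }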

@thorellk
Collaborator Author

thorellk commented Jun 10, 2021 via email

@abhi18av
Collaborator

Congrats to @emilio-r ! 🎉
