Working definition: obtain the wall-clock execution time and peak memory usage (in MiB) of a method's execution. If a method comprises multiple rules or is run on multiple samples, sum the times and take the maximum MiB.
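The aggregation rule above (sum the times, take the peak memory) could be sketched as follows. The record format here is an illustrative assumption, not an agreed-upon benchmark schema:

```python
def aggregate(measurements):
    """Aggregate per-rule/per-sample measurements for one method.

    measurements: list of (wall_seconds, peak_mib) tuples, one per
    rule or sample. Returns (total_seconds, max_mib) per the working
    definition: sum of times, maximum of memory.
    """
    total_seconds = sum(seconds for seconds, _ in measurements)
    peak_mib = max(mib for _, mib in measurements)
    return total_seconds, peak_mib

# e.g. a method consisting of three rules:
print(aggregate([(12.5, 800.0), (3.0, 2048.0), (40.0, 512.0)]))
# -> (55.5, 2048.0)
```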
What we need to think about to properly benchmark the methods:

- Which dataset, or which samples, should be used?
- How many replicates to use (if any), and how to report them? E.g. box plots over 10 executions.
- On which platform should it be tested? This also influences how the results can be obtained; e.g. on AWS there is probably a way to collect compute usage automatically.
- The benchmark should be independent of the workflow implementation. That is, a Nextflow- or Snakemake-specific approach could introduce systematic differences caused by the workflow engine rather than the method itself.
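One workflow-agnostic option would be to wrap each method's command with GNU `time -v` and parse its stderr report, which works the same regardless of whether Nextflow or Snakemake launched the command. A minimal parsing sketch, assuming the standard GNU time verbose output format (the function name is a hypothetical helper):

```python
import re

def parse_gnu_time_v(stderr_text):
    """Extract peak RSS in MiB from GNU `time -v` output.

    GNU time reports a line of the form
        Maximum resident set size (kbytes): <N>
    on stderr; we convert kbytes to MiB. Returns None if the
    line is missing (e.g. the command was not run under time -v).
    """
    match = re.search(r"Maximum resident set size \(kbytes\): (\d+)", stderr_text)
    return int(match.group(1)) / 1024 if match else None

# e.g. parsing a captured stderr fragment:
print(parse_gnu_time_v("Maximum resident set size (kbytes): 204800"))
# -> 200.0
```

In practice this would be combined with something like `subprocess.run(["/usr/bin/time", "-v", *cmd], ...)` and a wall-clock timer around the call; the aggregation over rules/samples then follows the working definition above.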
Create specification for Q1.