Continue analysis even when individual files fail the filtering threshold #641
Conversation
Test data in nf-core/test-datasets#1008
Thanks! That indeed looks elegant; I'm going to test it today and give feedback.
Awesome, thanks! That was a good idea to change the output of dada2_filtntrim in that way. The metadata, however, makes the test_failed profile fail for me, though that is easily fixable.
The changelog could also use an update, and you are welcome to add yourself to the contributors in the README credits section if you want.
Ok, I think I'm in good shape now, @d4straub. Let me know what you think!
Great, thanks, all looks good.
However, there is one more problem (apologies for not having predicted this earlier; it is a relatively new feature). For the new way of testing with GitHub, at least a `failed.nf.test` file in `tests/pipeline/` seems to be required. That file specifies tests checking that output files are present and, in conjunction with `failed.nf.test.snap`, can even verify md5sums. I think the presence/md5sum check is only needed for central files, here probably:
cutadapt/cutadapt_summary.tsv
barrnap/summary.tsv
dada2/DADA2_table.tsv
overall_summary.tsv
But I can also add that later, so I approve anyway. (I think you cannot merge the PR with failing tests; let me know if you want me to merge it.) However, it would be nice if you still want to add that last piece!
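A `failed.nf.test` along those lines might look roughly like the sketch below. This is only an illustration of the nf-test structure described above; the test name, tag, parameters, and exact assertions are assumptions, not the pipeline's actual test file:

```groovy
// Hypothetical sketch of tests/pipeline/failed.nf.test -- names, tags,
// and paths are assumptions for illustration only.
nextflow_pipeline {

    name "Test samples failing the filtering threshold"
    script "main.nf"
    tag "test_failed"

    test("Pipeline continues when individual samples fail filtering") {
        when {
            params {
                outdir = "$outputDir"
            }
        }
        then {
            assertAll(
                // the run should finish successfully despite failed samples
                { assert workflow.success },
                // central files checked via the companion .snap file (md5sums)
                { assert snapshot(path("$outputDir/overall_summary.tsv"),
                                  path("$outputDir/dada2/DADA2_table.tsv")).match() },
                // other outputs checked for presence only
                { assert new File("$outputDir/barrnap/summary.tsv").exists() },
                { assert new File("$outputDir/cutadapt/cutadapt_summary.tsv").exists() }
            )
        }
    }
}
```

The matching `failed.nf.test.snap` would then be generated on the first run and pin the md5sums of the snapshotted files.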
The test validation piece is not something I'm familiar with, so I would vote for merging as-is and adding those additional files in a subsequent PR. It does appear that I cannot merge with the tests failing, so if you could squash and merge, that would be greatly appreciated! Thanks also for all your help as I was getting this together, @d4straub. My first real contribution to nf-core!
This PR addresses the issue that when samples fail to pass the filtering threshold, they will:
I've added a test case to reproduce the bug, and the code I've added should address it.
This should be a better solution than the one outlined in #638 (which I'm now closing).
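For context, the general shape of such a fix, routing samples with no reads surviving filtering into a reporting-only channel so the rest of the workflow continues, can be sketched in Nextflow as below. All process names, output shapes, and channel names here are illustrative assumptions, not the pipeline's actual code:

```groovy
// Hypothetical sketch only: DADA2_FILTNTRIM, its output tuple, and the
// downstream process names are assumptions for illustration.
ch_branched = DADA2_FILTNTRIM.out.reads
    .branch { meta, reads, passed_count ->
        failed: passed_count.toInteger() == 0   // no reads survived filtering
        passed: true
    }

// Failed samples are only collected into a report...
ch_branched.failed
    .map { meta, reads, count -> meta.id }
    .collectFile( name: 'samples_failing_filter.txt', newLine: true )

// ...while the remaining samples continue through the workflow.
DADA2_ERR( ch_branched.passed.map { meta, reads, count -> [ meta, reads ] } )
```

The `branch` operator keeps the workflow running for passing samples instead of aborting when any single sample drops below the threshold.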
PR checklist
- Make sure your code lints (`nf-core lint`).
- Ensure the test suite passes (`nextflow run . -profile test,docker --outdir <OUTDIR>`).
- `docs/usage.md` is updated.
- `docs/output.md` is updated.
- `CHANGELOG.md` is updated.
- `README.md` is updated (including new tool citations and authors/contributors).