Thousands of `gridss_minimal_reproduction_data_for_error*` #503
Additional observations on the jobs I'm seeing this on (assembly step, 8 CPUs). Are each of these reproduction files due to these log messages? One appears 2618 times, the other 1150 times.
This is configurable with `assembly.maximumReproductionExportPackages`.
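The key named above could be set in a GRIDSS properties-style configuration file. The sketch below is an assumption for illustration: the key name is taken verbatim from the comment above, but the value `0` (to disable export entirely) and the file layout should be verified against the defaults shipped with your GRIDSS version.

```
# gridss.properties (sketch, not an official example)
# Cap the number of minimal-reproduction packages GRIDSS exports on
# assembly-step errors; 0 is assumed to disable export entirely.
assembly.maximumReproductionExportPackages = 0
```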
Are you able to send through that error package containing only 14 reads? I should be able to identify and fix the root cause from a data set that small.
That's unrelated and fixed in 2.12.0. That particular error should only be a warning, as it just falls back to loading from the ...
I found that rebuilding the gridsscache and img files with the matched version suppressed this, but good to know.
Please feel free to drop me a line if you want me to try out an intermediate version; I believe I should be able to build the image easily given the recent changes to the Dockerfile.
@d-cameron, is there any update, or do you need further test data? Sorry to chase. A release with the new config variable (...
Is there any progress on this? If it's not possible to handle, please can a release with ...
Unfortunately, the file attached to the issue appears to be corrupt and I am unable to extract the reads from it. Could you attach another repro file that has the same issue?
It's possible that GRIDSS got killed while it was still writing the minimal reproduction file, resulting in a truncated output. Can you check whether you have any fully-intact repro files?
gridss_minimal_reproduction_data_for_error_844.zip attached. I've done a round-trip check that the uploaded file can be unpacked; sorry about that.
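The round-trip check described above can be automated before uploading. This is a minimal sketch using Python's standard `zipfile` module, not a tool from the GRIDSS project; `is_intact_zip` is a name invented here for illustration.

```python
import zipfile

def is_intact_zip(path: str) -> bool:
    """Round-trip check: return True if the archive opens and every
    member's CRC verifies; False for truncated or corrupt files."""
    try:
        with zipfile.ZipFile(path) as zf:
            # testzip() re-reads every member and returns the name of
            # the first bad one, or None if all CRCs check out.
            return zf.testzip() is None
    except zipfile.BadZipFile:
        return False
```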
Custom SAM tag collision occurs with reads that already carry an `aa` tag. In retrospect, using "aa" was not the best idea. The workaround is to run ...
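The maintainer's actual workaround command is elided above. Purely as an illustration of the kind of pre-processing that could avoid the collision, here is a stdlib-only sketch that strips a given optional tag from SAM text lines; `strip_sam_tag` is a hypothetical helper invented for this example, not part of GRIDSS.

```python
def strip_sam_tag(sam_line: str, tag: str = "aa") -> str:
    """Remove every optional TAG:TYPE:VALUE field with the given
    two-letter tag from one tab-separated SAM alignment line.
    Header lines (starting with '@') are returned unchanged."""
    if sam_line.startswith("@"):
        return sam_line
    fields = sam_line.rstrip("\n").split("\t")
    # The first 11 columns are mandatory; optional fields follow.
    kept = fields[:11] + [f for f in fields[11:]
                          if not f.startswith(tag + ":")]
    return "\t".join(kept)
```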
Sorry, I've just noticed you're going to handle this in code, I didn't spot the commit above. |
Is there any possibility of a hotfix release with this update? I understand this may be low priority for your internal use but it is exceptionally valuable to us. |
I'll push a new release out early next week. I've been working on clearing the GRIDSS issues backlog and adding a few new features, but some of that has taken longer than expected.
On Tue, Aug 3, 2021 at 11:13 PM Keiran Raine wrote:
Thank you, very much appreciated.
Hi,
Is there any way to suppress the generation of these files and folders when an assembly step has issues?
gridss_minimal_reproduction_data_for_error*
I understand the intent, but I don't want them generated on a normal run, only when I'm trying to debug.
I don't believe I'm doing anything unusual (2.11.1):
These are catastrophic when written to a non-local file system, as each folder is ~2.1 GB (compressing to ~3 MB). We're seeing this cause significant I/O load on our Lustre file system.
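As a stopgap until the export can be suppressed, leftover reproduction folders could be swept up after a run. A minimal sketch, assuming the folders match the glob pattern quoted in this issue; `remove_reproduction_dirs` is a name invented here, and a dry-run default is used so nothing is deleted by accident.

```python
import glob
import os
import shutil

def remove_reproduction_dirs(work_dir: str = ".",
                             dry_run: bool = True) -> list:
    """Find (and optionally delete) GRIDSS minimal-reproduction export
    directories left behind by failed assembly steps. With
    dry_run=True, only report what would be removed."""
    pattern = os.path.join(work_dir,
                           "gridss_minimal_reproduction_data_for_error*")
    targets = sorted(glob.glob(pattern))
    if not dry_run:
        for path in targets:
            shutil.rmtree(path, ignore_errors=True)
    return targets
```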
Thanks