I'm trying to run xTea on a series of samples on an HPC cluster. However, far too many temporary files are being created under the "tmp/cns" folders (e.g. 240k files in one "Alu/tmp/cns" folder, with names ending in ".disc_pos" or ".clipped.fq"). I'm working with Illumina short-read data, and the cluster enforces a limit on the total number of files (inode quota), so my jobs get cancelled once that limit is reached.
So my questions are:
Is it normal for ~200k temporary files to be created during a run? (I see that a --clean option exists for long reads, but I'm working with short-read Illumina data, so that isn't applicable.)
Do you have any other suggestions for working around my cluster's file count limit? (A quick sketch of how I'm checking the counts is below.)
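For reference, this is roughly how I'm monitoring the file counts; the `sample_*/...` globs are just placeholders for my own working directory layout:

```bash
# Count files under each per-repeat-type tmp/cns folder.
# The sample_*/<TE>/tmp/cns paths are placeholders for my own layout;
# adjust the glob to wherever xTea writes its working folders.
for d in sample_*/Alu/tmp/cns sample_*/L1/tmp/cns sample_*/SVA/tmp/cns; do
    [ -d "$d" ] || continue
    printf '%s\t%s\n' "$(find "$d" -type f | wc -l)" "$d"
done | sort -rn

# Overall inode usage on this filesystem (the quota the cluster enforces).
df -i .
```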
Most of the intermediate files are deleted at the end of the run. It looks like you are running on a BAM with a very large number of clipped reads. One thing you can try is running with user-specified parameters (the --user option) and setting higher cutoffs.
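For example, something along these lines (a rough sketch only: the paths and list files are placeholders, and while --user is the option mentioned above, the cutoff flag names --nclip/--nd and the example values are assumptions, so please check the xTea README or `xtea --help` for the exact names and reasonable values for your coverage):

```bash
# Sketch: generate the run scripts with user-specified cutoffs instead of the
# automatically calculated ones. Paths, sample list and BAM list are placeholders.
# --user is the option mentioned above; the cutoff flags (--nclip, --nd) and the
# values 5/6 are assumptions -- verify the exact flag names against `xtea --help`
# or the xTea README for your version.
xtea -i sample_id.txt \
     -b illumina_bam_list.txt \
     -p ./xtea_work/ \
     -o run_xtea.sh \
     -l ./rep_lib_annotation/ \
     -r ./reference/genome.fa \
     --xtea ./xTea/xtea/ \
     --user \
     --nclip 5 \
     --nd 6
```

Higher clipped-read and discordant-pair cutoffs should reduce the number of candidate sites, and therefore the number of per-site temporary files written under tmp/cns.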
Hi Simon,