
temp files burst out #103

Closed
sidi-yang opened this issue Apr 10, 2024 · 2 comments

Comments

@sidi-yang

Hi Simon,

I'm trying to run xtea on a series of samples on an HPC cluster. However, the "tmp/cns" folders fill up with a huge number of temp files (e.g. 240k files in one sample's "Alu/tmp/cns" folder, with names ending in ".disc_pos" or ".clipped.fq"). I am working with short-read Illumina data on a cluster that caps the total file count (inode quota), so my job is cancelled once that limit is reached.
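For context, a quick way to count these files before the quota kills the job (a minimal sketch; the `Alu/tmp/cns` path and the 240k quota are from my setup and will differ elsewhere):

```python
# Minimal sketch: count temp files under one sample's cns folder so a run
# can be monitored before it hits the cluster's inode quota.
# The path and quota below are from my setup; adjust for yours.
from pathlib import Path

CNS_DIR = Path("Alu/tmp/cns")   # per-sample temp folder that fills up
INODE_QUOTA = 240_000           # assumed cluster file-count limit

n_files = sum(1 for p in CNS_DIR.rglob("*") if p.is_file())
print(f"{n_files} temp files in {CNS_DIR} ({n_files / INODE_QUOTA:.0%} of quota)")
```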

So my questions are:

  1. Is it normal for ~200k temp files to be created during the run? (I see that a --clean option exists for long reads, but I'm working with short-read Illumina data, so it is not applicable.)
  2. Do you have any other suggestions for working around my cluster's file count limit?
@simoncchu
Collaborator

Most of the intermediate files will be deleted at the end. It seems you are running on a BAM with lots of clipped reads. One thing you can try is running with user-specified parameters (the --user option) and setting higher cutoffs.
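To illustrate why a higher cutoff helps (a hypothetical sketch of the idea, not xTea's actual code): each candidate site with enough clipped reads gets its own temp files (".disc_pos", ".clipped.fq", ...), so raising the cutoff shrinks the candidate set and, with it, the number of files in tmp/cns.

```python
# Hypothetical illustration (not xTea's real implementation): only sites
# passing the clipped-read cutoff produce temp files, so a higher cutoff
# means fewer candidate sites and fewer files on disk.
def candidate_sites(clip_counts: dict[str, int], cutoff: int) -> list[str]:
    """Keep only sites with at least `cutoff` clipped reads."""
    return [site for site, n in clip_counts.items() if n >= cutoff]

clip_counts = {"chr1:10500": 3, "chr1:22000": 12, "chr2:5100": 7}
print(len(candidate_sites(clip_counts, cutoff=2)))   # 3 sites pass -> more temp files
print(len(candidate_sites(clip_counts, cutoff=10)))  # 1 site passes -> far fewer files
```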

@sidi-yang
Author

I solved this problem by editing the path of the cns folder to point to scratch space :)
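For anyone hitting the same limit: one way to do this without editing the tool is to symlink the tmp folder onto scratch before launching xtea (a sketch; the SCRATCH environment variable and the paths are assumptions about your cluster layout):

```python
# Sketch of the workaround: point the per-sample tmp/cns folder at scratch
# (which typically has a much higher or no inode quota) before running xtea.
# SCRATCH and the local path are assumptions; adjust for your cluster.
import os
from pathlib import Path

scratch = Path(os.environ["SCRATCH"]) / "xtea_tmp" / "Alu_cns"
scratch.mkdir(parents=True, exist_ok=True)

local = Path("Alu/tmp/cns")
local.parent.mkdir(parents=True, exist_ok=True)
if not local.exists():
    local.symlink_to(scratch, target_is_directory=True)
```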
