The default pipe size on Linux systems is 64 KiB. This can be limiting. I recently worked on fastqsplitter and decided to check the influence of pipe size. The results are quite impressive. With the default pipe size:
$ hyperfine -w 1 -r 10 'fastqsplitter big2.fastq.gz -n 3 -p test -b 48K'
Benchmark #1: fastqsplitter big2.fastq.gz -n 3 -p test -b 48K
Time (mean ± σ): 15.527 s ± 0.126 s [User: 39.864 s, System: 1.531 s]
Range (min … max): 15.314 s … 15.692 s 10 runs
We open 1 file and write 3. That means 4 open files, and with one thread per file, 4 threads. Given that total CPU time is 39.864 + 1.531 = 41.4 seconds and the wall-clock runtime is 15.527 seconds, only about 2.66 threads are used on average. I can see this in the task manager as well: none of the pigz -p 1 processes reaches 100%.
With pipe size tweaks:
$ hyperfine -w 3 -r 10 'fastqsplitter big2.fastq.gz -n 3 -p test -b 48K'
Benchmark #1: fastqsplitter big2.fastq.gz -n 3 -p test -b 48K
Time (mean ± σ): 8.544 s ± 0.067 s [User: 32.111 s, System: 1.141 s]
Range (min … max): 8.445 s … 8.643 s 10 runs
Notice how the wall-clock time is reduced by more than 7 seconds! Since the amount of compute work has not changed, this probably means the program was spending a lot of time actively waiting (reads from an empty pipe and writes to a full pipe both block).
Also note that runtime × 4 ≈ total CPU time: all threads are almost fully saturated, and the wall-clock time is much better as a result.
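The average thread utilization quoted above is simply total CPU time divided by wall-clock time. A quick check of both runs, using the hyperfine numbers from this issue:

```python
# Average thread utilization = CPU time (user + system) / wall-clock time.
# Numbers are taken from the hyperfine runs above.
def avg_threads(user: float, system: float, wall: float) -> float:
    return (user + system) / wall

default_pipes = avg_threads(user=39.864, system=1.531, wall=15.527)
big_pipes = avg_threads(user=32.111, system=1.141, wall=8.544)

print(f"default pipe size: {default_pipes:.2f} of 4 threads busy")  # ≈ 2.67
print(f"enlarged pipes:    {big_pipes:.2f} of 4 threads busy")      # ≈ 3.89
```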
For comparison, here is a run in xopen mode with threads=0, so we are sure no pipes are involved:
$ hyperfine -w 3 -r 10 'fastqsplitter big2.fastq.gz -n 3 -p test -b 48K -t 0'
Benchmark #1: fastqsplitter big2.fastq.gz -n 3 -p test -b 48K -t 0
Time (mean ± σ): 30.990 s ± 0.198 s [User: 30.542 s, System: 0.393 s]
Range (min … max): 30.725 s … 31.307 s 10 runs
Notice how system time is much lower because nothing needs to be written to pipes. With the pipe size tweaks, the total overhead of the pipes is about 700 ms of system time and 1500 ms of user time, which is acceptable.
With the default pipe size the overhead is much larger. Unacceptably so, I would say. The default pipe size on Linux is 64 KiB and the default maximum pipe size is 1024 KiB, so setting all pipes to the maximum only increases memory use by 1 MiB per output file. On modern systems this is practically unnoticeable, while the speed improvements are substantial.
I recently took some Stack Overflow answers on increasing the pipe size on Linux and worked them into a script: https://github.com/biowdl/mkbigfifo/