
zfs very slow when removing files since upgrade to 0.6.5.2 (CentOS 6.6) #3870

Closed
mauromozzarelli opened this issue Oct 2, 2015 · 3 comments
Labels: Status: Inactive (Not being actively updated), Type: Performance (Performance improvement or performance problem)

Comments

@mauromozzarelli

Removing a large file has become very slow since updating to 0.6.5.2.

Please see an strace of rm on a 10 GB file:

[root@sirius sirius]# strace -T rm brick0.f0.150914.xfsdump.bz2
execve("/bin/rm", ["rm", "brick0.f0.150914.xfsdump.bz2"], [/* 36 vars */]) = 0 <0.000355>
brk(0) = 0x1171000 <0.000166>
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f2ce6976000 <0.000023>
access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory) <0.000064>
open("/opt/mqm/java/lib/tls/x86_64/libc.so.6", O_RDONLY) = -1 ENOENT (No such file or directory) <0.000039>
stat("/opt/mqm/java/lib/tls/x86_64", 0x7ffe7b142ba0) = -1 ENOENT (No such file or directory) <0.000023>
open("/opt/mqm/java/lib/tls/libc.so.6", O_RDONLY) = -1 ENOENT (No such file or directory) <0.000023>
stat("/opt/mqm/java/lib/tls", 0x7ffe7b142ba0) = -1 ENOENT (No such file or directory) <0.000023>
open("/opt/mqm/java/lib/x86_64/libc.so.6", O_RDONLY) = -1 ENOENT (No such file or directory) <0.000024>
stat("/opt/mqm/java/lib/x86_64", 0x7ffe7b142ba0) = -1 ENOENT (No such file or directory) <0.000022>
open("/opt/mqm/java/lib/libc.so.6", O_RDONLY) = -1 ENOENT (No such file or directory) <0.000024>
stat("/opt/mqm/java/lib", {st_mode=S_IFDIR|0555, st_size=4096, ...}) = 0 <0.000023>
open("/etc/ld.so.cache", O_RDONLY) = 3 <0.000024>
fstat(3, {st_mode=S_IFREG|0644, st_size=155991, ...}) = 0 <0.000019>
mmap(NULL, 155991, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f2ce694f000 <0.000023>
close(3) = 0 <0.000018>
open("/lib64/libc.so.6", O_RDONLY) = 3 <0.000030>
read(3, "\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0p\356\301]6\0\0\0"..., 832) = 832 <0.000045>
fstat(3, {st_mode=S_IFREG|0755, st_size=1926520, ...}) = 0 <0.000020>
mmap(0x365dc00000, 3750152, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x365dc00000 <0.000023>
mprotect(0x365dd8a000, 2097152, PROT_NONE) = 0 <0.000025>
mmap(0x365df8a000, 20480, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x18a000) = 0x365df8a000 <0.000037>
mmap(0x365df8f000, 18696, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x365df8f000 <0.000346>
close(3) = 0 <0.000019>
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f2ce694e000 <0.000020>
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f2ce694d000 <0.000020>
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f2ce694c000 <0.000020>
arch_prctl(ARCH_SET_FS, 0x7f2ce694d700) = 0 <0.000021>
mprotect(0x365df8a000, 16384, PROT_READ) = 0 <0.000030>
mprotect(0x365da1f000, 4096, PROT_READ) = 0 <0.000052>
munmap(0x7f2ce694f000, 155991) = 0 <0.000034>
brk(0) = 0x1171000 <0.000019>
brk(0x1192000) = 0x1192000 <0.000021>
open("/usr/lib/locale/locale-archive", O_RDONLY) = 3 <0.000030>
fstat(3, {st_mode=S_IFREG|0644, st_size=99158576, ...}) = 0 <0.000018>
mmap(NULL, 99158576, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f2ce0abb000 <0.000023>
close(3) = 0 <0.000019>
ioctl(0, SNDCTL_TMR_TIMEBASE or SNDRV_TIMER_IOCTL_NEXT_DEVICE or TCGETS, {B38400 opost isig icanon echo ...}) = 0 <0.000056>
newfstatat(AT_FDCWD, "brick0.f0.150914.xfsdump.bz2", {st_mode=S_IFREG|0600, st_size=10619986199, ...}, AT_SYMLINK_NOFOLLOW) = 0 <0.000055>
geteuid() = 0 <0.000030>
unlinkat(AT_FDCWD, "brick0.f0.150914.xfsdump.bz2", 0) = 0 <111.167143>
close(0) = 0 <0.000028>
close(1) = 0 <0.000023>
close(2) = 0 <0.000032>
exit_group(0) = ?
+++ exited with 0 +++

At the time this was the only zfs operation on the pool.
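For reference, the slow step above is the unlinkat() syscall itself (111 s as reported by strace -T). A minimal C sketch for timing that same call directly, without tracing overhead, could look like the following; the program name and usage here are hypothetical and not part of the original report.

/* time_unlink.c - hypothetical helper: time a single unlinkat() call,
 * the same syscall rm issues, as measured above with strace -T.
 * Build: cc -O2 -o time_unlink time_unlink.c   (add -lrt on older glibc, e.g. CentOS 6)
 * Usage: ./time_unlink <file>
 */
#include <fcntl.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    /* AT_FDCWD resolves the path relative to the current directory,
     * matching the unlinkat(AT_FDCWD, ...) call in the trace. */
    if (unlinkat(AT_FDCWD, argv[1], 0) != 0) {
        perror("unlinkat");
        return 1;
    }

    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("unlinkat(%s) took %.3f s\n", argv[1],
        (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9);
    return 0;
}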

nedbass pushed a commit to nedbass/zfs that referenced this issue Oct 2, 2015
Commit b39c22b set the READ_SYNC and WRITE_SYNC flags for a bio
based on the ZIO_PRIORITY_* flag passed in.  This had the unnoticed
side-effect of making vdev_disk_io_start() synchronous for
certain I/Os.

This in turn resulted in vdev_disk_io_start() being able to
re-dispatch zio's, which could result in RCU stalls when a disk
was removed from the system.  Additionally, this could negatively
impact performance and explains the performance regressions reported
in both openzfs#3829 and openzfs#3780.

This patch resolves the issue by making the blocking behavior
dependent on a 'wait' flag being passed rather than overloading
the passed bio flags.

Finally, the WRITE_SYNC and READ_SYNC behavior is restricted to
non-rotational devices where there is no benefit to queuing to
aggregate the I/O.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Issue openzfs#3652
Issue openzfs#3780
Issue openzfs#3785
Issue openzfs#3817
Issue openzfs#3821
Issue openzfs#3829
Issue openzfs#3832
Issue openzfs#3870
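To make the commit message above concrete, here is a small self-contained C sketch of the before/after behavior it describes. All names and types (toy_bio, submit_old, submit_new, and so on) are hypothetical stand-ins for the ZFS and block-layer internals; this is an illustration of the idea, not the actual patch.

/* Toy model of the change described above: blocking is driven by an explicit
 * 'wait' argument instead of being inferred from bio flags, and *_SYNC is
 * only applied to non-rotational devices. All names are hypothetical. */
#include <stdbool.h>
#include <stdio.h>

struct toy_bio {
    bool sync_flag;   /* stands in for READ_SYNC / WRITE_SYNC */
    bool rotational;  /* stands in for the device's rotational property */
};

/* Old behavior (sketch): a sync-priority zio always set *_SYNC, and the
 * submit path treated that flag as "submit synchronously". */
static void submit_old(struct toy_bio *bio, bool zio_is_sync_priority)
{
    bio->sync_flag = zio_is_sync_priority;
    if (bio->sync_flag)
        printf("old: submitted synchronously, no queuing/aggregation\n");
    else
        printf("old: queued so the elevator can aggregate\n");
}

/* New behavior (sketch): whether the caller blocks depends only on 'wait',
 * and *_SYNC is restricted to non-rotational devices where aggregation
 * has no benefit. */
static void submit_new(struct toy_bio *bio, bool zio_is_sync_priority, bool wait)
{
    bio->sync_flag = zio_is_sync_priority && !bio->rotational;
    printf("new: queued %s%s\n",
        bio->sync_flag ? "with *_SYNC" : "for aggregation",
        wait ? ", caller waits for completion" : "");
}

int main(void)
{
    struct toy_bio spinning = { .sync_flag = false, .rotational = true };
    struct toy_bio ssd      = { .sync_flag = false, .rotational = false };

    submit_old(&spinning, true);        /* the regression: rotational disk forced sync */
    submit_new(&spinning, true, false); /* after the fix: rotational disk still aggregates */
    submit_new(&ssd, true, false);      /* non-rotational device keeps the *_SYNC hint */
    return 0;
}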
@behlendorf added the Type: Performance label on Oct 2, 2015
@behlendorf
Contributor

@mauromozzarelli do you know how long this operation took for previous releases?

@mauromozzarelli
Author

I believe it was almost immediate, as it was never an issue. Since the upgrade it has at times taken over 2 minutes to delete a 10 GB file. This is happening on a raidz2 pool with 11 disks. Following my report I tested the same operation on a single-disk pool and it takes only 2 to 3 seconds, still bad, but with significantly lower impact.

DeHackEd pushed a commit to DeHackEd/zfs that referenced this issue Oct 4, 2015
@behlendorf
Contributor

@DeHackEd that thought occurred to me but @mauromozzarelli mentioned he's seeing this with 0.6.5.2 which contains a fix for that regression. We'll need to profile this to see what's going on, but to my knowledge we've had no similar reports.
