high z_wr_iss while copying to non-compressed drive #3077
this editor is weird....
me too;
If you could run
@behlendorf correct me if I'm wrong (I am not a programmer), but why isn't the raidz type parsed at import time so that the corresponding function can be called directly, i.e. vdev_raidz_generate_parity_p / vdev_raidz_generate_parity_pq / vdev_raidz_generate_parity_pqr?
referencing #3374
@mailinglists35 you're close, but there's some subtlety here. The stats show that we're spending a lot of time in
Disabling compression also entirely disables the zero detection that would normally convert those zeros into a hole automatically. See
Exactly what behavior were you expecting when you disabled compression?
By disabling compression I expected the quantity of data fed into the file by dd to match the data actually written to disk, in order to measure the array's sequential-write performance. Sequential writes seem a bit slow on raidz3 even though the storage can sustain much higher throughput, so I suspected I am CPU-bound; however, I am unsure whether the CPU is really too slow for the raidz3 calculations or whether the parity-calculation routines could be improved.
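The zero-detection point above suggests a way to benchmark that sidesteps the effect entirely: feed dd incompressible data instead of zeros, so ZFS cannot turn the blocks into holes. A minimal sketch; the output path here is a stand-in for illustration, so point it at a file under the pool's mountpoint when actually benchmarking:

```shell
# /dev/zero input lets ZFS's zero detection convert blocks into holes, which
# skews throughput numbers; /dev/urandom gives incompressible data instead.
# /tmp/zfs-bench.bin is a placeholder path, not a dataset from the thread.
dd if=/dev/urandom of=/tmp/zfs-bench.bin bs=1M count=64 conv=fdatasync 2>&1
# conv=fdatasync forces the data to stable storage before dd reports a rate,
# so the figure reflects actual disk writes rather than cached ones.
```

The reported rate then approximates real sequential-write throughput for that record size, at the cost of some CPU spent generating random bytes.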
It definitely does look like you're CPU-bound, and there is certainly room for performance improvements in the parity calculations. :)
From a support standpoint, I would suggest checking that CPU frequency throttling is disabled and that your CPUs are actually running at full speed. You might be able to buy a bit more performance that way.
The CPU stays at max frequency:
How do I check for throttling? All I see is this:
and
PS: Hyperthreading is disabled.
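On the "how do I check for throttling" question: on a typical Linux box the cpufreq sysfs tree and /proc/cpuinfo give a quick answer. A sketch, assuming the standard paths (they can be absent on VMs or older kernels, hence the fallback):

```shell
# Show the active frequency governor per core; "performance" means the
# kernel is not deliberately scaling clocks down.
grep . /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor 2>/dev/null \
    || echo "cpufreq sysfs not available on this system"

# Current core clocks as the kernel sees them; a spread of values well
# below the rated frequency under load suggests throttling.
awk -F: '/MHz/ {print $2}' /proc/cpuinfo | sort | uniq -c
```

If the governor is `powersave` or `ondemand`, switching it to `performance` during the benchmark removes one variable.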
I'm also seeing this while transforming a VM template into a VM, so it's a 10G data copy from the pool to itself. The pool is almost empty (1.6T free, 2G total).
note to self:
This is still happening in 0.8.0-rc1 (rpm from the zfs-test repo) on pretty modern hardware and a decent kernel version (Oracle Linux UEK 4.1).

perf report, zoomed in on the entry with a "self" column at 28% CPU:

and the next two at 9%:
Most of the other small entries also end up in
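For anyone wanting to reproduce a report like the one above, here is a hedged sketch of the usual perf workflow; the 5-second window and the /tmp output path are arbitrary choices for illustration, not taken from the thread:

```shell
# System-wide CPU profiling with call graphs, guarded so the script
# degrades gracefully when perf is missing or sampling is not permitted.
if command -v perf >/dev/null 2>&1; then
    # Sample all CPUs for 5 seconds while the copy workload is running.
    perf record -a -g -o /tmp/perf.data -- sleep 5 2>/dev/null || true
    # Summarize; hot kernel symbols (e.g. those hit by z_wr_iss worker
    # threads) appear at the top of the "self" column.
    test -s /tmp/perf.data && perf report -i /tmp/perf.data --stdio | head -n 25 || true
else
    echo "perf not installed; install the perf / linux-tools package for your kernel"
fi
```

Zooming into a symbol in the interactive `perf report` TUI gives the per-callee breakdown shown in the comments above.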
I have a similar issue: high load whenever I do some writes to the volume via iSCSI (LIO), and it's mostly osq_lock. I have an Intel E5-2640 v4 and 256GB RAM.
This issue has been automatically marked as "stale" because it has not had any activity for a while. It will be closed in 90 days if no further activity occurs. Thank you for your contributions.
While copying a large file to a volume that was not compressed, z_wr_iss still maxed out.
This is iotop:
This is the zfs layout:
I was copying to zdata/media over Samba.
This is the layout: