
High z_wr_iss while copying to non-compressed drive #3077

Closed

theking2 opened this issue Feb 5, 2015 · 18 comments
Labels
Status: Inactive (Not being actively updated) · Status: Stale (No recent activity for issue) · Type: Performance (Performance improvement or performance problem)

Comments

@theking2 commented Feb 5, 2015

While copying a large file to a volume that was not compressed, the z_wr_iss threads still maxed out.
This is iotop:

Total DISK READ :     180.16 K/s | Total DISK WRITE :       9.18 M/s
Actual DISK READ:     197.68 K/s | Actual DISK WRITE:      18.98 M/s
  TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN     IO>    COMMAND
  420 be/7 root       62.56 K/s    0.00 B/s  0.00 % 87.40 % [z_wr_iss/2]
  419 be/7 root       47.54 K/s    0.00 B/s  0.00 % 86.82 % [z_wr_iss/1]
  418 be/7 root       70.06 K/s    0.00 B/s  0.00 % 84.58 % [z_wr_iss/0]

This is the compression layout of the ZFS datasets:

# zfs get compression
NAME         PROPERTY     VALUE     SOURCE
zdata        compression  off       default
zdata/bkp    compression  lz4       local
zdata/home   compression  lz4       local
zdata/media  compression  off       default
zdata/sw     compression  off       default

I was copying to zdata/media over Samba.
This is the pool layout:

# zpool iostat -v
                                                              capacity     operations    bandwidth
pool                                                       alloc   free   read  write   read  write
---------------------------------------------------------  -----  -----  -----  -----  -----  -----
zdata                                                      2.32T  1.30T     10     86  14.9K  1.03M
  raidz1                                                   2.32T  1.30T     10     86  14.8K  1.03M
    ata-WDC_WD10EADS-00P8B0_WD-WMAVU0350318                    -      -      8     21  4.70K   358K
    ata-WDC_WD1003FBYX-01Y7B1_WD-WCAW36807048                  -      -      5     22  3.11K   357K
    ata-WDC_WD10EADS-00P8B0_WD-WMAVU0606317                    -      -      8     21  4.64K   358K
    ata-WDC_WD10EADS-00P8B0_WD-WCAVU0381916                    -      -      5     21  3.09K   357K
logs                                                           -      -      -      -      -      -
  ata-SAMSUNG_MMCRE64G5MXP-0VB_YC1F93G940SY940C0086-part5     4K  1016M      0      0    107      5
cache                                                          -      -      -      -      -      -
  ata-SAMSUNG_MMCRE64G5MXP-0VB_YC1F93G940SY940C0086-part6  4.68G  44.3G      3      7  4.48K   627K
---------------------------------------------------------  -----  -----  -----  -----  -----  -----
@theking2 (author) commented Feb 5, 2015

this editor is weird....

@behlendorf added the Type: Performance label on Feb 6, 2015
@mailinglists35 commented:

Me too:
z_wr_iss/0 goes to 80%...95% CPU usage during sequential writing of zeroes to a file in a compression=off dataset on an 8-disk raidz3 pool, on a 2-core Xeon 3060 @ 2.40GHz (4800 bogomips).

@behlendorf (Contributor) commented:

If you could run perf top during the test, that should clearly show in which function the CPU time is being consumed.

@mailinglists35 commented:

Samples: 385K of event 'cpu-clock', Event count (approx.): 7951639785
  41.81%  [kernel]            [k] vdev_raidz_generate_parity
   8.03%  [kernel]            [k] copy_user_generic_string
   6.43%  [kernel]            [k] _raw_spin_unlock_irqrestore
   4.91%  [kernel]            [k] fletcher_4_native
   4.57%  [kernel]            [k] __clear_user
   2.08%  [kernel]            [k] finish_task_switch
   1.98%  [kernel]            [k] scsi_request_fn
   1.81%  [kernel]            [k] __do_softirq
   1.72%  [kernel]            [k] mutex_lock
   1.08%  [kernel]            [k] _raw_spin_lock
   0.93%  [kernel]            [k] mutex_unlock
   0.90%  [kernel]            [k] get_page_from_freelist
   0.81%  [kernel]            [k] tick_nohz_idle_enter
   0.80%  [kernel]            [k] memmove
   0.76%  [kernel]            [k] spl_kmem_cache_alloc
   0.66%  [kernel]            [k] kmem_cache_alloc
   0.64%  [kernel]            [k] kmem_cache_free
   0.61%  [kernel]            [k] tick_nohz_idle_exit
   0.50%  [kernel]            [k] spl_kmem_cache_free
   0.49%  [kernel]            [k] free_hot_cold_page
   0.44%  [kernel]            [k] zio_create
   0.42%  [kernel]            [k] lz4_compress_zfs
   0.41%  [kernel]            [k] __free_pages
   0.35%  [kernel]            [k] zio_done
   0.34%  [kernel]            [k] kfree
   0.34%  [kernel]            [k] memset
   0.32%  [kernel]            [k] vdev_queue_io_to_issue
   0.31%  [kernel]            [k] __vdev_disk_physio
   0.30%  [kernel]            [k] native_read_tsc
no symbols found in /bin/dd, maybe install a debug package?
root@voyage:/tmp# dd if=/dev/zero bs=4M of=/srv/big/nocomp/deleteme
^C2658+0 records in
2658+0 records out
11148460032 bytes (11 GB) copied, 208.302 s, 53.5 MB/s

@mailinglists35 commented:

@behlendorf correct me if I'm wrong (I am not a programmer):

Does the perf top output mean that the vdev_raidz_generate_parity function is being called all the time? So with each and every I/O, the raidz type is checked, and that is what takes so much time?

If yes, why isn't the raidz type parsed once at import time, so that the corresponding function (vdev_raidz_generate_parity_p / vdev_raidz_generate_parity_pq / vdev_raidz_generate_parity_pqr) can be called directly?

https://github.com/zfsonlinux/zfs/blob/544f7184f8541bbfd7c739f7e01fc9b5b6e57c5e/module/zfs/vdev_raidz.c#L725-L745
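
For reference, the code at that link is just a dispatch on the number of parity columns. Paraphrased from the linked vdev_raidz.c (details may differ between versions):

void
vdev_raidz_generate_parity(raidz_map_t *rm)
{
        switch (rm->rm_firstdatacol) {
        case 1:
                vdev_raidz_generate_parity_p(rm);       /* raidz1: P only */
                break;
        case 2:
                vdev_raidz_generate_parity_pq(rm);      /* raidz2: P and Q */
                break;
        case 3:
                vdev_raidz_generate_parity_pqr(rm);     /* raidz3: P, Q and R */
                break;
        default:
                cmn_err(CE_PANIC, "invalid RAID-Z configuration");
        }
}

The switch itself costs a handful of instructions per I/O; as the next comments explain, the time goes into the parity loops that the compiler has inlined into this function, not into the dispatch.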

@kernelOfTruth (Contributor) commented:

referencing #3374

@behlendorf (Contributor) commented:

@mailinglists35 you're close, but there's some subtlety here.

The stats show that we're spending a lot of time in vdev_raidz_generate_parity, but not where you think. The vdev_raidz_generate_parity_p* functions have almost certainly been inlined by the compiler, so they're not showing up in the perf top output. We're spending all of our time there calculating the parity over a bunch of zeros.

Disabling compression entirely disables the zero detection which would normally convert these zeros to a hole automatically; see zio_write_bp_init()->zio_compress_data(). So this is to be expected: you've asked it to write zeros to disk as fast as possible by disabling all compression.
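
As a rough sketch of that path (simplified and paraphrased from zio_compress_data(); the exact code differs between versions, and the callback shown here is an approximation, not the literal source):

/*
 * With any compression setting other than 'off', an all-zero write is
 * detected up front and turned into a hole, so no physical I/O is done
 * and no parity is computed for it.  With compression=off this check
 * is skipped and the zeros are written like real data.
 */
static int
zio_compress_zeroed_cb(void *data, size_t len, void *private)
{
        uint64_t *word, *end = (uint64_t *)((char *)data + len);

        for (word = (uint64_t *)data; word < end; word++)
                if (*word != 0)
                        return (1);     /* non-zero data found, stop early */
        return (0);                     /* this chunk is all zeros */
}

size_t
zio_compress_data(enum zio_compress c, abd_t *src, void *dst, size_t s_len)
{
        /* A return of 0 tells zio_write_bp_init() to write a hole. */
        if (abd_iterate_func(src, 0, s_len, zio_compress_zeroed_cb, NULL) == 0)
                return (0);

        /* ... otherwise the configured compressor runs on the buffer ... */
        return (s_len);                 /* placeholder for the real call */
}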

Exactly what behavior were you expecting to get when disabling compression?

@mailinglists35 commented:

Exactly what behavior were you expecting to get when disabling compression?

By disabling compression I expected the quantity of input data fed by dd into a file to match the data actually written to disk, in order to measure the array's sequential write performance.

I find sequential writes a bit slow on raidz3 even though the storage can sustain much higher throughput, so I suspected that I am CPU bound; however, I am unsure whether the CPU is really too slow for raidz3 calculations or whether the parity calculation routines could be improved.

@behlendorf (Contributor) commented:

It definitely does look like you're CPU bound, and there is certainly room for performance improvements in the parity calculations. :)

@DeHackEd (Contributor) commented:

From a support standpoint, I would suggest checking that CPU throttling is disabled and your CPUs are actually running at full speed. You might be able to buy a bit more performance from it.

@mailinglists35 commented:

@DeHackEd,

checking that CPU throttling is disabled and your CPUs are actually running at full speed

The CPU stays at max frequency:

Every 2.0s: grep -i hz /proc/cpuinfo                                   Fri Dec 25 01:58:29 2015

model name      : Intel(R) Xeon(R) CPU            3060  @ 2.40GHz
cpu MHz         : 2400.136
model name      : Intel(R) Xeon(R) CPU            3060  @ 2.40GHz
cpu MHz         : 2400.136

How do I check for throttling?

All I see is this:

root@zfs:~# cpufreq-info
cpufrequtils 008: cpufreq-info (C) Dominik Brodowski 2004-2009
Report errors and bugs to cpufreq@vger.kernel.org, please.
analyzing CPU 0:
  no or unknown cpufreq driver is active on this CPU
  maximum transition latency: 4294.55 ms.
analyzing CPU 1:
  no or unknown cpufreq driver is active on this CPU
  maximum transition latency: 4294.55 ms.

and

root@zfs:~# dmesg|grep -i gover
[    0.081669] cpuidle: using governor ladder
[    0.081716] cpuidle: using governor menu
root@zfs:~#

PS: Hyperthreading is disabled.

@alarig commented May 15, 2016

I’m also seeing this while transforming a VM template into a VM, i.e. a 10G data copy from the pool to itself. The pool is almost empty (1.6T free, 2G total).
During the operation, I saw all CPUs at 100% and z_wr_iss was consuming them.
I have an eight-core Xeon X3450 without any cpufreq driver.

@mailinglists35 commented:

behlendorf commented on Jun 23, 2015
If you could run perf top during the test that should clearly show in which function the CPU time is being consumed.

note to self: perf top in kernel 4.x is now perf top -e cycles

@mailinglists35 commented:

This is still happening in 0.8.0-rc1 (rpm from the zfs-test repo), on pretty modern hardware and a decent kernel version (Oracle Linux UEK 4.1).

[root@bsynchq01 ~]# dd if=/dev/zero of=/dev/nvme1n1p2 bs=16M count=512 status=progress oflag=direct,sync,nonblock conv=fsync
7063207936 bytes (7.1 GB) copied, 4.022276 s, 1.8 GB/s
512+0 records in
512+0 records out
8589934592 bytes (8.6 GB) copied, 4.93092 s, 1.7 GB/s

[root@bsynchq01 ~]# parted /dev/nvme1n1 align-check opt 2
2 aligned

[root@bsynchq01 ~]# zpool create -o ashift=12 nvme1 /dev/nvme1n1p2 

[root@bsynchq01 ~]# dd if=/dev/zero of=/nvme1/delete bs=16M count=512 status=progress oflag=direct,sync,nonblock conv=fsync
8405385216 bytes (8.4 GB) copied, 10.114978 s, 831 MB/s
512+0 records in
512+0 records out
8589934592 bytes (8.6 GB) copied, 10.326 s, 832 MB/s

iostat during raw device write:
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.05    0.00    1.51    3.46    0.00   94.98

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
nvme1n1           0.00     0.00    0.00 13692.00     0.00 1752576.00   256.00    39.41    2.88    0.00    2.88   0.06  78.30


iostat during ZFS write:
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.05    0.00   22.18    0.75    0.00   77.02

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
nvme1n1           0.00     9.67    0.00 10885.00     0.00 976054.50   179.34    13.17    1.21    0.00    1.21   0.05  54.70


[root@bsynchq01 ~]# perf record -ag dd if=/dev/zero of=/nvme1/delete bs=16M count=512 status=progress oflag=direct,sync,nonblock conv=fsync

Samples: 50K of event 'cycles', Event count (approx.): 110581406332
  Children      Self  Command          Shared Object                       Symbol
-   47.11%     0.00%  z_wr_iss         [kernel.kallsyms]                   [k] ret_from_fork
     ret_from_fork
-   47.11%     0.00%  z_wr_iss         [kernel.kallsyms]                   [k] kthread
     kthread
     ret_from_fork
-   47.07%     0.04%  z_wr_iss         [kernel.kallsyms]                   [k] taskq_thread
     taskq_thread
     kthread
     ret_from_fork
+   46.17%     0.08%  z_wr_iss         [kernel.kallsyms]                   [k] zio_execute
-   34.14%     0.01%  z_wr_iss         [kernel.kallsyms]                   [k] zio_vdev_io_start
   - zio_vdev_io_start
      - 99.04% zio_nowait
         - vdev_mirror_io_start
           zio_vdev_io_start
           zio_execute
           taskq_thread
           kthread
           ret_from_fork
      - 0.95% zio_execute
           taskq_thread
           kthread
           ret_from_fork
+   34.08%     0.00%  z_wr_iss         [kernel.kallsyms]                   [k] vdev_mirror_io_start
+   33.83%     0.00%  z_wr_iss         [kernel.kallsyms]                   [k] zio_nowait
+   33.60%     0.01%  z_wr_iss         [kernel.kallsyms]                   [k] vdev_queue_io
+   32.15%     0.30%  z_wr_iss         [kernel.kallsyms]                   [k] mutex_lock
+   31.84%     0.01%  z_wr_iss         [kernel.kallsyms]                   [k] __mutex_lock_slowpath
+   31.76%     0.09%  z_wr_iss         [kernel.kallsyms]                   [k] mutex_optimistic_spin
+   28.69%    28.69%  z_wr_iss         [kernel.kallsyms]                   [k] osq_lock
+   17.50%     0.00%  z_wr_int         [kernel.kallsyms]                   [k] ret_from_fork
+   17.50%     0.00%  z_wr_int         [kernel.kallsyms]                   [k] kthread
+   17.49%     0.02%  z_wr_int         [kernel.kallsyms]                   [k] taskq_thread
+   17.22%     0.02%  z_wr_int         [kernel.kallsyms]                   [k] zio_execute
+   15.41%     0.01%  z_wr_int         [kernel.kallsyms]                   [k] zio_vdev_io_done
+   15.38%     0.02%  z_wr_int         [kernel.kallsyms]                   [k] vdev_queue_io_done
+   13.54%     0.00%  dd               [kernel.kallsyms]                   [k] system_call_fastpath
+   12.65%     0.25%  z_wr_int         [kernel.kallsyms]                   [k] mutex_lock
+   12.40%     0.00%  z_wr_int         [kernel.kallsyms]                   [k] __mutex_lock_slowpath
+   12.35%     0.08%  z_wr_int         [kernel.kallsyms]                   [k] mutex_optimistic_spin
+   10.43%    10.43%  z_wr_int         [kernel.kallsyms]                   [k] osq_lock

[root@bsynchq01 ~]# lscpu 
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                20
On-line CPU(s) list:   0-19
Thread(s) per core:    1
Core(s) per socket:    10
Socket(s):             2
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 85
Model name:            Intel(R) Xeon(R) Silver 4114 CPU @ 2.20GHz
Stepping:              4
CPU MHz:               2201.000
CPU max MHz:           2201.0000
CPU min MHz:           800.0000
BogoMIPS:              4400.00
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              1024K
L3 cache:              14080K
NUMA node0 CPU(s):     0-19
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm flush_l1d constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch ida arat epb invpcid_single pln pts dtherm intel_pt ibrs stibp ibpb ssbd pti tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx avx512f rdseed adx smap clflushopt clwb avx512cd xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc


[root@bsynchq01 ~]# free 
              total        used        free      shared  buff/cache   available
Mem:      263522180    15501164   239147812     6576244     8873204   240646400
Swap:       4194300           0     4194300

@mailinglists35 commented:

perf report, zoomed in on the entry with 28% in the "Self" column:

-   28.55%    28.55%  z_wr_iss         [kernel.kallsyms]                   [k] osq_lock
   - osq_lock
      - mutex_optimistic_spin
        __mutex_lock_slowpath
      - mutex_lock
         - 89.45% vdev_queue_io
              zio_vdev_io_start
              zio_nowait
              vdev_mirror_io_start
              zio_vdev_io_start
              zio_execute
              taskq_thread
              kthread
              ret_from_fork
         - 10.16% vdev_queue_io_to_issue
              vdev_queue_io
              zio_vdev_io_start
              zio_nowait
              vdev_mirror_io_start
              zio_vdev_io_start
              zio_execute
              taskq_thread
              kthread
              ret_from_fork

@mailinglists35 commented:

and the next two, at around 9%:

-    9.89%     9.89%  z_wr_int         [kernel.kallsyms]                   [k] osq_lock
     osq_lock
     mutex_optimistic_spin
     __mutex_lock_slowpath
   - mutex_lock
      - 64.22% vdev_queue_io_done
           zio_vdev_io_done
           zio_execute
           taskq_thread
           kthread
           ret_from_fork
      - 35.11% vdev_queue_io_to_issue
           vdev_queue_io_done
           zio_vdev_io_done
           zio_execute
           taskq_thread
           kthread
           ret_from_fork
-    9.41%     9.41%  z_wr_iss         [kernel.kallsyms]                   [k] memcpy_erms
   - memcpy_erms
      - 88.70% abd_iterate_func
           abd_copy_from_buf_off
           arc_write_ready
           zio_ready
           zio_execute
           taskq_thread
           kthread
           ret_from_fork
      - 11.26% abd_iterate_func2
         - abd_copy_off
            - 99.37% vdev_queue_io_to_issue
                 vdev_queue_io
                 zio_vdev_io_start
                 zio_nowait
                 vdev_mirror_io_start
                 zio_vdev_io_start
                 zio_execute
                 taskq_thread
                 kthread
                 ret_from_fork
            - 0.63% arc_write_ready
                 zio_ready
                 zio_execute
                 taskq_thread
                 kthread
                 ret_from_fork

Most of the other small entries also end up in vdev_queue_io_to_issue and vdev_queue_io; a sketch of that lock pattern follows below.
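
A minimal sketch of why the threads pile up there, paraphrased from vdev_queue.c (not the literal source; treat the details as approximate):

/*
 * Every zio issued to a leaf vdev funnels through one per-vdev mutex.
 * With many z_wr_iss/z_wr_int threads and a fast device, the threads
 * spend their time fighting for this lock (mutex_lock ->
 * mutex_optimistic_spin -> osq_lock in the traces above) instead of
 * doing useful work.
 */
zio_t *
vdev_queue_io(zio_t *zio)
{
        vdev_queue_t *vq = &zio->io_vd->vdev_queue;
        zio_t *nio;

        mutex_enter(&vq->vq_lock);              /* single lock per vdev */
        vdev_queue_io_add(vq, zio);             /* enqueue this zio */
        nio = vdev_queue_io_to_issue(vq);       /* pick next zio to issue */
        mutex_exit(&vq->vq_lock);

        return (nio);
}

The completion side is consistent with this: vdev_queue_io_done() takes the same vq_lock, which is why the z_wr_int threads show the same osq_lock stack.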

@colttt commented Oct 22, 2018

I have a similar issue: high load when doing writes to the volume via iSCSI (LIO), and it's mostly osq_lock. I have an Intel E5-2640 v4 and 256GB RAM.

uname -a
Linux zfs-serv3 4.9.0-6-amd64 #1 SMP Debian 4.9.88-1+deb9u1 (2018-05-07) x86_64 GNU/Linux

dpkg -l |grep zfs
ii  libzfs2linux                  0.7.9-3~bpo9+1                 amd64        OpenZFS filesystem library for Linux
ii  zfs-dkms                      0.7.9-3~bpo9+1                 all          OpenZFS filesystem kernel modules for Linux
ii  zfs-zed                       0.7.9-3~bpo9+1                 amd64        OpenZFS Event Daemon
ii  zfsutils-linux                0.7.9-3~bpo9+1                 amd64        command-line tools to manage OpenZFS filesystems

zpool status
  pool: vm_storage
 state: ONLINE
  scan: scrub repaired 0B in 0h20m with 0 errors on Sun Oct 14 00:44:05 2018
config:

        NAME           STATE     READ WRITE CKSUM
        vm_storage     ONLINE       0     0     0
          mirror-0     ONLINE       0     0     0
            j3d03-hdd  ONLINE       0     0     0
            j4d03-hdd  ONLINE       0     0     0
          mirror-1     ONLINE       0     0     0
            j3d04-hdd  ONLINE       0     0     0
            j4d04-hdd  ONLINE       0     0     0
          mirror-2     ONLINE       0     0     0
            j3d05-hdd  ONLINE       0     0     0
            j4d05-hdd  ONLINE       0     0     0
          mirror-3     ONLINE       0     0     0
            j3d06-hdd  ONLINE       0     0     0
            j4d06-hdd  ONLINE       0     0     0
          mirror-4     ONLINE       0     0     0
            j3d07-hdd  ONLINE       0     0     0
            j4d07-hdd  ONLINE       0     0     0
          mirror-5     ONLINE       0     0     0
            j3d08-hdd  ONLINE       0     0     0
            j4d08-hdd  ONLINE       0     0     0
          mirror-6     ONLINE       0     0     0
            j3d09-hdd  ONLINE       0     0     0
            j4d09-hdd  ONLINE       0     0     0
          mirror-7     ONLINE       0     0     0
            j3d10-hdd  ONLINE       0     0     0
            j4d10-hdd  ONLINE       0     0     0
          mirror-8     ONLINE       0     0     0
            j3d11-hdd  ONLINE       0     0     0
            j4d11-hdd  ONLINE       0     0     0
          mirror-9     ONLINE       0     0     0
            j3d12-hdd  ONLINE       0     0     0
            j4d12-hdd  ONLINE       0     0     0
          mirror-10    ONLINE       0     0     0
            j3d13-hdd  ONLINE       0     0     0
            j4d13-hdd  ONLINE       0     0     0
          mirror-11    ONLINE       0     0     0
            j3d14-hdd  ONLINE       0     0     0
            j4d14-hdd  ONLINE       0     0     0
          mirror-12    ONLINE       0     0     0
            j3d15-hdd  ONLINE       0     0     0
            j4d15-hdd  ONLINE       0     0     0
          mirror-13    ONLINE       0     0     0
            j3d16-hdd  ONLINE       0     0     0
            j4d16-hdd  ONLINE       0     0     0
          mirror-14    ONLINE       0     0     0
            j3d17-hdd  ONLINE       0     0     0
            j4d17-hdd  ONLINE       0     0     0
          mirror-15    ONLINE       0     0     0
            j3d18-hdd  ONLINE       0     0     0
            j4d18-hdd  ONLINE       0     0     0
          mirror-16    ONLINE       0     0     0
            j3d19-hdd  ONLINE       0     0     0
            j4d19-hdd  ONLINE       0     0     0
        logs
          mirror-17    ONLINE       0     0     0
            j3d00-ssd  ONLINE       0     0     0
            j4d00-ssd  ONLINE       0     0     0
        cache
          j3d02-ssd    ONLINE       0     0     0
          j4d02-ssd    ONLINE       0     0     0

errors: No known data errors


@stale bot commented Aug 25, 2020

This issue has been automatically marked as "stale" because it has not had any activity for a while. It will be closed in 90 days if no further activity occurs. Thank you for your contributions.

The stale bot added the Status: Stale label on Aug 25, 2020, and closed this issue on Nov 23, 2020.