disk_wait displayed by zpool iostat depends on sampling interval #7694

Closed
ak100 opened this issue Jul 9, 2018 · 11 comments

Comments

ak100 commented Jul 9, 2018

System information

Type Version/Name
Distribution Name Scientific Linux Fermi release 6.9 (Ramsey)
Linux Kernel 2.6.32-696.23.1.el6.x86_64
ZFS Version 0.7.9-1
SPL Version 0.7.9-1

Describe the problem you're observing

The disk_wait latency displayed by zpool iostat depends on the sampling interval of iostat (it should not).
The load is constant during the test: I'm reading 16 10 GB files with dd, and all samples were taken during the same run.

There is a computation error in the disk wait time: it should not depend on measurement or display options.
syncq_wait/asyncq_wait may need a check too.

Displayed disk wait vs. sampling interval:

interval    wait time
5 sec       8 ms
2 sec       20 ms
1 sec       40 ms
500 msec    65 ms
100 msec    ~240 ms (which is greater than the sampling interval)
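
A quick cross-check on these numbers (a sketch of my own, not something zpool reports): multiplying each displayed wait by its sampling interval gives a roughly constant value in the 24-40 ms range, consistent with a per-request average being wrongly divided by the interval length.

# Hypothetical cross-check (not from the zpool source): if a per-request
# latency were divided by the interval length, multiplying the displayed
# value back by the interval should give a roughly constant number.
samples = [(5.0, 8), (2.0, 20), (1.0, 40), (0.5, 65), (0.1, 240)]  # (interval in s, shown wait in ms)

for interval, shown_ms in samples:
    print(f"{interval:>4} s interval: {shown_ms:>3} ms shown -> {shown_ms * interval:.1f} ms implied per request")
# Prints values clustered around 24-40 ms, close to the ~40-45 ms seen
# with a 1 s interval and in the latency histogram below.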

Describe how to reproduce the problem

Read 10 GB files in 16 parallel streams to create contention in reads:

   for f in {0..15} ; do dd bs=1M if=/data1/test-1M/dd/f.20.$f of=/dev/null & done

The zpool is a raidz2 on 12 HDDs.

Run zpool iostat with sampling intervals of 5 sec, 2 sec, 1 sec, 0.5 sec, and 0.1 sec:

# for x in 5 2 1 0.5 0.1 ; do zpool iostat -y -l data1 $x 5 ; done
              capacity     operations     bandwidth    total_wait     disk_wait    syncq_wait    asyncq_wait  scrub
pool        alloc   free   read  write   read  write   read  write   read  write   read  write   read  write   wait
----------  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----
data1       1.48T  85.5T  7.95K      0   814M      0    8ms      -    8ms      -  446ns      -    9us      -      -
data1       1.48T  85.5T  7.83K      0   801M      0    8ms      -    8ms      -  779ns      -    1us      -      -
data1       1.48T  85.5T  7.79K      0   797M      0    8ms      -    8ms      -  335ns      -  573ns      -      -
data1       1.48T  85.5T  7.82K      0   799M      0    8ms      -    8ms      -  740ns      -    7us      -      -
data1       1.48T  85.5T  7.81K      0   799M      0    7ms      -    7ms      -  416ns      -   23us      -      -
              capacity     operations     bandwidth    total_wait     disk_wait    syncq_wait    asyncq_wait  scrub
pool        alloc   free   read  write   read  write   read  write   read  write   read  write   read  write   wait
----------  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----
data1       1.48T  85.5T  8.30K      0   849M      0   15ms      -   15ms      -  767ns      -  826ns      -      -
data1       1.48T  85.5T  8.37K      0   856M      0   19ms      -   19ms      -  907ns      -  892ns      -      -
data1       1.48T  85.5T  7.94K      0   812M      0   21ms      -   21ms      -    1us      -   44us      -      -
data1       1.48T  85.5T  8.03K      0   822M      0   20ms      -   20ms      -    3us      -  864ns      -      -
data1       1.48T  85.5T  8.39K      0   857M      0   18ms      -   18ms      -    2us      -  848ns      -      -
              capacity     operations     bandwidth    total_wait     disk_wait    syncq_wait    asyncq_wait  scrub
pool        alloc   free   read  write   read  write   read  write   read  write   read  write   read  write   wait
----------  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----
data1       1.48T  85.5T  7.94K      0   812M      0   32ms      -   32ms      -      -      -    1us      -      -
data1       1.48T  85.5T  7.44K      0   761M      0   43ms      -   43ms      -    2us      -   96us      -      -
data1       1.48T  85.5T  7.48K      0   766M      0   41ms      -   41ms      -      -      -   10us      -      -
data1       1.48T  85.5T  8.07K      0   824M      0   38ms      -   38ms      -      -      -  186us      -      -
data1       1.48T  85.5T  8.27K      0   846M      0   36ms      -   35ms      -      -      -    1ms      -      -
              capacity     operations     bandwidth    total_wait     disk_wait    syncq_wait    asyncq_wait  scrub
pool        alloc   free   read  write   read  write   read  write   read  write   read  write   read  write   wait
----------  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----
data1       1.48T  85.5T  8.25K      0   843M      0   80ms      -   79ms      -      -      -  745us      -      -
data1       1.48T  85.5T  7.78K      0   795M      0   66ms      -   65ms      -      -      -  420us      -      -
data1       1.48T  85.5T  7.68K      0   786M      0   64ms      -   63ms      -   12us      -  497us      -      -
data1       1.48T  85.5T  7.59K      0   778M      0   69ms      -   67ms      -      -      -    1ms      -      -
data1       1.48T  85.5T  7.57K      0   773M      0   54ms      -   54ms      -      -      -    3us      -      -
              capacity     operations     bandwidth    total_wait     disk_wait    syncq_wait    asyncq_wait  scrub
pool        alloc   free   read  write   read  write   read  write   read  write   read  write   read  write   wait
----------  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----
data1       1.48T  85.5T  9.22K      0   944M      0  224ms      -  224ms      -      -      -   16us      -      -
data1       1.48T  85.5T  8.63K      0   884M      0  283ms      -  283ms      -      -      -  317us      -      -
data1       1.48T  85.5T  9.12K      0   934M      0  215ms      -  215ms      -      -      -   80us      -      -
data1       1.48T  85.5T  9.25K      0   947M      0  242ms      -  241ms      -      -      -   65us      -      -
data1       1.48T  85.5T  7.25K      0   742M      0  257ms      -  257ms      -      -      -   16us      -      -

Run zpool iostat with a sampling interval of 100 sec:

# zpool iostat -y -l data1 100 5
              capacity     operations     bandwidth    total_wait     disk_wait    syncq_wait    asyncq_wait  scrub
pool        alloc   free   read  write   read  write   read  write   read  write   read  write   read  write   wait
----------  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----
data1       1.48T  85.5T  8.05K      0   823M      0  385us      -  385us      -   22ns      -  241ns      -      -

Include any warning/errors/backtraces from the system logs

ak100 commented Jul 9, 2018

I should add that the wait-time histogram profile does not depend (much) on the sampling period.
It looks like the disk IO time peaks between 33-67 ms, so the disk wait time of 40 ms shown for a 1 sec sampling interval seems to be right for

zpool iostat -y -l <pool> 1

Histogram:

# zpool iostat -y data1 -w 5

data1        total_wait     disk_wait    sync_queue    async_queue
latency      read  write   read  write   read  write   read  write  scrub
----------  -----  -----  -----  -----  -----  -----  -----  -----  -----
1ns             0      0      0      0      0      0      0      0      0
3ns             0      0      0      0      0      0      0      0      0
7ns             0      0      0      0      0      0      0      0      0
15ns            0      0      0      0      0      0      0      0      0
31ns            0      0      0      0      0      0      0      0      0
63ns            0      0      0      0      0      0      0      0      0
127ns           0      0      0      0      0      0      0      0      0
255ns           0      0      0      0      0      0      0      0      0
511ns           0      0      0      0      0      0      0      0      0
1us             0      0      0      0      0      0    548      0      0
2us             0      0      0      0      0      0  6.11K      0      0
4us             0      0      0      0      1      0  1.25K      0      0
8us             0      0      0      0      0      0     38      0      0
16us            0      0      0      0      0      0      9      0      0
32us            0      0      0      0      0      0      2      0      0
65us            0      0      0      0      0      0      0      0      0
131us           0      0      0      0      0      0      0      0      0
262us           0      0      0      0      0      0      0      0      0
524us         740      0    741      0      0      0      0      0      0
1ms           343      0    343      0      0      0      1      0      0
2ms           379      0    379      0      0      0      1      0      0
4ms           496      0    496      0      0      0      2      0      0
8ms           725      0    725      0      0      0      2      0      0
16ms        1.15K      0  1.15K      0      0      0      0      0      0
33ms        1.39K      0  1.39K      0      0      0      0      0      0
67ms        1.35K      0  1.35K      0      0      0      0      0      0
134ms        1018      0   1017      0      0      0      0      0      0
268ms         374      0    374      0      0      0      0      0      0
536ms          76      0     76      0      0      0      0      0      0
1s              3      0      3      0      0      0      0      0      0
2s              0      0      0      0      0      0      0      0      0
4s              0      0      0      0      0      0      0      0      0
8s              0      0      0      0      0      0      0      0      0
17s             0      0      0      0      0      0      0      0      0
34s             0      0      0      0      0      0      0      0      0
68s             0      0      0      0      0      0      0      0      0
137s            0      0      0      0      0      0      0      0      0
-------------------------------------------------------------------------
^C

ak100 closed this as completed Jul 9, 2018

ak100 commented Jul 9, 2018

Similarly, the asyncq_read pending count depends on the sampling interval.

It may be my misunderstanding, but how is it possible to have ~40 pending async reads with a 10 sec sampling period, 200 to 400 with 1 sec sampling, and 2K to 3K within a 100 ms sampling window?

# zpool iostat -y -q data1 10 5
              capacity     operations     bandwidth    syncq_read    syncq_write   asyncq_read  asyncq_write   scrubq_read
pool        alloc   free   read  write   read  write   pend  activ   pend  activ   pend  activ   pend  activ   pend  activ
----------  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----
data1       1.48T  85.5T  8.17K      0   835M      0      0      0      0      0      0     39      0      0      0      0
data1       1.48T  85.5T  8.18K      0   837M      0      0      0      0      0      0     41      0      0      0      0
data1       1.48T  85.5T  7.87K      0   804M      0      0      0      0      0      0     35      0      0      0      0
data1       1.48T  85.5T  7.88K      0   806M      0      0      0      0      0      0     36      0      0      0      0
data1       1.48T  85.5T  7.94K      0   812M      0      0      0      0      0      0     39      0      0      0      0


# zpool iostat -y -q data1 1 10
              capacity     operations     bandwidth    syncq_read    syncq_write   asyncq_read  asyncq_write   scrubq_read
pool        alloc   free   read  write   read  write   pend  activ   pend  activ   pend  activ   pend  activ   pend  activ
----------  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----
data1       1.48T  85.5T  8.26K      0   846M      0      0      0      0      0      0    293      0      0      0      0
data1       1.48T  85.5T  8.60K      0   880M      0      0      0      0      0      0    263      0      0      0      0
data1       1.48T  85.5T  8.66K      0   887M      0      0      0      0      0      0    416      0      0      0      0
data1       1.48T  85.5T  7.89K      0   808M      0      0      0      0      0      3    430      0      0      0      0
data1       1.48T  85.5T  7.71K      0   790M      0      0      0      0      0      0    280      0      0      0      0
data1       1.48T  85.5T  8.04K      0   823M      0      0      0      0      0      0    186      0      0      0      0
data1       1.48T  85.5T  8.75K      0   896M      0      0      0      0      0      0    147      0      0      0      0
data1       1.48T  85.5T  8.11K      0   830M      0      0      0      0      0      0    396      0      0      0      0
data1       1.48T  85.5T  7.85K      0   804M      0      0      0      0      0      0    341      0      0      0      0
data1       1.48T  85.5T  7.50K      0   768M      0      0      0      0      0      0    218      0      0      0      0

# zpool iostat -y -q data1 0.1 100 | (head -13 && tail)
              capacity     operations     bandwidth    syncq_read    syncq_write   asyncq_read  asyncq_write   scrubq_read
pool        alloc   free   read  write   read  write   pend  activ   pend  activ   pend  activ   pend  activ   pend  activ
----------  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----
data1       1.48T  85.5T  9.74K      0   998M      0      0      9      0      0      0  2.68K      0      0      0      0
data1       1.48T  85.5T  7.72K      0   791M      0      0      9      0      0      0  3.76K      0      0      0      0
data1       1.48T  85.5T  7.75K      0   794M      0      0      9      0      0      0  3.02K      0      0      0      0
data1       1.48T  85.5T  7.76K      0   795M      0      0      9      0      0      0  2.73K      0      0      0      0
data1       1.48T  85.5T  8.26K      0   846M      0      0      9      0      0      0  2.71K      0      0      0      0
data1       1.48T  85.5T  8.90K      0   911M      0      0      9      0      0      0  3.53K      0      0      0      0
data1       1.48T  85.5T  8.36K      0   856M      0      0      9      0      0      0  2.66K      0      0      0      0
data1       1.48T  85.5T  6.29K      0   644M      0      0      9      0      0      0  1.92K      0      0      0      0
data1       1.48T  85.5T  7.49K      0   767M      0      0      9      0      0      0  1.62K      0      0      0      0
data1       1.48T  85.5T  7.51K      0   769M      0      0      9      0      0      0  1.96K      0      0      0      0
data1       1.48T  85.5T  8.16K      0   836M      0      0      9      0      0      0  1.75K      0      0      0      0
data1       1.48T  85.5T  7.49K      0   767M      0      0      9      0      0      0  2.21K      0      0      0      0
data1       1.48T  85.5T  8.14K      0   834M      0      0      9      0      0      0  2.03K      0      0      0      0
data1       1.48T  85.5T  9.40K      0   963M      0      0      9      0      0      0  1.89K      0      0      0      0
data1       1.48T  85.5T  8.39K      0   859M      0      0      9      0      0      0  2.38K      0      0      0      0
data1       1.48T  85.5T  8.33K      0   852M      0      0      9      0      0      0  1.63K      0      0      0      0
data1       1.48T  85.5T  8.58K      0   879M      0      0      9      0      0      0  1.82K      0      0      0      0
data1       1.48T  85.5T  8.26K      0   847M      0      0      9      0      0      0  2.58K      0      0      0      0
data1       1.48T  85.5T  8.68K      0   889M      0      0      9      0      0      0  1.17K      0      0      0      0
data1       1.48T  85.5T  9.56K      0   979M      0      0      9      0      0      0  2.82K      0      0      0      0

ak100 reopened this Jul 10, 2018

@richardelling (Contributor)

I don't see a bug here. But know that due to the transactional nature of ZFS, continuous measurements sampled at rates faster than the commit interval are of questionable use. I suggest moving the conversation to the email list as a bug tracker is not the appropriate forum.

ak100 commented Jul 10, 2018

I will move the discussion to the mailing list, but let me put a cross-check computation here to illustrate the point. "Sampling faster than the commit interval" is not the issue here, but I'll use longer interval durations for the illustration.

The histogram of wait times above was taken with a 5 second interval. The distribution's shape and position are about the same when taken over a longer (or shorter) interval; it certainly does not change by a factor of two when I change the sampling rate by a factor of two.

average_wait_time = sum( N[i] * t[i] ) / sum( N[i] ),
where
N[i] - number of events in bin i (the content of the histogram bin)
t[i] - time at the center of the bin, e.g. (33 ms + 16 ms)/2 = 24.5 ms

For the histogram below:
sum( N[i] * t[i] ) = (524+262)/2/1000* 741 + (1+0.524)/2 * 343 + 1.5 * 379 + 3*496 + 12 * 1150 + (33+16)/2 * 1390 + (67+33)/2 * 1350 + (134+67)/2 * 1017 + (268+134)/2 * 374 + (536+268)/2 * 76 + (1000+536/2) * 3 = 329702.579

sum ( N[i] ) = 741 + 343 + 379 +496 + 1150 + 1390 + 1350 + 1017 + 374 + 76 + 3
= 7319

wait_time = 329702.579/7319 = 45.0 ms.
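
For reference, here is a small sketch (my own cross-check, not part of zpool) that redoes the same weighted average directly from the histogram counts, approximating each power-of-two bin by its midpoint; it lands around 41 ms, in the same 40-45 ms range:

# Sketch: approximate average disk_wait (read) from the 5 s histogram below.
# Each label is taken as the upper bound of a power-of-two bucket, so the
# bucket midpoint is roughly 0.75 * upper bound.
bins_ms = {0.524: 741, 1: 343, 2: 379, 4: 496, 8: 725, 16: 1150,
           33: 1390, 67: 1350, 134: 1017, 268: 374, 536: 76, 1000: 3}

weighted_sum = sum(0.75 * upper * count for upper, count in bins_ms.items())
total_events = sum(bins_ms.values())
print(f"average disk_wait ~ {weighted_sum / total_events:.1f} ms")  # ~41 ms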

From the shape of the distribution below, it is apparent that ~45 ms is the right number:

# zpool iostat -y data1 -w 5

data1        total_wait     disk_wait    sync_queue    async_queue
latency      read  write   read  write   read  write   read  write  scrub
----------  -----  -----  -----  -----  -----  -----  -----  -----  -----

262us           0      0      0      0      0      0      0      0      0
524us         740      0    741      0      0      0      0      0      0
1ms           343      0    343      0      0      0      1      0      0
2ms           379      0    379      0      0      0      1      0      0
4ms           496      0    496      0      0      0      2      0      0
8ms           725      0    725      0      0      0      2      0      0
16ms        1.15K      0  1.15K      0      0      0      0      0      0
33ms        1.39K      0  1.39K      0      0      0      0      0      0
67ms        1.35K      0  1.35K      0      0      0      0      0      0
134ms        1018      0   1017      0      0      0      0      0      0
268ms         374      0    374      0      0      0      0      0      0
536ms          76      0     76      0      0      0      0      0      0
1s              3      0      3      0      0      0      0      0      0
2s              0      0      0      0      0      0      0      0      0

-------------------------------------------------------------------------

The third run in the original posting, taken with a 1 sec interval, gives a numerically consistent value of about 40 ms ≈ 45 ms.

# zpool iostat -y -l data1 1 5
              capacity     operations     bandwidth    total_wait     disk_wait    syncq_wait    asyncq_wait  scrub
pool        alloc   free   read  write   read  write   read  write   read  write   read  write   read  write   wait
----------  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----
data1       1.48T  85.5T  7.94K      0   812M      0   32ms      -   32ms      -      -      -    1us      -      -
data1       1.48T  85.5T  7.44K      0   761M      0   43ms      -   43ms      -    2us      -   96us      -      -
data1       1.48T  85.5T  7.48K      0   766M      0   41ms      -   41ms      -      -      -   10us      -      -
data1       1.48T  85.5T  8.07K      0   824M      0   38ms      -   38ms      -      -      -  186us      -      -
data1       1.48T  85.5T  8.27K      0   846M      0   36ms      -   35ms      -      -      -    1ms      -      -

The measured "average" wait time 8ms taken with 5 sec interval is not consistent with 45 ms disk wait time we computed from the histogram.

The average disk wait time taken over 100 sec >> commit interval shows disk wait time average 385us << 45 ms. This is not right.

Here is illustration that the shape of the density distribution of disk wait time over the time does not depend on sampling interval :
(Someone running different kind of test load with default zfs IO record = 128KB so it can be more disk head trashing and wait time ~> 45 ms compared to samples above with 1MB IO).

# zpool iostat -y -w data1 50

data1        total_wait     disk_wait    sync_queue    async_queue
latency      read  write   read  write   read  write   read  write  scrub
----------  -----  -----  -----  -----  -----  -----  -----  -----  -----
... cut...
262us           0      0      0      0     13      0      0      0      0
524us           0      0     15      0     20      0      0      0      0
1ms             0      0      6      0     30      0      0      0      0
2ms             3      0      6      0     59      0      0      0      0
4ms            98      0    103      0    116      0      0      0      0
8ms           520      0    564      0    176      0      0      0      0
16ms        1.46K      0  1.47K      0    113      0      0      0      0
33ms        1.79K      0  1.76K      0     19      0      0      0      0
67ms        1.17K      0  1.13K      0      0      0      0      0      0
134ms         368      0    356      0      0      0      0      0      0
268ms          39      0     38      0      0      0      0      0      0
536ms           2      0      2      0      0      0      0      0      0
1s              0      0      0      0      0      0      0      0      0
-------------------------------------------------------------------------

# zpool iostat -y -w data1 10

data1        total_wait     disk_wait    sync_queue    async_queue
latency      read  write   read  write   read  write   read  write  scrub
----------  -----  -----  -----  -----  -----  -----  -----  -----  -----
.... cut ...
524us           0      0     13      0     23      0      0      0      0
1ms             0      0      5      0     32      0      0      0      0
2ms             1      0      6      0     57      0      0      0      0
4ms            95      0     99      0    115      0      0      0      0
8ms           576      0    616      0    158      0      0      0      0
16ms        1.58K      0  1.59K      0    105      0      0      0      0
33ms        1.81K      0  1.78K      0     24      0      0      0      0
67ms        1.13K      0  1.10K      0      0      0      0      0      0
134ms         301      0    288      0      0      0      0      0      0
268ms          42      0     41      0      0      0      0      0      0
536ms           4      0      4      0      0      0      0      0      0
1s              0      0      0      0      0      0      0      0      0

# zpool iostat -y -w data1 5

data1        total_wait     disk_wait    sync_queue    async_queue
latency      read  write   read  write   read  write   read  write  scrub
----------  -----  -----  -----  -----  -----  -----  -----  -----  -----
.... cut ...
262us           0      0      0      0     17      0      0      0      0
524us           0      0     16      0     18      0      0      0      0
1ms             0      0      4      0     28      0      0      0      0
2ms             3      0      8      0     55      0      0      0      0
4ms           105      0    110      0    113      0      0      0      0
8ms           543      0    585      0    151      0      0      0      0
16ms        1.52K      0  1.52K      0    104      0      0      0      0
33ms        1.82K      0  1.78K      0     20      0      0      0      0
67ms        1.15K      0  1.13K      0      0      0      0      0      0
134ms         335      0    326      0      0      0      0      0      0
268ms          41      0     40      0      0      0      0      0      0
536ms           5      0      5      0      0      0      0      0      0
1s              0      0      0      0      0      0      0      0      0

Overall, it seems the wait time computation has a bug and/or does not do proper normalization.

ak100 closed this as completed Jul 10, 2018
ak100 reopened this Jul 10, 2018

GregorKopka commented Jul 11, 2018

The bug (with zpool iostat -l) is a confusion of the units used:
Bandwidth and iops are averages per second, while the *_wait values are averages per request (according to man zpool).

When calculating the first two it makes sense to do x/interval_duration (x being the increase in total bytes or number of requests over the duration of the interval, interval_duration in seconds) to scale from amount/interval to amount/second.

But applying the same math to the latter (the *_wait latencies) is wrong, as there is no interval_duration component in those values (they are time/requests, which already yields average_time/request).

Currently the only correct continuous *_wait figures from zpool iostat -l are those with duration=1 (since then the wrong math is an x/1 no-op).
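
To make the distinction concrete, here is an illustrative sketch with hypothetical numbers (not the actual zpool iostat code): per-second rates need the division by the interval length, while a per-request latency is already normalized by the request count, so the extra division shrinks it as the interval grows.

# Illustrative only: contrast correct and buggy normalization of a
# per-request latency (assumed ~40 ms at ~8000 read IOPS, as in the data above).
def stats(interval_s, iops=8000, per_request_wait_ms=40.0):
    ops_delta = iops * interval_s                  # requests completed in the interval
    wait_sum_ms = per_request_wait_ms * ops_delta  # total latency accumulated in the interval

    ops_per_sec = ops_delta / interval_s                 # correct: scale counts by time
    wait_correct = wait_sum_ms / ops_delta               # correct: scale latency by requests
    wait_buggy = (wait_sum_ms / ops_delta) / interval_s  # wrong: extra division by interval
    return ops_per_sec, wait_correct, wait_buggy

for interval in (0.1, 1, 5, 100):
    ops, good, bad = stats(interval)
    print(f"{interval:>5} s: {ops:8.0f} ops/s  correct {good:5.1f} ms  buggy {bad:8.3f} ms")

The "buggy" column reproduces the pattern reported above: hundreds of milliseconds at sub-second intervals, 8 ms at 5 s, and sub-millisecond figures at 100 s.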

@tonyhutter (Contributor)

IIRC, the calculation takes the diff of the latency histograms before and after the interval and takes the average of that. It's not super accurate at small polling intervals or if there's not a lot of IO. I think we poll at 10 or 20 seconds in our sampling scripts.

@GregorKopka (Contributor)

Aren't there kstat_io and kstat_timer structures (starting at zero on import, never reset) already being maintained per pool and vdev?

With these, the easiest approach would be to take a local copy of them (or of the interesting parts) at the beginning of each interval, and at the same time diff against the previous state to get the increments since the last cycle (with the local copy initialized to zero, this would automatically give 'since boot' figures), then output the result, rinse and repeat. That should be much cheaper than sifting through diffs of the histograms, and it would be accurate even for small polling intervals...
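
A rough sketch of that idea (the structure and field names here are hypothetical, not the real kstat layout): keep cumulative counters since import, snapshot them every cycle, and derive per-interval averages from the differences; no division by the interval length is needed for the latency figure.

# Sketch: per-interval averages from cumulative counters (hypothetical fields,
# not the actual kstat_io layout).
from dataclasses import dataclass

@dataclass
class Cumulative:
    reads: int = 0          # read requests completed since import
    read_wait_ns: int = 0   # total wait time of those requests, in ns

prev = Cumulative()  # zero-initialized, so the first report covers "since import"

def report(cur: Cumulative) -> None:
    global prev
    d_reads = cur.reads - prev.reads
    d_wait_ns = cur.read_wait_ns - prev.read_wait_ns
    if d_reads:
        # Per-request latency: divide by the request count, not the interval length.
        print(f"avg read wait {d_wait_ns / d_reads / 1e6:.1f} ms over {d_reads} requests")
    prev = cur

# Two example polls of the (made-up) cumulative counters:
report(Cumulative(reads=40_000, read_wait_ns=40_000 * 40_000_000))
report(Cumulative(reads=48_000, read_wait_ns=48_000 * 41_000_000))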

ak100 commented Jul 11, 2018

@GregorKopka : I agree with your analysis.

I found an old machine with ZFS on Solaris; the displayed disk latency does not depend on the sampling interval and stays at ~17 ms for the default interval as well as for 10 sec and 100 sec:

# zpool iostat -v large-4-2 5

                              capacity     operations    bandwidth      latency
pool                       alloc   free   read  write   read  write   read  write
-------------------------  -----  -----  -----  -----  -----  -----  -----  -----
large-4-2                  3.07T  40.4T      0  8.08K  11.5K  1016M  2083.79  2206.24
  raidz2                    787G  10.1T      0  2.11K      0   265M   0.00  2455.44
    c0t5000CCA01B420EC9d0      -      -      0    551      0  66.1M   0.00  17.51
    c0t5000CCA01B440BBDd0      -      -      0    552      0  66.1M   0.00  17.10
    c0t5000CCA01B4914DDd0      -      -      0    549  2.88K  65.7M  24.66  17.12
    c0t5000CCA01B491509d0      -      -      0    550  2.88K  65.9M  46.79  17.15
    c0t5000CCA01B492491d0      -      -      0    550      0  65.9M   0.00  17.12
    c0t5000CCA01B4924DDd0      -      -      0    546      0  65.3M   0.00  17.39

Back to the Linux machine.
The /usr/bin/iostat wait times (await, r_await, w_await) do not depend on the interval, and
I expect zpool iostat to behave similarly AND to display values consistent with /usr/bin/iostat.

Here is lengthy output taken from /usr/bin/iostat and zpool iostat.
The /usr/bin/iostat wait times stay the same for 1 sec, 10 sec, and 100 sec.
zpool iostat shows a similar latency for a 1 sec sampling interval, but shows roughly T/10 or T/100 for the 10 sec and 100 sec intervals.

Drive reads:

# for x in {1..100} ; do (for f in {0..7} ; do dd bs=1M if=/data3/test-1M/dd/10GB of=/dev/null & done ; wait); done

Watch the stats and compare 'disk_wait read' with 'r_await'.
I skipped a few cycles in each output before copy/pasting.

1 second:

zpool iostat -y data3 -L -lv 1

... skip ...
              capacity     operations     bandwidth    total_wait     disk_wait    syncq_wait    asyncq_wait  scrub
pool        alloc   free   read  write   read  write   read  write   read  write   read  write   read  write   wait
----------  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----
data3       1.47T  85.5T  13.7K      0  1.37G      0   62ms      -   55ms      -    4us      -    7ms      -      -
  raidz2    1.47T  85.5T  13.7K      0  1.37G      0   62ms      -   55ms      -    4us      -    7ms      -      -
    sdv         -      -  1.03K      0   106M      0   70ms      -   64ms      -      -      -    6ms      -      -
    sdah        -      -  1.02K      0   104M      0   50ms      -   47ms      -      -      -    3ms      -      -
    sdw         -      -  1.37K      0   140M      0   66ms      -   53ms      -    6us      -    8ms      -      -
    sdai        -      -  1.02K      0   104M      0   52ms      -   48ms      -      -      -    3ms      -      -
    sdaf        -      -  1.04K      0   107M      0   84ms      -   79ms      -      -      -    6ms      -      -
    sdag        -      -  1.40K      0   142M      0   65ms      -   52ms      -      -      -   10ms      -      -
    sdc         -      -  1.02K      0   105M      0   68ms      -   57ms      -      -      -    8ms      -      -
    sdal        -      -  1.00K      0   103M      0   45ms      -   45ms      -      -      -  654us      -      -
    sdy         -      -  1.39K      0   142M      0   61ms      -   51ms      -    3us      -   13ms      -      -
    sdp         -      -  1.03K      0   106M      0   64ms      -   59ms      -      -      -    5ms      -      -
    sdr         -      -  1.03K      0   106M      0   54ms      -   54ms      -      -      -    2ms      -      -
    sds         -      -  1.38K      0   141M      0   66ms      -   58ms      -      -      -   10ms      -      -
----------  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----

... skip ...

iostat -y -x -m -d /dev/disk/by-vdev/e66s?? 1

Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sdc             226.00     0.00  830.00    0.00   105.67     0.00   260.74    32.94   39.72   39.72    0.00   0.94  77.90
sdp             273.00     0.00  783.00    0.00   105.71     0.00   276.49    43.15   55.10   55.10    0.00   1.17  91.40
sdr             258.00     0.00  798.00    0.00   105.71     0.00   271.29    37.24   46.66   46.66    0.00   1.06  84.80
sdw             531.00     0.00  904.00    0.00   142.72     0.00   323.33    44.64   49.80   49.80    0.00   1.10  99.00
sdah            245.00     0.00  828.00    0.00   107.40     0.00   265.64    37.77   45.68   45.68    0.00   1.03  85.00
sdai            262.00     0.00  788.00    0.00   105.14     0.00   273.27    44.77   56.79   56.79    0.00   1.12  88.50
sdal            220.00     0.00  841.00    0.00   106.18     0.00   258.56    32.35   38.54   38.54    0.00   0.95  80.10
sdt               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
sdaf            233.00     0.00  834.00    0.00   106.85     0.00   262.39    34.12   40.95   40.95    0.00   1.00  83.20
sds             430.00     0.00  976.00    0.00   139.89     0.00   293.54    33.46   34.28   34.28    0.00   0.87  85.40
sdag            440.00     0.00  986.00    0.00   141.82     0.00   294.58    36.69   37.29   37.29    0.00   0.85  83.90
sdv             241.00     0.00  844.00    0.00   108.80     0.00   264.02    34.45   41.13   41.13    0.00   0.97  81.70
sdu               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
sdy             471.00     0.00  959.00    0.00   142.27     0.00   303.83    36.32   37.98   37.98    0.00   0.87  83.70

10 seconds:

# zpool iostat -y data3 -L -lv 10
----------  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----
              capacity     operations     bandwidth    total_wait     disk_wait    syncq_wait    asyncq_wait  scrub
pool        alloc   free   read  write   read  write   read  write   read  write   read  write   read  write   wait
----------  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----
data3       1.47T  85.5T  13.2K      0  1.32G      0    6ms      -    4ms      -  133ns      -    1ms      -      -
  raidz2    1.47T  85.5T  13.2K      0  1.32G      0    6ms      -    4ms      -  133ns      -    1ms      -      -
    sdv         -      -   1012      0   101M      0    6ms      -    5ms      -  115ns      -    1ms      -      -
    sdah        -      -   1014      0   102M      0    6ms      -    5ms      -  105ns      -    1ms      -      -
    sdw         -      -  1.32K      0   135M      0    7ms      -    4ms      -  174ns      -    2ms      -      -
    sdai        -      -   1018      0   102M      0    5ms      -    4ms      -  115ns      -    1ms      -      -
    sdaf        -      -   1017      0   102M      0    6ms      -    5ms      -  115ns      -    1ms      -      -
    sdag        -      -  1.32K      0   134M      0    6ms      -    4ms      -  191ns      -    1ms      -      -
    sdc         -      -   1011      0   101M      0    6ms      -    5ms      -   95ns      -  981us      -      -
    sdal        -      -   1011      0   101M      0    5ms      -    4ms      -   76ns      -  836us      -      -
    sdy         -      -  1.32K      0   135M      0    6ms      -    4ms      -  161ns      -    1ms      -      -
    sdp         -      -   1015      0   102M      0    6ms      -    5ms      -  115ns      -  968us      -      -
    sdr         -      -   1011      0   101M      0    5ms      -    4ms      -  127ns      -  866us      -      -
    sds         -      -  1.32K      0   135M      0    5ms      -    4ms      -  124ns      -    1ms      -      -
----------  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----

# iostat -y -x -m -d /dev/disk/by-vdev/e66s?? 10

Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sdc             246.10     0.00  777.70    0.00   102.52     0.00   269.99    38.18   49.07   49.07    0.00   1.12  87.34
sdp             241.60     0.00  783.10    0.00   102.60     0.00   268.33    37.63   48.02   48.02    0.00   1.12  87.77
sdr             233.10     0.00  792.50    0.00   102.77     0.00   265.59    36.61   46.23   46.23    0.00   1.09  86.13
sdw             450.80     0.00  919.20    0.00   136.42     0.00   303.94    42.05   45.93   45.93    0.00   0.99  91.43
sdah            240.50     0.00  782.80    0.00   102.63     0.00   268.50    37.98   48.48   48.48    0.00   1.13  88.13
sdai            242.40     0.00  785.40    0.00   102.98     0.00   268.54    37.87   48.23   48.23    0.00   1.11  86.85
sdal            234.40     0.00  789.90    0.00   102.60     0.00   266.02    34.48   43.64   43.64    0.00   1.07  84.32
sdt               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
sdaf            252.40     0.00  775.50    0.00   103.07     0.00   272.20    40.13   51.77   51.77    0.00   1.15  89.21
sds             435.20     0.00  931.00    0.00   136.03     0.00   299.24    38.31   41.02   41.02    0.00   0.95  88.71
sdag            433.70     0.00  936.00    0.00   136.48     0.00   298.63    39.64   42.49   42.49    0.00   0.95  88.76
sdv             251.60     0.00  770.70    0.00   102.40     0.00   272.10    39.82   51.61   51.61    0.00   1.16  89.59
sdu               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
sdy             431.80     0.00  931.10    0.00   135.57     0.00   298.19    38.87   41.60   41.60    0.00   0.94  87.52

100 seconds:

# iostat -y -x -m -d /dev/disk/by-vdev/e66s?? 100
Linux 2.6.32-696.23.1.el6.x86_64 (stkendca1818.fnal.gov) 	07/11/2018 	_x86_64_	(40 CPU)

Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sdc             232.45     0.00  767.39    0.00   100.06     0.00   267.04    37.20   48.54   48.54    0.00   1.15  88.35
sdp             229.70     0.00  771.94    0.00   100.25     0.00   265.97    37.09   48.05   48.05    0.00   1.14  87.70
sdr             218.92     0.00  782.75    0.00   100.25     0.00   262.30    34.42   43.98   43.98    0.00   1.09  85.45
sdw             426.55     0.00  908.58    0.00   132.80     0.00   299.33    40.04   44.14   44.14    0.00   0.99  89.94
sdah            224.75     0.00  776.26    0.00   100.16     0.00   264.26    35.02   45.11   45.11    0.00   1.11  86.24
sdai            223.55     0.00  777.91    0.00   100.25     0.00   263.93    35.26   45.35   45.35    0.00   1.10  85.80
sdal            214.33     0.00  785.32    0.00   100.04     0.00   260.88    32.50   41.39   41.39    0.00   1.06  83.50
sdt               0.00     0.00    0.06    0.00     0.00     0.00     8.00     0.00   10.50   10.50    0.00  10.50   0.06
sdaf            233.20     0.00  768.25    0.00   100.25     0.00   267.25    37.90   49.37   49.37    0.00   1.15  88.72
sds             415.63     0.00  918.91    0.00   132.74     0.00   295.85    38.97   42.47   42.47    0.00   0.97  89.31
sdag            408.36     0.00  926.01    0.00   132.71     0.00   293.50    37.90   40.97   40.97    0.00   0.95  88.23
sdv             231.30     0.00  769.87    0.00   100.19     0.00   266.52    37.16   48.29   48.29    0.00   1.14  87.99
sdu               0.00     0.00    0.06    0.00     0.00     0.00     8.00     0.00    6.17    6.17    0.00   6.17   0.04
sdy             402.11     0.00  932.63    0.00   132.74     0.00   291.49    36.96   39.69   39.69    0.00   0.93  87.19


# zpool iostat -y data3 -L -lv 100
              capacity     operations     bandwidth    total_wait     disk_wait    syncq_wait    asyncq_wait  scrub
pool        alloc   free   read  write   read  write   read  write   read  write   read  write   read  write   wait
----------  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----
data3       1.47T  85.5T  13.0K      0  1.30G      0  603us      -  482us      -   11ns      -  121us      -      -
  raidz2    1.47T  85.5T  13.0K      0  1.30G      0  603us      -  482us      -   11ns      -  121us      -      -
    sdv         -      -   1001      0   100M      0  625us      -  526us      -   10ns      -   97us      -      -
    sdah        -      -   1001      0   100M      0  574us      -  486us      -   11ns      -   86us      -      -
    sdw         -      -  1.30K      0   133M      0  653us      -  471us      -   12ns      -  176us      -      -
    sdai        -      -   1002      0   100M      0  580us      -  488us      -   10ns      -   90us      -      -
    sdaf        -      -   1002      0   100M      0  647us      -  548us      -   11ns      -   99us      -      -
    sdag        -      -  1.30K      0   133M      0  602us      -  440us      -   13ns      -  165us      -      -
    sdc         -      -   1000      0   100M      0  633us      -  539us      -   11ns      -   99us      -      -
    sdal        -      -   1000      0   100M      0  520us      -  444us      -   10ns      -   80us      -      -
    sdy         -      -  1.30K      0   133M      0  582us      -  428us      -   12ns      -  157us      -      -
    sdp         -      -   1001      0   100M      0  620us      -  526us      -   10ns      -   93us      -      -
    sdr         -      -   1001      0   100M      0  560us      -  474us      -   13ns      -   87us      -      -
    sds         -      -  1.30K      0   133M      0  626us      -  458us      -   11ns      -  167us      -      -
----------  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----

@shodanshok (Contributor)

I have a very similar (if not the same) problem here...

System information

Type Version/Name
Distribution Name CentOS
Distribution Version 7.5.1804
Linux Kernel 3.10.0-862.11.6.el7.x86_64
Architecture x86_64
ZFS Version 0.7.11-1
SPL Version 0.7.11-1

Describe the problem you're observing

zpool iostat -l reports unrealistically low latency values

Describe how to reproduce the problem

[root@localhost ~]# zpool iostat -l
              capacity     operations     bandwidth    total_wait     disk_wait    syncq_wait    asyncq_wait  scrub
pool        alloc   free   read  write   read  write   read  write   read  write   read  write   read  write   wait
----------  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----
tank         983G  2.67T      4     91   267K  3.24M  881ns   56ns  553ns   16ns   42ns      -    1us   50ns  175ns

As you can see, all latency numbers are expressed in ns - but this is a SATA-based machine, with 4x 7200 RPM HDDs and 2x Samsung 850 Evo SSDs as L2ARC/SLOG. In other words, there is absolutely zero chance these latencies are really that low.

Rather, what zpool iostat -l seems to do is include inactive/dead time in the accounting - i.e., given a 15 ms instantaneous latency followed by 5 s of inactivity, the reported latency is the average of 1x 15 ms and ~5x 0 ms. You can rightfully argue that this is the very meaning of an average. However, I feel that excluding inactive time is key to getting meaningful latency numbers.

Using a 1 s sampling interval, as suggested above, seems to produce reasonable numbers. Moreover, multiplying the zpool iostat -l latency output by the uptime in seconds (~86400 in my case) again seems to produce reasonable values, in line with the 1 s sampling interval (e.g. the 553 ns disk_wait read above times ~86400 is roughly 48 ms).

@GregorKopka (Contributor)

@shodanshok As you're after latency: the -w switch should give you what you're looking for.

shodanshok commented Sep 21, 2018

Yes, the latency histogram is working properly. Thanks.

GregorKopka pushed a commit to GregorKopka/zfs that referenced this issue Sep 22, 2018
Bandwidth and iops are average per second while *_wait are averages
per request for latency or, for queue depths, an instantaneous
measurement at the end of an interval (according to man zpool).

When calculating the first two it makes sense to do
x/interval_duration (x being the increase in total bytes or number of
requests over the duration of the interval, interval_duration in
seconds) to 'scale' from amount/interval_duration to amount/second.

But applying the same math for the latter (*_wait latencies/queue) is
wrong as there is no interval_duration component in the values (these
are time/requests to get to average_time/request or already an
absolute number).

As of this bug currently the only correct continuous *_wait figures for
both latencies and queue depths from 'zpool iostat -l' are with
duration=1 as then the wrong math cancels itself (x/1 is a nop).

This removes temporal scaling from latency and queue depth figures.
Closes: openzfs#7694

Signed-off-by: Gregor Kopka <gregor@kopka.net>
tonyhutter pushed a commit that referenced this issue Nov 13, 2018
Bandwidth and iops are average per second while *_wait are averages
per request for latency or, for queue depths, an instantaneous
measurement at the end of an interval (according to man zpool).

When calculating the first two it makes sense to do
x/interval_duration (x being the increase in total bytes or number of
requests over the duration of the interval, interval_duration in
seconds) to 'scale' from amount/interval_duration to amount/second.

But applying the same math for the latter (*_wait latencies/queue) is
wrong as there is no interval_duration component in the values (these
are time/requests to get to average_time/request or already an
absolute number).

This bug leads to the only correct continuous *_wait figures for both
latencies and queue depths from 'zpool iostat -l/q' being with
duration=1 as then the wrong math cancels itself (x/1 is a nop).

This removes temporal scaling from latency and queue depth figures.

Reviewed-by: Tony Hutter <hutter2@llnl.gov>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Gregor Kopka <gregor@kopka.net>
Closes #7945
Closes #7694