Limit l2arc header size #1420

Closed
douardda opened this issue Apr 22, 2013 · 21 comments
Labels
Type: Documentation (indicates a requested change to the documentation), Type: Performance (performance improvement or performance problem)

Comments

@douardda

Hi,

I have serious performance problems with my ZFS system. For the record, it's a Debian squeeze + backports system (3.2.41-2~bpo60+1 Debian kernel) running on a Dell PE2950 (2× Xeon L5420 @ 2.5 GHz, 16 GB RAM), driving a storage bay consisting of 18 spinning drives (SAS, 7200 RPM) and 4 SSDs (2 MLC and 2 SLC, used for logs and cache).
The HBA is an LSI SAS9200-8E.

My problem is that after a fresh reboot the system behaves quite normally, but as write operations occur on ZFS volumes, performance degrades to the point where the system is almost unusable (any zfs command takes more than a minute to return, IO performance on zfs filesystems and zvols is near 0, zfs kernel threads spend most of their time waiting on mutexes, etc.).

There's a detailed explanation on the mailing list:

https://groups.google.com/a/zfsonlinux.org/forum/#!topic/zfs-discuss/6-2sqov3usM

@behlendorf
Contributor

It sure looks like we're having trouble allocating memory. Could you post the contents of the /proc/spl/kstat/zfs/arcstats and /proc/spl/kstat/zfs/dmu_tx proc files? They contain some counters which might help shed some light on things.

@douardda
Author

Sure, here they are:

[root@centaurus zfs-0.6.1]$ cat /proc/spl/kstat/zfs/dmu_tx
3 1 0x01 12 576 9909230977 529581521989696
name type data
dmu_tx_assigned 4 92571627
dmu_tx_delay 4 162299
dmu_tx_error 4 0
dmu_tx_suspended 4 0
dmu_tx_group 4 1967
dmu_tx_how 4 0
dmu_tx_memory_reserve 4 0
dmu_tx_memory_reclaim 4 0
dmu_tx_memory_inflight 4 0
dmu_tx_dirty_throttle 4 0
dmu_tx_write_limit 4 287149
dmu_tx_quota 4 0

and

[root@centaurus zfs-0.6.1]$ cat /proc/spl/kstat/zfs/arcstats
4 1 0x01 80 3840 9910883074 529467391427367
name type data
hits 4 702720877
misses 4 624658714
demand_data_hits 4 160756193
demand_data_misses 4 91162978
demand_metadata_hits 4 524895939
demand_metadata_misses 4 42359988
prefetch_data_hits 4 17023324
prefetch_data_misses 4 53160004
prefetch_metadata_hits 4 45421
prefetch_metadata_misses 4 437975744
mru_hits 4 83565562
mru_ghost_hits 4 491185289
mfu_hits 4 607625776
mfu_ghost_hits 4 9864934
deleted 4 298441122
recycle_miss 4 668631822
mutex_miss 4 23908139
evict_skip 4 2100311734891
evict_l2_cached 4 1297020539392
evict_l2_eligible 4 1777496315904
evict_l2_ineligible 4 2015950931968
hash_elements 4 15350755
hash_elements_max 4 16650916
hash_collisions 4 375109091
hash_chains 4 524288
hash_chain_max 4 63
p 4 4026507264
c 4 4294967296
c_min 4 570425344
c_max 4 4294967296
size 4 4326200288
hdr_size 4 187419904
data_size 4 22618112
other_size 4 1468672
anon_size 4 3590144
anon_evict_data 4 0
anon_evict_metadata 4 0
mru_size 4 16930816
mru_evict_data 4 0
mru_evict_metadata 4 12582912
mru_ghost_size 4 1778292736
mru_ghost_evict_data 4 1193345024
mru_ghost_evict_metadata 4 584947712
mfu_size 4 2097152
mfu_evict_data 4 0
mfu_evict_metadata 4 163840
mfu_ghost_size 4 192223232
mfu_ghost_evict_data 4 181927936
mfu_ghost_evict_metadata 4 10295296
l2_hits 4 63423175
l2_misses 4 561235520
l2_feeds 4 595328
l2_rw_clash 4 28275
l2_read_bytes 4 734381053440
l2_write_bytes 4 820946996736
l2_writes_sent 4 372902
l2_writes_done 4 372902
l2_writes_error 4 0
l2_writes_hdr_miss 4 31574
l2_evict_lock_retry 4 10879
l2_evict_reading 4 0
l2_free_on_write 4 2969867
l2_abort_lowmem 4 2439
l2_cksum_bad 4 0
l2_io_error 4 0
l2_size 4 125340989440
l2_hdr_size 4 4356734400
memory_throttle_count 4 0
duplicate_buffers 4 0
duplicate_buffers_size 4 0
duplicate_reads 4 20
memory_direct_count 4 1129954
memory_indirect_count 4 3014298
arc_no_grow 4 0
arc_tempreserve 4 0
arc_loaned_bytes 4 0
arc_prune 4 47902630
arc_meta_used 4 4323021792
arc_meta_limit 4 1073741824
arc_meta_max 4 4582380880

@douardda
Author

One thing that looks very odd to me (as I stated in the ML discussion) is that my mirrored log devices never see any IO (as far as I can tell, having watched them for a while today). No idea whether this might be related, however...

@douardda
Author

Oh, one more thing: I cannot remove the mirrored log from the zpool. The "zpool remove" command hangs for a while (like any other zfs/zpool command, but significantly longer), then returns without any error message (return code is 0) but does nothing.

Running it under strace, it remains stuck for a while (several minutes, I'd say) on "ioctl(3, 0x5a0c, 0x7fff4012acd0)".

Its stack is then:

[root@centaurus ~]$ cat /proc/28462/stack
[] cv_wait_common+0xcd/0x15c [spl]
[] autoremove_wake_function+0x0/0x2a
[] txg_wait_synced+0x134/0x156 [zfs]
[] zil_commit+0x53b/0x5a0 [zfs]
[] zil_suspend+0x75/0xa4 [zfs]
[] zil_vdev_offline+0x32/0x5c [zfs]
[] dmu_objset_find_spa+0x2d8/0x2f0 [zfs]
[] findfunc+0x0/0x11 [zfs]
[] dmu_objset_find_spa+0x111/0x2f0 [zfs]
[] findfunc+0x0/0x11 [zfs]
[] dmu_objset_find_spa+0x111/0x2f0 [zfs]
[] findfunc+0x0/0x11 [zfs]
[] dmu_objset_find_spa+0x111/0x2f0 [zfs]
[] findfunc+0x0/0x11 [zfs]
[] dmu_objset_find_spa+0x111/0x2f0 [zfs]
[] findfunc+0x0/0x11 [zfs]
[] dmu_objset_find+0x24/0x29 [zfs]
[] zil_vdev_offline+0x0/0x5c [zfs]
[] spa_offline_log+0x23/0x42 [zfs]
[] spa_vdev_remove+0x1b6/0x325 [zfs]
[] zfs_ioc_vdev_remove+0x30/0x4f [zfs]
[] zfsdev_ioctl+0x114/0x16c [zfs]
[] do_vfs_ioctl+0x464/0x4b1
[] ptrace_notify+0x45/0x5f
[] sys_ioctl+0x4b/0x70
[] tracesys+0xd9/0xde
[] 0xffffffffffffffff

@behlendorf
Contributor

@douardda I believe I see what's going on here. You've stumbled into an L2ARC memory management issue which really should be better documented in the FAQ.

What's happening is that virtually all of your 4GB of ARC space is being consumed managing the 125GB of data in the L2ARC. This means there's basically no memory available for anything else, which is why your system is struggling.

To explain a little more: when a data buffer gets removed from the primary ARC cache and migrated to the L2ARC, a reference to the L2ARC buffer must be left in memory. Depending on how large your L2ARC device is and what your default block size is, it can take a significant amount of memory to manage these headers. This gets particularly bad for ZVOLs because they have a small 8k default block size vs 128k for a file system, which means the ARC's memory requirements for L2ARC headers increase by 16x.
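
A back-of-envelope check using the arcstats posted above (all figures come from that dump; the per-block cost is derived from the reported totals, not an exact header struct size):

# l2_size     = 125340989440 bytes  (~117 GiB of data cached on the cache SSDs)
# l2_hdr_size =   4356734400 bytes  (~4.06 GiB of ARC spent tracking it)
# Assuming mostly 8k zvol blocks, the per-block header overhead is roughly:
echo $(( 4356734400 / (125340989440 / 8192) ))   # ~284 bytes of ARC per cached 8k block
# The same data stored as 128k records would need roughly 16x fewer headers.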

You can check for this in the l2_hdr_size field of the arcstats if you know what you're looking for. There are really only two ways to handle this at the moment.

  1. Add additional memory to your system so the ARC is large enough to manage your entire L2ARC device.
  2. Manually partition your L2ARC device so it's smaller.
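
A rough sketch of option 2, assuming the pool is named data and the oversized cache device is /dev/sdx (both placeholders):

zpool remove data sdx                      # drop the cache device; safe, the L2ARC holds no pool data
parted -s /dev/sdx mklabel gpt
parted -s /dev/sdx mkpart primary 0% 10GB  # carve out a small (~10G) partition
zpool add data cache /dev/sdx1             # re-add only the small partition as cache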

Arguably ZFS should internally limit its L2ARC usage to prevent this pathological behavior, and that's something we'll want to look into. The upstream code also suffers from this issue, but it's somewhat hidden because the vendors will carefully size the ARC and L2ARC to avoid this case.

@jcristau

Oh wow. Sorry for the formatting in the previous comment; it seems GitHub mangles mail replies. Trying again from the web form:

At least for the short term, we've tried option 2, so we now have two 10G devices as L2ARC (instead of 2 × 120G).

Things seem better as far as performance goes; /proc/spl/kmem/slab indicates that about 10G of vmalloc space is used by the zio_data_buf_131072 and zio_data_buf_8192 slabs, and l2_hdr_size is down to ~230M in /proc/spl/kstat/zfs/arcstats.

Thanks for the help.

@douardda
Author

@behlendorf Hi, thanks again for pointing us to some solutions.

Here is the situation. As @jcristau stated above, we've managed to get back to an acceptable situation by (drastically) reducing the size of the L2ARC devices. We also managed to unload and reload the zfs/spl modules, so we removed the zfs_arc_max kernel parameter.

So we now have a basically working ZFS setup again. But (sigh) we still have performance issues. When I first built this ZFS setup, I did a few benchmarks. A simple "dbench -s 10" could reach 45 MB/s before adding slog and cache devices, and almost 150 MB/s after adding the SSD slogs and the cache devices (on a zfs filesystem, not a zvol).

Now (with no IO on the ZFS pool other than the dbench), I get a poor 7 to 10 MB/s, and a dd of a rather big zvol (215G) did complete (hurrah), but at a mean rate of 21.0 MB/s (sigh).

Once again, I never see any activity on the slog devices; is this "normal"? Can it be a symptom related to my poor performance here?

@behlendorf
Contributor

If you're experiencing much better performance with empty zvols versus filled zvols, you're definitely hitting #361. You'll notice lots of read activity if this is the case; it's a known long-standing issue which hasn't yet been addressed.
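
A quick, generic way to watch for that read activity while writing to a filled zvol (data is the pool name used later in this thread; the command itself is not specific to #361):

zpool iostat -v data 5   # per-vdev read ops and bandwidth, refreshed every 5 seconds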

@douardda
Author

Some news on our ZFS setup. Since we drastically lowered the cache device size, the situation is mostly stable and under control. But it remains quite easy to put ZFS under memory pressure.

The first way to do so is to export a ZFS filesystem via NFS and put many (15 million) small files in it (I know, this is not reasonable). Then a simple "find" (on another computer on which the NFS volume is mounted) kills the ZFS setup: free memory drops to 0, arc_adapt and a few more ZFS processes like spl_kmem_cache then spend their time trying to move pages around, there is a huge amount of read IO on the disks, and the whole ZFS stack becomes very sluggish; even a "zpool -h" takes ages to return.
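
One way to watch that pressure build while the find runs (the fields all appear in the arcstats dumps in this issue; the invocation itself is only a suggestion):

watch -n 5 'grep -E "arc_meta_used|arc_meta_limit|arc_prune|l2_hdr_size" /proc/spl/kstat/zfs/arcstats'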

The second way to put it under pressure is to run an fio test using the following config file:

[global]
filename=/dev/zvol/data/bench/toto
bs=4k
thread=1
time_based=1
runtime=60
invalidate=1 
direct=1
iodepth=32
ioengine=libaio 
stonewall=1

[direct-read]
rw=read

[direct-randread]
rw=randread

[direct-write]
rw=write

[direct-randwrite]
rw=randwrite
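
Saved as, say, zvol-bench.fio (the filename is arbitrary), the four phases run back to back with:

fio zvol-bench.fio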

During the first two phases of the test (read and randread), everything is fine. Then, during the "write" test, everything starts OK but memory consumption starts to increase. As expected, when ZFS runs out of "free" memory, performance drops to 0, arc_adapt spins, etc.

When the system is in this "under pressure" state, the only way to make it return to a normal state (besides waiting almost forever) is to remove the cache devices from the pool. I can then reinsert them, and everything goes back to normal behaviour.
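
Concretely, that remove/reinsert cycle amounts to something like this (device names are placeholders for the actual cache SSDs):

zpool remove data sdx sdy      # detach the cache (L2ARC) devices and free their headers from the ARC
zpool add data cache sdx sdy   # re-add them once the system has recovered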

Just to illustrate, here is the dstat output for the zvol used in the fio test:

-dsk/zd704- --io/zd704-
 read  writ| read  writ
 0   177M|   0  45.4k
 0    62M|   0  15.8k
 0    68M|   0  17.2k
 0    54M|   0  13.8k
 0  3200k|   0   797 
 0    17M|   0  4285 
 0    80M|   0  20.5k
 0  2572k|   0   642 
 0    17M|   0  4363 
 0    67M|   0  17.1k
 0   348k|   0  89.0 
 0  1996k|   0   499 
 0    19M|   0  4791 
 0    91M|   0  23.2k
 0    94M|   0  24.2k
 0    74M|   0  18.7k
 0    56M|   0  14.3k
 0    95M|   0  24.3k
 0   113M|   0  28.9k
 0    64M|   0  16.4k
 0    83M|   0  21.0k
 0   107M|   0  27.3k
 0   100M|   0  25.8k
 0    35M|   0  8896         <--- this is the moment when the memory begins to be exhausted
 88k   26M|22.0  6587 
 0    22M|   0  5689 
 0    19M|   0  4705 
 0   276k|   0  69.0 
 0   420k|   0   105 
 0    24k|   0  6.00 
 0    10M|   0  2663 
 0    19M|   0  4900 
 0   592k|   0   148 
 0   824k|   0   206 
 0  1012k|   0   253 
 0  6876k|   0  1719 
 0  9684k|   0  2421 
 0     0 |   0     0 
 0   840k|   0   210 
 0  1460k|   0   365 
 0    15M|   0  3815 
 0  3912k|   0   978  
 0   248k|   0   114 
 0  1896k|   0   422 
 0   120k|   0  30.0 
 0    14M|   0  3698 
 0     0 |   0     0 
 0   392k|   0  98.0 
 0  1792k|   0   448 
 0  1136k|   0   284 
 0    11M|   0  2783 
 0     0 |   0     0 
 0  1340k|   0   335 
 0   344k|   0  86.0 

@behlendorf
Contributor

Interesting. It sounds like your l2arc headers may be consuming the majority of your arc cache. Right now these will not be released except when removing the l2arc device. Dropping the headers during memory pressure would mean we'd be throwing away the references to some data in the l2arc. You can check the l2arc headers field in arcstats to determine how much memory they are using.

@aarcane

aarcane commented Sep 17, 2013

Could we implement a max percentage for l2arc headers in the main arc, and implement indirect pages of headers stored on the l2arc device behind a bloom filter? This would help with large l2arc devices.

@douardda
Author

@behlendorf Thanks, I'll check the arcstats next time I reproduce a memory pressure situation. Which of the arcstats numbers should I watch, precisely?

@behlendorf
Contributor

@aarcane We do need to do something here, particularly as l2arc sizes increase. There are some initial patches in #1612 which allow l2 headers to be pitched if there's enough memory pressure.

@douardda The field is 'l2_hdr_size'.

@douardda
Author

Some news.

I have upgraded the memory in the machine from 16GB to 32GB and reinserted one of the SSDs dedicated to being a cache device (120GB). There is no special zfs_arc_max configured.

Over the weekend, I converted some volumes with an 8k volblocksize to 128k ones (I just created a new, properly configured volume, then dd'd from the 8k one to the 128k one; I don't know if there is a better method). I get very poor performance on these zvol copies (less than 20MB/s).
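
For reference, the conversion described above amounts to something like the following (volume names and size are placeholders; volblocksize can only be set at creation time):

zfs create -V 215G -o volblocksize=128k data/newvol-128k
dd if=/dev/zvol/data/oldvol-8k of=/dev/zvol/data/newvol-128k bs=1M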

Now the server is doing constant (mostly read) IO on the ZFS disks (while there is almost no activity on the zvols or the zfs filesystems), the load is quite high, and munin reports a constant diskstat utilization of almost 90% on the disks involved in the raidz vdevs. This IO is not very high in volume, but quite high in IOPS considering the disks are 7200 RPM drives organized in raidz1, e.g.:

                               capacity     operations    bandwidth
pool                        alloc   free   read  write   read  write
data                        3,87T  12,4T  2,69K      0  7,97M      0

My ZFS munin plugin reports an l2_hdr_size nearly constant for hours at 4.55GB (the total l2arc size is around 180GB).

For now, every zpool or zfs command takes tens of seconds to respond; an strace looks like:

[...]
munmap(0x7f7a8201e000, 4096)            = 0
read(6, "", 4096)                       = 0
close(6)                                = 0
munmap(0x7f7a8201f000, 4096)            = 0
ioctl(3, 0x5a04, 0x7fffe3bd38a0)        = 0
ioctl(3, 0x5a12, 0x7fffe3bd38c0)        = 0
ioctl(3, 0x5a05, 0x7fffe3bcf270)        = 0
brk(0xeef000)                           = 0xeef000
ioctl(3, 0x5a14, 0x7fffe3bd1870)        = 0
ioctl(3, 0x5a14, 0x7fffe3bcb240)        = 0
ioctl(3, 0x5a14, 0x7fffe3bc4c10)        = 0
ioctl(3, 0x5a14, 0x7fffe3bc4c10)        = 0
ioctl(3, 0x5a14, 0x7fffe3bc4c10)        = 0
ioctl(3, 0x5a14, 0x7fffe3bc4c10)        = -1 ENOMEM (Cannot allocate memory)
ioctl(3, 0x5a14, 0x7fffe3bc4c10)        = 0
ioctl(3, 0x5a14, 0x7fffe3bc4c10)        = -1 ENOMEM (Cannot allocate memory)
ioctl(3, 0x5a14, 0x7fffe3bc4c10)        = 0
ioctl(3, 0x5a14, 0x7fffe3bc4c10)        = 0
ioctl(3, 0x5a14, 0x7fffe3bc4c10)        = 0
ioctl(3, 0x5a14, 0x7fffe3bc4c10)        = -1 ENOMEM (Cannot allocate memory)
ioctl(3, 0x5a14, 0x7fffe3bc4c10)        = 0
ioctl(3, 0x5a14, 0x7fffe3bc4c10)        = 0
ioctl(3, 0x5a14                                             <-- freezes on these ioctls for one or two seconds
[...]
ioctl(3, 0x5a14, 0x7fffe3bd36e0)        = -1 ENOMEM (Cannot allocate memory)
ioctl(3, 0x5a14, 0x7fffe3bd36e0)        = 0
ioctl(3, 0x5a14, 0x7fffe3bcf0c0)        = -1 ESRCH (No such process)
ioctl(3, 0x5a14, 0x7fffe3bd36e0)        = -1 ESRCH (No such process)
fstat(1, {st_mode=S_IFCHR|0600, st_rdev=makedev(136, 8), ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f7a8201f000
write(1, "NAME                            "..., 77NAME                                          USED  AVAIL  REFER  MOUNTPOINT) = 77

The system was running with a load of 16, and tgtd was stuck in the D state.

Then I removed the cache device from the pool:

  • the load is back to normal,
  • IO is back to ~0,
  • zfs/zpool commands respond with normal reactivity again,
  • and a dd between two zvols runs at 82MB/s.

So my conclusion so far is that, on my system, adding a cache device has a major negative impact on performance. Which is a little bit odd, isn't it? I could go back to setting up a small cache device (using only part of the SSD), but I'm not sure adding a small cache would be of any use.

Any clue is welcome,

David

@behlendorf
Contributor

@douardda It may be because the memory requirements to track everything in your L2ARC cache device push other useful data out of the primary ARC. Using a smaller device would resolve this. You can verify this is the problem by checking the l2_hdr_size in arcstats. This is roughly the amount of memory consumed managing the L2ARC.

$ grep l2_hdr_size /proc/spl/kstat/zfs/arcstats 
l2_hdr_size                     4    762770920

@douardda
Author

Thanks @behlendorf. I have added a munin plugin to monitor the L2ARC size (total size and headers), and I have reinserted the cache devices (for a couple of weeks now); the l2arc header size remains quite acceptable (around 2 GB right now), so the system behaves mostly fine for now.

The problem is that it seems very fragile: it's quite easy to kill the system (allocate and use 4k zvols, or "pathological" IO patterns) by making ZFS allocate a huge number of small L2ARC blocks that must be tracked in memory (the behaviour you point out here). I don't know how, but there should definitely be some quota somewhere (maybe it cannot really be done without rethinking the memory allocation in ZFS to make it more Linux friendly) to prevent such a pattern.
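
For what it's worth, the existing module parameters only throttle how fast the L2ARC fills, not the total header footprint, so they are not the quota being asked for here (paths assume a stock ZoL module):

cat /sys/module/zfs/parameters/l2arc_write_max   # max bytes written to the L2ARC per feed interval
cat /sys/module/zfs/parameters/l2arc_headroom    # multiplier controlling how far past the ARC tail the feed thread scans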

@jwiegley

I'd like to note that I'm seeing this same speed problem (multi-second pauses in the same types of ioctl calls), but I have no L2ARC configured in the pool. I have 16G of RAM, of which free reports half unused, and yet I still get the same pauses every time I run zpool status. If you'd like me to debug further in this simpler scenario, let me know. The pool is composed of 5 mirrored pairs, ashift=12, with two of the pairs being 3TB drives and the other three pairs being 2TB drives. Running zpool status often takes upwards of 15 seconds.

@cburroughs
Contributor

I looked but could not find one: is there an upstream illumos ticket/discussion on limiting L2ARC header size somewhere?

@behlendorf
Contributor

@cburroughs Not that I'm aware of, but I'm sure they are aware of the issue.

@mailinglists35

mailinglists35 commented Apr 24, 2016

I'm getting similar strace and CPU behaviour to that described in #1420 (comment) (creating a new dataset with zfs create, recursively reading a property with zfs get, mounting/unmounting datasets, etc.).

Hopefully the root cause is the same and I'm not hijacking this thread.

1TB mirrored pool with 50% fragmentation and 94% capacity, ~1k datasets each with tens of snapshots, Core 2 Duo, 4GB RAM, 1GB L2ARC on SSD, default spl/zfs parameters.
zpool status returns instantly.
The difference from that comment is that I get no improvement if I remove the cache device.

/proc/spl/kstat/zfs/dmu_tx
5 1 0x01 11 528 3702255793 278790911145179
name type data
dmu_tx_assigned 4 450357
dmu_tx_delay 4 0
dmu_tx_error 4 0
dmu_tx_suspended 4 0
dmu_tx_group 4 0
dmu_tx_memory_reserve 4 0
dmu_tx_memory_reclaim 4 0
dmu_tx_dirty_throttle 4 0
dmu_tx_dirty_delay 4 44337
dmu_tx_dirty_over_max 4 0
dmu_tx_quota 4 0

/proc/spl/kstat/zfs/arcstats
6 1 0x01 91 4368 3705134230 278831289575797
name type data
hits 4 48898849
misses 4 30851015
demand_data_hits 4 657957
demand_data_misses 4 18896
demand_metadata_hits 4 33375201
demand_metadata_misses 4 30540128
prefetch_data_hits 4 51241
prefetch_data_misses 4 80574
prefetch_metadata_hits 4 14814450
prefetch_metadata_misses 4 211417
mru_hits 4 2740145
mru_ghost_hits 4 114561
mfu_hits 4 32098954
mfu_ghost_hits 4 278355
deleted 4 1180533
mutex_miss 4 6058
evict_skip 4 101397607
evict_not_enough 4 1584496
evict_l2_cached 4 19655065600
evict_l2_eligible 4 7276375552
evict_l2_ineligible 4 5524025344
evict_l2_skip 4 6688
hash_elements 4 222772
hash_elements_max 4 458993
hash_collisions 4 949624
hash_chains 4 35395
hash_chain_max 4 8
p 4 1521930796
c 4 1666021832
c_min 4 33554432
c_max 4 2031970304
size 4 1531454200
hdr_size 4 42136968
data_size 4 0
metadata_size 4 1220733952
other_size 4 256402488
anon_size 4 2508800
anon_evictable_data 4 0
anon_evictable_metadata 4 0
mru_size 4 156073984
mru_evictable_data 4 0
mru_evictable_metadata 4 303104
mru_ghost_size 4 4698112
mru_ghost_evictable_data 4 0
mru_ghost_evictable_metadata 4 4698112
mfu_size 4 1062151168
mfu_evictable_data 4 0
mfu_evictable_metadata 4 995460096
mfu_ghost_size 4 459069440
mfu_ghost_evictable_data 4 0
mfu_ghost_evictable_metadata 4 459069440
l2_hits 4 543010
l2_misses 4 30307963
l2_feeds 4 279023
l2_rw_clash 4 3
l2_read_bytes 4 1981267456
l2_write_bytes 4 3857339904
l2_writes_sent 4 6080
l2_writes_done 4 6080
l2_writes_error 4 0
l2_writes_lock_retry 4 2
l2_evict_lock_retry 4 0
l2_evict_reading 4 0
l2_evict_l1cached 4 20744
l2_free_on_write 4 1775
l2_cdata_free_on_write 4 29
l2_abort_lowmem 4 0
l2_cksum_bad 4 0
l2_io_error 4 0
l2_size 4 5762187264
l2_asize 4 1809572864
l2_hdr_size 4 12180792
l2_compress_successes 4 378819
l2_compress_zeros 4 0
l2_compress_failures 4 31248
memory_throttle_count 4 0
duplicate_buffers 4 0
duplicate_buffers_size 4 0
duplicate_reads 4 0
memory_direct_count 4 327
memory_indirect_count 4 52049
arc_no_grow 4 0
arc_tempreserve 4 0
arc_loaned_bytes 4 0
arc_prune 4 3702130
arc_meta_used 4 1531454200
arc_meta_limit 4 1523977728
arc_meta_max 4 1574230968
arc_meta_min 4 16777216
arc_need_free 4 0
arc_sys_free 4 63496192

@behlendorf
Contributor

Closing. This code has been refactored considerably to reduce the header sizes; in addition, the compressed ARC feature was added to further reduce overhead.
