
Introduce ZFS module parameter l2arc_mfuonly #10710

Merged: 1 commit merged into openzfs:master from the l2arc_mfuonly branch on Sep 8, 2020

Conversation

gamanakis
Contributor

@gamanakis gamanakis commented Aug 13, 2020

Motivation and Context

Closes #10687

Description

In certain workloads it may be beneficial to avoid caching MRU metadata and data into L2ARC. This commit
introduces a new tunable l2arc_mfuonly for this purpose.

How Has This Been Tested?

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Performance enhancement (non-breaking change which improves efficiency)
  • Code cleanup (non-breaking change which makes code smaller or more readable)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Documentation (a change to man pages or other documentation)

Checklist:

  • My code follows the ZFS on Linux code style requirements.
  • I have updated the documentation accordingly.
  • I have read the contributing document.
  • I have added tests to cover my changes.
  • I have run the ZFS Test Suite with this change applied.
  • All commit messages are properly formatted and contain Signed-off-by.

@gamanakis gamanakis force-pushed the l2arc_mfuonly branch 2 times, most recently from b67dc59 to 13a9b4d on August 13, 2020 18:18
module/zfs/arc.c Outdated
* l2arc_mfuonly : A ZFS module parameter that controls whether only MFU
* metadata and data are cached from ARC into L2ARC. This reduces
* the wear of L2ARC devices and may be beneficial in certain
* workloads.
Contributor

Modern SSDs won't have an endurance issue as cache devices. You need a better reason here.
Also, vaguely referring to "certain workloads" is not helpful for anyone deciding whether or not to enable the feature.

How about adding guidance in the man page (missing from this PR!) describing how to observe
your current workload's MFU size, L2ARC feed rate, and eviction rate for eligible blocks, so users
can infer whether MFU-only makes sense for their workload. In other words, if the cache is 2 TiB and your MFU is 100 MiB, then maybe you don't want to set this parameter.
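As a concrete starting point for that kind of guidance, here is a minimal, self-contained sketch (assuming the Linux /proc/spl/kstat/zfs/arcstats interface and the usual mfu_size / mru_size / l2_size / evict_l2_eligible kstat names) that prints the MFU size next to the L2ARC size, so an admin can eyeball whether MFU-only feeding would even matter for a given workload:

```c
/*
 * Illustrative sketch only: compare ARC MFU size against L2ARC size by
 * reading the Linux arcstats kstat file. Kstat names and file layout are
 * assumed from common OpenZFS-on-Linux installs; adjust for your platform.
 */
#include <stdio.h>
#include <string.h>

static unsigned long long get_stat(const char *want)
{
	FILE *f = fopen("/proc/spl/kstat/zfs/arcstats", "r");
	char line[256], name[128];
	unsigned long long val;

	if (f == NULL)
		return (0);
	while (fgets(line, sizeof (line), f) != NULL) {
		if (sscanf(line, "%127s %*u %llu", name, &val) == 2 &&
		    strcmp(name, want) == 0) {
			fclose(f);
			return (val);
		}
	}
	fclose(f);
	return (0);
}

int main(void)
{
	double gib = 1024.0 * 1024.0 * 1024.0;

	printf("mfu_size:          %8.2f GiB\n", get_stat("mfu_size") / gib);
	printf("mru_size:          %8.2f GiB\n", get_stat("mru_size") / gib);
	printf("l2_size:           %8.2f GiB\n", get_stat("l2_size") / gib);
	printf("evict_l2_eligible: %8.2f GiB\n",
	    get_stat("evict_l2_eligible") / gib);
	return (0);
}
```

Sampled a few times during a representative workload, a tiny mfu_size next to a huge l2_size is exactly the 100 MiB-vs-2 TiB situation described above, where setting the parameter probably isn't worth it.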

Contributor Author

This is a great suggestion. The most obvious use case would be a zfs send, so that L2ARC space is not wasted by filling it with MRU data/metadata.

Contributor

@richardelling a very simple workload which is impacted by the current L2ARC behavior is backup (ie: sequential read) of big files (think about a VM disk image): when an L2ARC device is used, the high "churn rate" of ARC MFU means that reading will cause many L2ARC writes, wearing out the device. While it is true that mixed-use enterprise disks have a significant endurance rating, one should not be limited to (relatively) expensive disks for an L2ARC device. For a real-world example, I have a server with 2x striped Intel S4601 480 GB as L2ARC and, when copying virtual machine files, I often literally "burn" one entire drive write per day. While the L2ARC parameters were set to a more aggressive level (ie: increased l2arc_headroom and l2arc_write_max), it seems very wrong to wear out the SSDs when reading a big file.

Contributor

Document your use case and put some numbers on it. For example, I can't easily google the specs on that drive, but an Intel DC S3520 480 GB SSD has an endurance rating of 945TBW. This equates to a full drive write every day for 5.4 years. NB, since we're measuring actual drive writes, you can just use the measured amount of writes. Also note, the warranty is 5 years, so expect to replace in that timeframe. So now you have your use case workload defined. But that workload may or may not be cause for concern. For your described workload with the above specs, the change is not justified. But for other workloads and specs, it might be justified... help out the user here.
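For reference, the arithmetic above as a tiny sketch (the 945 TBW and 0.48 TB/day figures are just the numbers quoted in this comment; substitute your own measured writes):

```c
/*
 * Back-of-the-envelope SSD endurance estimate, using the figures quoted
 * above as an example; substitute your own measured daily write volume.
 */
#include <stdio.h>

int main(void)
{
	double endurance_tbw = 945.0;	/* rated endurance, TB written	*/
	double daily_writes_tb = 0.48;	/* measured writes per day, TB	*/
	double days = endurance_tbw / daily_writes_tb;

	printf("estimated lifetime: %.0f days (%.1f years)\n",
	    days, days / 365.0);
	return (0);
}
```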

Contributor

@shodanshok shodanshok Aug 14, 2020

@richardelling well, if the change only adds a potentially useful option and it is not invasive/difficult to maintain, I think it would be useful. As said above, enterprise SSDs are quite durable. However, for L2ARC it should be reasonable (for some cases at least) to use cheaper, even consumer-grade SSDs which, while having lower write performance/endurance, are often quite fast for reads. Finally, consider that having an L2ARC full of potentially very useful pseudo-MFU data trashed by a sequential copy's MFU data is going to lower system performance. Giving the user the possibility of making an informed choice does not seem bad to me, unless it is a maintenance burden.

But hey - I'm all for a better comment/documentation. @gamanakis the use case regarding send/recv seems a very good example.

@gamanakis
Contributor Author

gamanakis commented Aug 13, 2020 via email

@gamanakis gamanakis changed the title from "Introduce ZFS module parameter l2arc_mfuonly" to "[DRAFT] Introduce ZFS module parameter l2arc_mfuonly" on Aug 13, 2020
@adamdmoss
Contributor

Should I be testing this with l2arc_noprefetch=0 ?

@gamanakis
Contributor Author

gamanakis commented Aug 14, 2020

Should I be testing this with l2arc_noprefetch=0 ?

No. As far as I can tell from the code, l2arc_noprefetch has to do with prefetched/sequential reads being served from L2ARC instead of the disks; it has nothing to do with writing buffers to L2ARC, meaning prefetched/sequential buffers are written to L2ARC independently of l2arc_noprefetch.

On the other hand, l2arc_mfuonly controls writing only MFU buffers to L2ARC, as opposed to MFU+MRU (the default).
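To make that distinction concrete, here is a toy, self-contained model of the write-side decision (purely illustrative, not the actual arc.c code): with l2arc_mfuonly set, buffers coming from the MRU lists are simply never fed to the cache device.

```c
/*
 * Toy model only -- not the OpenZFS implementation. It illustrates the
 * write-side behavior described above: with l2arc_mfuonly=0 both MRU and
 * MFU buffers are eligible for the L2ARC feed; with l2arc_mfuonly=1 only
 * MFU buffers are.
 */
#include <stdbool.h>
#include <stdio.h>

typedef enum { ARC_STATE_MRU, ARC_STATE_MFU } arc_state_model_t;

static bool
l2arc_feed_eligible_model(arc_state_model_t state, bool mfuonly)
{
	/* MRU buffers lose eligibility when mfuonly is set. */
	if (mfuonly && state == ARC_STATE_MRU)
		return (false);
	return (true);
}

int main(void)
{
	printf("mfuonly=0: MRU %d, MFU %d\n",
	    l2arc_feed_eligible_model(ARC_STATE_MRU, false),
	    l2arc_feed_eligible_model(ARC_STATE_MFU, false));
	printf("mfuonly=1: MRU %d, MFU %d\n",
	    l2arc_feed_eligible_model(ARC_STATE_MRU, true),
	    l2arc_feed_eligible_model(ARC_STATE_MFU, true));
	return (0);
}
```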

@codecov

codecov bot commented Aug 14, 2020

Codecov Report

Merging #10710 into master will increase coverage by 0.11%.
The diff coverage is 33.33%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master   #10710      +/-   ##
==========================================
+ Coverage   79.73%   79.85%   +0.11%     
==========================================
  Files         395      394       -1     
  Lines      125225   124660     -565     
==========================================
- Hits        99854    99548     -306     
+ Misses      25371    25112     -259     
Flag Coverage Δ
#kernel 80.40% <33.33%> (+0.02%) ⬆️
#user 66.00% <33.33%> (+0.53%) ⬆️

Flags with carried forward coverage won't be shown.

Impacted Files Coverage Δ
module/zfs/arc.c 89.96% <33.33%> (+0.67%) ⬆️
cmd/zvol_id/zvol_id_main.c 76.31% <0.00%> (-5.27%) ⬇️
module/zfs/vdev_missing.c 60.00% <0.00%> (-3.64%) ⬇️
cmd/ztest/ztest.c 77.99% <0.00%> (-3.20%) ⬇️
module/zfs/zfs_fm.c 84.68% <0.00%> (-2.88%) ⬇️
module/zfs/zio_compress.c 92.30% <0.00%> (-2.24%) ⬇️
module/os/linux/zfs/zpl_super.c 83.96% <0.00%> (-1.13%) ⬇️
module/os/linux/spl/spl-kmem-cache.c 88.10% <0.00%> (-0.91%) ⬇️
module/zfs/vdev_trim.c 95.35% <0.00%> (-0.75%) ⬇️
module/zfs/txg.c 93.96% <0.00%> (-0.73%) ⬇️
... and 106 more

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 75bf636...e6f5298.

@richardelling
Contributor

to be more consistent with the other secondary cache properties, this needs to be a dataset property. However, that is more work, so you might get feedback or test results to see if it makes any significant difference.

@adamdmoss
Contributor

Should I be testing this with l2arc_noprefetch=0 ?

No. As far as I can tell from the code l2arc_noprefetch has to do with prefetched/sequential reads being served from L2ARC instead of the disks. It has nothing to do with writing buffers to L2ARC.

The painfully inconclusive #10464 suggests the opposite (among many other things at various times 😬 ), i.e. #10464 (comment)

If, in your code exploration, you unravel the truth, then it'd be terrific to get the real intended behavior documented once and for all. 😄

@gamanakis
Contributor Author

gamanakis commented Aug 14, 2020

The painfully inconclusive #10464 suggests the opposite (among many other things at various times 😬 ), i.e. #10464 (comment)

I do not think that particular comment suggests the opposite. The code and the comment say the same thing:

"l2arc_noprefetch=1, prefetched reads are not issues to L2ARC device."

l2arc_noprefetch has to do with issuing reads to L2ARC device for prefetched buffers. I cannot find any evidence so far in the code, specifically in l2arc_write_buffers(), that l2arc_noprefetch has something to do with writing buffers to L2ARC.

Edit: This is wrong, see next comment.

@gamanakis
Contributor Author

gamanakis commented Aug 14, 2020

It seems I am wrong:
In arc_read_done() the flag to store the buffer in L2ARC is cleared if this was a prefetch and if l2arc_noprefetch=1.

Let us continue this conversation in #10464.

@shodanshok
Contributor

to be more consistent with the other secondary cache properties, this needs to be a dataset property. However, that is more work, so you might get feedback or test results to see if it makes any significant difference.

@richardelling @gamanakis a dataset property would be great but, as it is much more work, I would suggest going the module parameter route first. Then, if the new mfuonly behavior proves useful in the field, it can eventually be converted to a dataset property. If, on the other hand, it shows no benefit at all, it can be removed without much drama (dropping a dataset property is an entirely different story).

Just my 2 cents.

@adamdmoss
Contributor

(unrelated to my previous comments) - ARC MFU has been broken for a long time and only fixed in the last day or two; has this perhaps been aggravating the original issue which prompted this PR?

Possibly with a re-test it will be revealed that l2arc is now adequately biased towards MFU behavior (because MFU is stickier in ARC too) and a further change is not required.

(I'm not betting on it, but it seems plausible, right?)

@gamanakis
Contributor Author

@adamdmoss yes, I saw the relevant commit. Your suggestion is certainly possible.
I plan on leaving this PR as a draft for the time being. People can test it and report their findings.

Regarding the zfs send example I provided previously: it is worth noting again that everything is workload specific. For example, if ARC=8GB and L2ARC=64GB, then setting l2arc_mfuonly=1 will surely avoid having the L2ARC filled with MRU zfs send meta/data. However, when zfs send completes, the MRU meta/data would still be in ARC, so if the user then sets l2arc_mfuonly=0 the L2ARC will cache some MRU meta/data from the zfs send still sitting in ARC.

All in all, I am not sure if this PR has any real benefit. I provided it because it was simple enough and I wanted to test it myself. For the zfs send case above there may be a better solution, though that is an entirely different matter.

@shodanshok
Contributor

shodanshok commented Aug 14, 2020

(unrelated to my previous comments) - ARC MFU has been broken for a long time and only fixed in the last day or two

@adamdmoss are you sure? This would be quite surprising. Can you share more information? Does the issue only happen with the master branch? EDIT: are you referring to #10548?

Possibly with a re-test it will be revealed that l2arc is now adequately biased towards MFU behavior (because MFU is stickier in ARC too) and a further change is not required.

(I'm not betting on it, but it seems plausible, right?)

Maybe: prefetched reads are always inserted into MRU, which will only grow slowly when MFU is big. However, especially when using aggressive L2ARC feed settings (l2arc_headroom and l2arc_write_max), even a small, quickly-changing MRU will cause significant writes to L2ARC.

@shodanshok
Contributor

All in all, I am not sure if this PR has any real benefit. I provided it because it was simple enough and wanted to test it myself. In terms of the zfs send above there may be a better solution, though this is an entirely different matter.

@gamanakis I think the main benefit is to avoid trashing the L2ARC when copying large files. After all, L2ARC data are used to increase system performance; if a single, albeit big, copy can entirely trash it, performance will decrease.

@adamdmoss
Contributor

(unrelated to my previous comments) - ARC MFU has been broken for a long time and only fixed in the last day or two

@adamdmoss are you sure? This would be quite surprising. Can you share more information? Does the issue only happen with the master branch? EDIT: are you referring to #10548?

The issue happens with all releases since mid-2016 and is fixed only since e111c80
(And yes it's surprising, but there's no formal automated ARC performance suite.)

Possibly with a re-test it will be revealed that l2arc is now adequately biased towards MFU behavior (because MFU is stickier in ARC too) and a further change is not required.
(I'm not betting on it, but it seems plausible, right?)

Maybe: prefetched reads are always inserted into MRU, which will only grow slowly when MFU is big. However, especially when using aggressive L2ARC feed settings (l2arc_headroom and l2arc_write_max), even a small, quickly-changing MRU will cause significant writes to L2ARC.

I expect you're right.

@adamdmoss
Contributor

All in all, I am not sure if this PR has any real benefit. I provided it because it was simple enough and wanted to test it myself. In terms of the zfs send above there may be a better solution, though this is an entirely different matter.

Oh, I didn't want to imply that - I actually rather like the thinking behind the PR, but the chance has grown to non-zero that the behavior this PR addresses is now handled in another way. Still not a huge probability, if I were to guess.

@adamdmoss
Contributor

@gamanakis I think the main benefit is to avoid trashing the L2ARC when copying large files. After all, L2ARC data are used to increase system performance; if a single, albeit big, copy can entirely trash it, performance will decrease.

Again I like - and actually want - this PR, so I'm not casting shade on it; I wanted to note that the L2ARC caches writes as well as reads, so it's doubling the rate of unnecessary L2ARC churn when copying big file(s) between two l2arc-backed pools when you don't expect to read those files again soon. (It's semi-trivial to fix this behavior or make it optional - I have toy patches.)

@behlendorf behlendorf added the Status: Work in Progress (not yet ready for general review) label on Aug 17, 2020
@allanjude
Contributor

I feel like the policy control for L2ARC might make more sense as a per-dataset property.

Maybe extending 'secondarycache' from the values of 'all, metadata, none', to be 'all, mfuonly, metadata, none' or something, would make more sense to the user.

@shodanshok
Contributor

shodanshok commented Aug 18, 2020

I feel like the policy control for L2ARC might make more sense as a per-dataset property.

Maybe extending 'secondarycache' from the values of 'all, metadata, none', to be 'all, mfuonly, metadata, none' or something, would make more sense to the user.

@allanjude sure, a dataset property would be great. However, it requires more work and becomes difficult to drop if, for some reason, we want to get rid of that behavior.

@gamanakis any thoughts on that?

@adamdmoss
Contributor

I feel like the policy control for L2ARC might make more sense as a per-dataset property.

Maybe extending 'secondarycache' from the values of 'all, metadata, none', to be 'all, mfuonly, metadata, none' or something, would make more sense to the user.

I agree, though IMHO a module property is okay for a coarse first pass. (Though a valid counterargument is that things which are better as dataset properties but start as module properties don't often seem to graduate to dataset properties in practice. 😁 )

@allanjude
Contributor

I feel like the policy control for L2ARC might make more sense as a per-dataset property.
Maybe extending 'secondarycache' from the values of 'all, metadata, none', to be 'all, mfuonly, metadata, none' or something, would make more sense to the user.

I agree, though IMHO a module property is okay for a coarse first pass. (Though a valid counterargument is that things which are better as dataset properties but start as module properties, don't often seem to graduate to dataset properties in practice. grin )

There is a counterargument that, since this isn't an on-disk change, adding it as a property presents more backwards-compatibility issues than using a module parameter. This is something we may be able to find time to talk about on the leadership call later today.

@gamanakis
Contributor Author

I appreciate all your feedback. I just emailed @ahrens to include this as a discussion topic in the leadership meeting. If time does not permit today, perhaps in the next one.

@richardelling
Contributor

There is a case here where the actual size of the MFU will decline because we're not caching MRU in L2ARC. Basically, you can't be in MFU until you are in MRU and get hit again (later). Current behaviour changes a block to MFU if it was in MRU, ghost MRU, or L2.

NB, contrary to the name, MFU really means "we re-hit the block some time after it was first hit", where "some time" is intended to be 62 ms.
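A toy model of the promotion rule described here (purely illustrative; the real arc_access() logic has more cases than this): a block sitting in MRU is promoted to MFU only if it is hit again at least ~62 ms after the previous hit.

```c
/*
 * Illustrative model of the MRU -> MFU promotion rule described above,
 * not the actual arc_access() code: a second hit promotes the block only
 * if it arrives at least ~62 ms after the previous one.
 */
#include <stdio.h>

#define ARC_MINTIME_MS	62	/* re-hit window quoted in the comment above */

typedef enum { MODEL_MRU, MODEL_MFU } model_state_t;

static model_state_t
model_access(model_state_t state, long long last_hit_ms, long long now_ms)
{
	if (state == MODEL_MRU && (now_ms - last_hit_ms) >= ARC_MINTIME_MS)
		return (MODEL_MFU);
	return (state);
}

int main(void)
{
	/* Re-hit after 10 ms: still MRU.  Re-hit after 100 ms: MFU. */
	printf("hit at +10ms  -> %s\n",
	    model_access(MODEL_MRU, 0, 10) == MODEL_MFU ? "MFU" : "MRU");
	printf("hit at +100ms -> %s\n",
	    model_access(MODEL_MRU, 0, 100) == MODEL_MFU ? "MFU" : "MRU");
	return (0);
}
```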

@adamdmoss
Contributor

(IMHO the ARC state change straight to MFU simply because of an L2ARC hit - which may be from an ancient add to L2ARC, especially now that L2ARC is persistent - isn't a terrifically valuable heuristic anyway.)

@shodanshok
Contributor

@gamanakis @ahrens thanks for talking about this! I am all for better observability of what is cached in L2ARC, but I find that orthogonal to this PR. For example, we already have the l2arc_noprefetch tunable which changes L2ARC eligibility. Providing the l2arc_mfuonly tunable and disabling it by default would not cause any issue with current workloads, but it would be useful for workloads where the admin knows (by measuring with iostat or zpool iostat) that L2ARC is trashed by MRU eviction. Not trashing the L2ARC is even more valuable now that we have persistent L2ARC support. As explained above, a very common scenario where L2ARC is constantly trashed is copying big files (ie: virtual disk images), where even large L2ARC devices can be "wiped" by such a trivial copy.

For the record, I measured this exact scenario at one of my customers with a (small) hyperconverged setup: the nightly backup basically trashes any useful data that was cached in L2ARC.

Am I missing something? Thanks.

@gamanakis
Contributor Author

@shodanshok I believe the point is to have direct kstats instead of relying purely on live monitoring with iostat. For the whole-zpool case this should be relatively easy to do; I have started working on it. The per-dataset case would require more work.

@ahrens
Member

ahrens commented Aug 19, 2020

@shodanshok I'm fine with keeping the current behavior by default but adding a new tunable to not feed hit-once data to the L2ARC. With additional data/testing, I'd be OK with changing the default as well, if that makes sense.

@richardelling
Contributor

Per-dataset stats don't actually work, due to dedup. Similarly, arcstats are system-wide, not per-pool. But an arcstat is the logical place to put this, and it is likely OK for now because it is still possible to discern what is happening in a test environment (one pool with L2).

@gamanakis
Contributor Author

I created draft PR #10743 which introduces L2ARC arcstats according to buffer content type and MFU/MRU status upon caching in L2ARC.

@richardm1

For the record, I measured this exact scenario at one of my customers with a (small) hyperconverged setup: the nightly backup basically trashes any useful data that was cached in L2ARC.

I can speak to another use case: Storage vMotion. I'm using ZFS exclusively for block storage in a vSphere environment and I can attest that Storage vMotion will "ruin" L2ARC contents. At this very moment I'm sitting here in the middle of a large Storage vMotion, watching an l2arc_write_max-sized chunk of data get uselessly stuffed into my L2ARC every second, displacing hard-won random reads that were beneficial to performance. I firmly believe that limiting L2ARC to MFU will improve my hit rate at the end of the day and I look forward to trying this out.
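To put a rough number on that "chunk every second", here is a small sketch of the worst-case feed volume, assuming the commonly cited defaults of l2arc_write_max = 8 MiB and a 1-second feed interval (substitute your own tunable values):

```c
/*
 * Rough upper bound on L2ARC feed writes, assuming the commonly cited
 * defaults (l2arc_write_max = 8 MiB, l2arc_feed_secs = 1); substitute
 * your own tunable values.
 */
#include <stdio.h>

int main(void)
{
	double write_max_mib = 8.0;	/* l2arc_write_max, MiB		*/
	double feed_secs = 1.0;		/* l2arc_feed_secs, seconds	*/
	double per_day_gib = write_max_mib / feed_secs * 86400.0 / 1024.0;

	printf("worst-case feed: %.0f GiB/day (%.2f TiB/day)\n",
	    per_day_gib, per_day_gib / 1024.0);
	return (0);
}
```

Against a 480 GB cache device that is more than one full drive write per day whenever the feed stays saturated, which lines up with the wear reported earlier in this thread.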

@shodanshok
Contributor

@richardm1 yeah, basically any big file copy is going to trash the L2ARC

@richardelling
Contributor

@richardm1 yeah, basically any big file copy is going to trash the L2ARC

iff the file being copied is not currently in ARC/L2 and is bigger than L2.

Storage vMotion has other pathologies, too. We always set secondarycache=none for Storage vMotioned zvols. This worked for us because we had VMware API integration.

@shodanshok
Contributor

shodanshok commented Aug 24, 2020

iff the file being copied is not currently in ARC/L2 and is bigger than L2.

@richardelling Sure, but this is the norm for big file (ie: TB-sized) copies. Moreover, even files smaller than the L2ARC can cause significant L2ARC trashing if the copied file is bigger than ARC.

Quoting @richardm1, the key issue is:

watching an l2arc_write_max-sized chunk of data get uselessly stuffed into my L2ARC every second, displacing hard-won random reads that were beneficial to performance

This basically negates a big chunk of L2ARC performance when the workload includes copying big files around. A tunable to control this behavior can be very useful (it may even be beneficial to change the default value, letting L2ARC cache only MFU blocks, but that is clearly a harder decision to make).

@richardelling
Contributor

Also know that if an MFU block is still in ARC and it gets evicted from L2, then it will be re-loaded into L2 by the feed thread. This is why we need information on what gets evicted from ARC that is MRU or MFU and eligible for L2.

@shodanshok
Contributor

shodanshok commented Sep 4, 2020

@gamanakis @ahrens Any chance of seeing this patch (with no changes to the default behavior) merged for the OpenZFS 2.0 release?

@behlendorf
Contributor

I'd be fine with adding this module option as long as the default behavior is unchanged and we add it to the module option man page. It's small enough that we could also backport it to the OpenZFS 2.0 release branch.

@gamanakis
Contributor Author

@shodanshok I have put all my effort into #10743, which will give better observability of what resides in L2ARC.
I will find some time to work on this PR, update the man pages, and rebase it.

In certain workloads it may be beneficial to reduce wear of L2ARC
devices by not caching MRU metadata and data into L2ARC. This commit
introduces a new tunable l2arc_mfuonly for this purpose.

Signed-off-by: George Amanakis <gamanakis@gmail.com>
@gamanakis
Contributor Author

I could add a test after #10743 gets merged.

@gamanakis gamanakis marked this pull request as ready for review September 6, 2020 01:52
@gamanakis gamanakis changed the title from "[DRAFT] Introduce ZFS module parameter l2arc_mfuonly" to "Introduce ZFS module parameter l2arc_mfuonly" on Sep 6, 2020
@behlendorf behlendorf added the Status: Accepted (ready to integrate: reviewed, tested) label and removed the Status: Work in Progress (not yet ready for general review) label on Sep 8, 2020
@behlendorf
Copy link
Contributor

@gamanakis following up with a test case would be great. I don't think it'll take us long to get #10743 merged.

@behlendorf behlendorf merged commit feb3a7e into openzfs:master Sep 8, 2020
behlendorf pushed a commit that referenced this pull request Sep 9, 2020
In certain workloads it may be beneficial to reduce wear of L2ARC
devices by not caching MRU metadata and data into L2ARC. This commit
introduces a new tunable l2arc_mfuonly for this purpose.

Reviewed-by: Matthew Ahrens <mahrens@delphix.com>
Reviewed-by: Richard Elling <Richard.Elling@RichardElling.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: George Amanakis <gamanakis@gmail.com>
Closes #10710
@zfsuser

zfsuser commented Sep 17, 2020

To my understanding, this improvement (if enabled) makes MRU buffers ineligible for L2 caching.

Is this already reflected in the zfs statistics, e.g.

  • arcstat_evict_l2_eligible
  • arcstat_evict_l2_ineligible
  • arcstat_evict_l2_eligible_mfu
  • arcstat_evict_l2_eligible_mru

If not, does it require updating "#define DBUF_IS_L2CACHEABLE(_db)" in dbuf.h?

@gamanakis
Contributor Author

gamanakis commented Sep 17, 2020 via email

@gamanakis
Contributor Author

I created #10945 to clarify the current behaviour in the man page of l2arc_mfuonly.

@zfsuser

zfsuser commented Sep 20, 2020

@gamanakis : Thank you for the explanation and for documenting it.

jsai20 pushed a commit to jsai20/zfs that referenced this pull request Mar 30, 2021
In certain workloads it may be beneficial to reduce wear of L2ARC
devices by not caching MRU metadata and data into L2ARC. This commit
introduces a new tunable l2arc_mfuonly for this purpose.

Reviewed-by: Matthew Ahrens <mahrens@delphix.com>
Reviewed-by: Richard Elling <Richard.Elling@RichardElling.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: George Amanakis <gamanakis@gmail.com>
Closes openzfs#10710
sempervictus pushed a commit to sempervictus/zfs that referenced this pull request May 31, 2021
In certain workloads it may be beneficial to reduce wear of L2ARC
devices by not caching MRU metadata and data into L2ARC. This commit
introduces a new tunable l2arc_mfuonly for this purpose.

Reviewed-by: Matthew Ahrens <mahrens@delphix.com>
Reviewed-by: Richard Elling <Richard.Elling@RichardElling.com>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: George Amanakis <gamanakis@gmail.com>
Closes openzfs#10710
Labels
Status: Accepted (ready to integrate: reviewed, tested)

Development

Successfully merging this pull request may close these issues:

  • Revisit L2ARC functionality

9 participants